
Introduction to Emerging Technologies

Chapter Two

Data Science

1
Overview of Data Science
• Data science is a multi-disciplinary field that uses scientific
methods, processes, algorithms, and systems to extract
knowledge and insights from structured, semi-structured and
unstructured data.

• Data science is much more than simply analyzing data. It offers a
range of roles and requires a range of skills.

• As an academic discipline and profession, data science continues to
evolve as one of the most promising and in-demand career paths for
skilled professionals.
2
Data and Information
• Data can be defined as a representation of facts, concepts, or
instructions in a formalized manner, which should be suitable
for communication, interpretation, or processing by humans
or electronic machines.

• It can be described as unprocessed facts and figures.

• It is represented with the help of characters such as letters
(A-Z, a-z), digits (0-9), or special characters (+, -, /, *, <, >, =,
etc.).

3
Data and Information
• Information is the processed data on which decisions and
actions are based.

• It is data that has been processed into a form that is meaningful
to the recipient and is of real or perceived value in the current or
prospective actions or decisions of the recipient.

• Furthermore, information is interpreted data; it is created from
organized, structured, and processed data in a particular context.
4
Data Processing Cycle
• Data processing is the re-structuring or re-ordering of data by
people or machines to increase its usefulness and add value
for a particular purpose.

• Data processing consists of three basic steps: input, processing,
and output. Together, these steps constitute the data processing cycle.

5
Data Processing Cycle
• Input − in this step, the input data is prepared in some convenient form
for processing. The form will depend on the processing machine.

For example: data may be supplied on a hard disk, CD, flash disk, and so on.

• Processing − in this step, the input data is changed to produce data in a
more useful form.

For example, interest can be calculated on a deposit to a bank, or a
summary of sales for the month can be calculated from the sales orders.

• Output − at this stage, the result of the preceding processing step is
collected. The particular form of the output data depends on the use of
the data.

For example, the output data may be the payroll for employees.
6
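To make the three steps concrete, here is a minimal Python sketch of one pass through the cycle, based on the bank-interest example above; the deposit amounts and interest rate are made-up values for illustration only.

```python
# Minimal sketch of the data processing cycle: input -> processing -> output.
# The deposit amounts and the interest rate are illustrative values only.

def read_input():
    # Input step: data is prepared in a convenient form for processing
    # (here, a simple list standing in for data read from a disk or file).
    return [("alice", 1000.00), ("bob", 2500.00)]

def process(deposits, annual_rate=0.05):
    # Processing step: transform the raw data into a more useful form,
    # e.g. calculate interest on each bank deposit.
    return [(name, amount, amount * annual_rate) for name, amount in deposits]

def write_output(rows):
    # Output step: collect the result of the processing step in the form
    # the user needs (here, a printed summary).
    for name, amount, interest in rows:
        print(f"{name}: deposit={amount:.2f}, interest={interest:.2f}")

write_output(process(read_input()))
```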
Data types and their representation
Data types from Computer programming perspective
• Almost all programming languages explicitly include the notion of data type,
though different languages may use different terminology. Common data
types include the following (a short sketch in Python follows this list):
• Integers (int) − used to store whole numbers, mathematically known as
integers
• Booleans (bool) − used to represent values restricted to one of two options:
true or false
• Characters (char) − used to store a single character
• Floating-point numbers (float) − used to store real numbers
• Alphanumeric strings (string) − used to store a combination of characters
and numbers
7
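As a quick illustration of the list above, the snippet below declares one value of each common type; Python is used only as a convenient example language (it has no separate char type, so a one-character string stands in).

```python
# One value for each of the common data types listed above.
count: int = 42              # integer: a whole number
is_valid: bool = True        # boolean: restricted to True or False
grade: str = "A"             # character: a single character (Python uses a 1-char string)
price: float = 19.99         # floating-point number: a real number
user_id: str = "user-0042"   # alphanumeric string: letters and digits combined

print(type(count), type(is_valid), type(grade), type(price), type(user_id))
```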
Data types and their representation
Data types from Data Analytics Perspective
• From a data analytics point of view, there are three common types of
data structures: structured, semi-structured, and unstructured data.

• Structured data: data that adheres to a pre-defined data model
and is therefore straightforward to analyze.

• Structured data conforms to a tabular format with a relationship
between the different rows and columns.

• Common examples of structured data are Excel files or SQL
databases. Each of these has structured rows and columns that can be
sorted.
8
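As a hedged illustration of structured data, the sketch below uses Python's built-in sqlite3 module to build a small table whose rows and columns can be sorted and queried; the table name and sample rows are invented for the example.

```python
import sqlite3

# Structured data: rows and columns with a pre-defined schema, as in an SQL database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, product TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(1, "laptop", 1200.0), (2, "phone", 650.0), (3, "tablet", 300.0)],
)

# Because the data conforms to a tabular model, it is straightforward to sort and analyze.
for row in conn.execute("SELECT product, amount FROM sales ORDER BY amount DESC"):
    print(row)
```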
Data types and their representation
Cont.
• Semi-structured data: a form of structured data that does not
conform to the formal structure of data models associated with
relational databases or other forms of data tables, but that nonetheless
contains tags or other markers to separate semantic elements and
enforce hierarchies of records and fields within the data.

• Therefore, it is also known as a self-describing structure.

• JSON and XML are common examples of semi-structured data.

9
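For instance, a small JSON document is self-describing: the keys act as tags for the semantic elements and the nesting expresses the hierarchy of records. The sketch below parses such a record with Python's standard json module; the field names and values are illustrative.

```python
import json

# Semi-structured data: no fixed relational schema, but keys describe the fields
# and nesting expresses the hierarchy of records.
record = """
{
  "name": "Abebe",
  "email": "abebe@example.com",
  "orders": [
    {"item": "book", "qty": 2},
    {"item": "pen",  "qty": 10}
  ]
}
"""

data = json.loads(record)
print(data["name"], "ordered", len(data["orders"]), "kinds of items")
```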
Data types and their representation
Cont.
• Unstructured data: data that either does not have a predefined
data model or is not organized in a pre-defined manner.

• Unstructured data is typically text-heavy but may also contain data such as
dates, numbers, and facts. This results in irregularities and
ambiguities that make it difficult to understand using traditional
programs, compared to data stored in structured databases.

• Common examples of unstructured data include audio files, video files, and
NoSQL databases.

10
Data types and their representation
Metadata
• From a technical point of view, this is not a separate data structure,
but it is one of the most important elements for Big Data analysis and
big data solutions.

• Metadata is data about data. It provides additional information
about a specific set of data.

• In a set of photographs, for example, metadata could describe when
and where the photos were taken. The metadata then provides fields
for dates and locations which, by themselves, can be considered
structured data. For this reason, metadata is frequently used
by Big Data solutions for initial analysis.
11
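Following the photo example above, here is a minimal sketch of metadata as "data about data": the dictionary holds illustrative date and location fields describing a photo file, and those fields themselves are structured data that could be analyzed first.

```python
# Metadata: data about data. The photo file itself is unstructured (binary pixels),
# but the fields describing it are structured and easy to analyze.
photo_metadata = {
    "file": "IMG_0001.jpg",          # illustrative file name
    "taken_at": "2019-10-20T09:30",  # when the photo was taken
    "location": "Addis Ababa",       # where the photo was taken
    "camera": "Phone camera",
}

print(f"{photo_metadata['file']} was taken on {photo_metadata['taken_at']} "
      f"in {photo_metadata['location']}")
```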
Data types and their representation

12
Big Data
• Big data is the term for a collection of data sets so large and
complex that it becomes difficult to process using on-hand
database management tools or traditional data processing
applications.

• Big data is characterized by 4 Vs and more:

Volume: large amounts of data (zettabytes / massive datasets)
Velocity: data is live streaming or in motion
Variety: data comes in many different forms from diverse sources
Veracity: can we trust the data? How accurate is it? etc.
13
Big Data

14
Big Data

19
Data Value Chain
• The Data Value Chain is introduced to describe the information flow
within a big data system as a series of steps needed to generate value
and useful insights from data.

• The Big Data Value Chain identifies the following key high-level
activities:
• Data Acquisition
• Data Analysis
• Data Curation
• Data Storage
• Data Usage
20
Data Value Chain
Data Acquisition
• It is the process of gathering, filtering, and cleaning data before
it is put in a data warehouse or any other storage solution on
which data analysis can be carried out.

• Data acquisition is one of the major big data challenges in terms
of infrastructure requirements. The infrastructure required to
support the acquisition of big data must deliver low, predictable
latency both in capturing data and in executing queries; be able
to handle very high transaction volumes, often in a distributed
environment; and support flexible and dynamic data structures.
21
Data Value Chain
Data Analysis
• It is concerned with making the raw data acquired amenable to
use in decision-making as well as domain-specific usage.

• Data analysis involves exploring, transforming, and modeling
data with the goal of highlighting relevant data, synthesizing
and extracting useful hidden information with high potential
from a business point of view.

• Related areas include data mining, business intelligence, and
machine learning.
22
Data Value Chain
Data Curation
• It is the active management of data over its life cycle to ensure it
meets the necessary data quality requirements for its effective usage.

• Data curation processes can be categorized into different activities
such as content creation, selection, classification, transformation,
validation, and preservation.

• Data curation is performed by expert curators who are responsible for
improving the accessibility and quality of data.

• Data curators (also known as scientific curators or data annotators)
hold the responsibility of ensuring that data are trustworthy,
discoverable, accessible, reusable, and fit for their purpose.
23
Data Value Chain
Data Storage
• It is the persistence and management of data in a scalable way that
satisfies the needs of applications that require fast access to the data.

• RDBMS have been the main, and almost unique, solution to the
storage paradigm for nearly 40 years. However, the ACID (Atomicity,
Consistency, Isolation, and Durability) properties that guarantee
database transactions lack flexibility with regard to schema changes,
and their performance and fault tolerance suffer when data volumes and
complexity grow, making them unsuitable for big data scenarios.

• NoSQL technologies have been designed with the scalability goal in
mind and present a wide range of solutions based on alternative data
models.
24
Data Value Chain
Data Usage
• It covers the data-driven business activities that need access to data,
its analysis, and the tools needed to integrate the data analysis within
the business activity.

• Data usage in business decision-making can enhance competitiveness
through the reduction of costs, increased added value, or any other
parameter that can be measured against existing performance criteria.

25
Data Value Chain
Big Data Value Chain

26
Clustered Computing

• Because of the qualities of big data, individual
computers are often inadequate for handling the data at
most stages.

• To better address the high storage and computational
needs of big data, computer clusters are a better fit.

• Big data clustering software combines the resources of
many smaller machines, seeking to provide a number
of benefits:
27
Clustered Computing
• Resource Pooling: Combining the available storage space to hold data is a
clear benefit, but CPU and memory pooling are also extremely important.
Processing large datasets requires large amounts of all three of these
resources.

• High Availability: Clusters can provide varying levels of fault tolerance and
availability guarantees to prevent hardware or software failures from
affecting access to data and processing. This becomes increasingly important
as we continue to emphasize the importance of real-time analytics.

• Easy Scalability: Clusters make it easy to scale horizontally by adding
additional machines to the group. This means the system can react to changes
in resource requirements without expanding the physical resources on any one
machine.
28
Clustered Computing
• Using clusters requires a solution for managing cluster membership,
coordinating resource sharing, and scheduling actual work on
individual nodes.

• Cluster membership and resource allocation can be handled by
software like Hadoop’s YARN (which stands for Yet Another
Resource Negotiator).

• The machines involved in the computing cluster are also typically
involved with the management of a distributed storage system.

29
Hadoop and its Ecosystem

• Hadoop is an open-source framework intended to make
interaction with big data easier.

• It is a framework that allows for the distributed processing of
large datasets across clusters of computers using simple
programming models.

• It is inspired by a technical document published by Google.

30
Hadoop and its Ecosystem
The four key characteristics of Hadoop are:

• Economical: Its systems are highly economical, as ordinary
computers can be used for data processing.

• Reliable: It is reliable, as it stores copies of the data on different
machines and is resistant to hardware failure.

• Scalable: It is easily scalable, both horizontally and vertically. A
few extra nodes help in scaling up the framework.

• Flexible: It is flexible, so you can store as much structured and
unstructured data as you need and decide how to use it later.
31
Hadoop and its Ecosystem
• Hadoop has an ecosystem that has evolved from its four core
components: data management, access, processing, and storage.
• It is continuously growing to meet the needs of Big Data. It
comprises the following components and many others:
• HDFS: Hadoop Distributed File System
• HBase: NoSQL Database
• YARN: Yet Another Resource Negotiator
• MapReduce: Programming based Data Processing
• Spark: In-Memory data processing
• PIG, HIVE: Query-based processing of data services
• Mahout, Spark MLLib: Machine Learning algorithm libraries
• Solr, Lucene: Searching and indexing
• Zookeeper: Cluster management
• Oozie: Job scheduling
32
Hadoop and its Ecosystem

33
Hadoop and its Ecosystem

• HDFS: Hadoop Distributed File System

• HDFS is the primary storage system of Hadoop.

34
Hadoop and its Ecosystem

• MapReduce:

• It is the core Hadoop ecosystem component that provides data
processing.
• MapReduce is a software framework for easily writing applications
that process vast amounts of structured and unstructured data
stored in the Hadoop Distributed File System.
• MapReduce programs are parallel in nature and thus are very useful
for performing large-scale data analysis using multiple machines in
the cluster. This parallel processing improves the speed and
reliability of the cluster.
35
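To show the idea behind MapReduce (production jobs are normally written against Hadoop's Java API or Hadoop Streaming), here is a minimal pure-Python sketch of the map, shuffle, and reduce phases for the classic word-count problem; the input lines are invented for the example.

```python
from collections import defaultdict

# Map phase: each input record (a line of text) is turned into (key, value) pairs.
def map_phase(line):
    return [(word.lower(), 1) for word in line.split()]

# Shuffle phase: pairs are grouped by key so each reducer sees all values for one key.
def shuffle(pairs):
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

# Reduce phase: the values for each key are combined into the final result.
def reduce_phase(key, values):
    return key, sum(values)

lines = ["hadoop stores big data", "mapreduce processes big data"]
mapped = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)   # e.g. {'big': 2, 'data': 2, 'hadoop': 1, ...}
```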
Hadoop and its Ecosystem

• MapReduce:

36
Hadoop and its Ecosystem
• YARN: Yet Another Resource Negotiator
• YARN is one of the most important components of the Hadoop
ecosystem; it provides resource management.
• YARN is called the operating system of Hadoop, as it is
responsible for managing and monitoring workloads.

37
Hadoop and its Ecosystem

38
Hadoop and its Ecosystem
• HIVE:
• Apache Hive is an open-source data warehouse system for querying
and analyzing large datasets stored in Hadoop files.
• Hive uses a language called HiveQL (HQL), which is similar to SQL.
HiveQL automatically translates SQL-like queries into MapReduce
jobs that are executed on Hadoop.

39
Hadoop and its Ecosystem
• PIG:

• Apache Pig is a high-level language platform for analyzing and
querying huge datasets that are stored in HDFS.

40
Hadoop and its Ecosystem

41
Hadoop and its Ecosystem

42
Hadoop and its Ecosystem

• Zookeeper: Apache Zookeeper is a centralized service for maintaining
configuration information, naming, providing distributed synchronization,
and providing group services.
• Zookeeper manages and coordinates a large cluster of machines.

43
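As an illustration only, the sketch below uses kazoo, a third-party Python client for ZooKeeper (not part of Hadoop itself), to store and read back a small piece of shared configuration; the host address and znode path are assumptions made for the example.

```python
from kazoo.client import KazooClient

# Connect to a ZooKeeper ensemble (the address is an assumption for this example).
zk = KazooClient(hosts="zk-host:2181")
zk.start()

# Store a small piece of shared configuration under a znode path.
if not zk.exists("/app/config/batch_size"):
    zk.create("/app/config/batch_size", b"128", makepath=True)

# Any machine in the cluster can read the same configuration back.
value, stat = zk.get("/app/config/batch_size")
print("batch_size =", value.decode())

zk.stop()
```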
Big Data Life Cycle with Hadoop
Ingesting data into the system

• The first stage of Big Data processing is Ingest. The data is ingested or
transferred to Hadoop from various sources such as relational database
systems or local files. Sqoop transfers data from RDBMS to HDFS,
whereas Flume transfers event data.

Processing the data in storage

• The second stage is Processing. In this stage, the data is stored and
processed. The data is stored in the distributed file system, HDFS, and
in the NoSQL distributed database, HBase.

• Spark and MapReduce perform data processing.


44
Big Data Life Cycle with Hadoop
Computing and analyzing data
• The third stage is to Analyze. Here, the data is analyzed by processing
frameworks such as Pig, Hive, and Impala.

• Pig converts the data using map and reduce operations and then analyzes it.

• Hive is also based on map and reduce programming and is most
suitable for structured data.

Visualizing the results


• The fourth stage is Access, which is performed by tools such as Hue and
Cloudera Search. In this stage, the analyzed data can be accessed by
users.
45
