Chapter 1: Introduction to Data Science

Short Answers
1. Define data science: Data science is an interdisciplinary field that uses scientific methods, processes,
algorithms, and systems to extract knowledge and insights from structured and unstructured data.

2. What is the role of data science?: The role of data science is to analyze and interpret complex data to
help organizations make informed decisions, predict trends, and optimize processes.

3. Define unstructured data: Unstructured data refers to information that does not have a predefined data
model or is not organized in a predefined manner. Examples include text, images, and videos.

4. Define data source: A data source is the origin from which data is obtained. It can be a database, a file,
an API, or any other repository of information.

5. What is a data set?: A data set is a collection of related data points, typically organized in a table, where
each column represents a variable and each row represents a record.

6. What is CSV format?: CSV (Comma-Separated Values) format is a simple file format used to store
tabular data, where each line of the file is a data record, and each record consists of fields separated by
commas.

7. What are uses of Zip files?: Zip files are used to compress one or more files into a single file, reducing
the overall size and making it easier to transport or store.

8. What is open data?: Open data is data that is freely available for anyone to use, modify, and share
without restrictions.

9. What is meant by semi-structured data?: Semi-structured data is a form of data that does not conform
to a rigid structure like structured data but has some organizational properties, such as tags or markers, to
separate data elements. Examples include JSON and XML files.
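As a small illustration, here is a hedged Python sketch that parses a made-up JSON record; the field names are hypothetical:

Python Code:
import json

record = '{"name": "Asha", "age": 21, "interests": ["music", "cricket"]}'
parsed = json.loads(record)  # keys act as tags organizing the data without a rigid schema
print(parsed["name"], parsed["interests"])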

10. What are rasterized and vectorized image files?: Rasterized images are made up of pixels, each with a
specific color value, while vectorized images are composed of paths defined by mathematical
expressions, allowing them to be scaled without losing quality.

11. What is SAS?: SAS (Statistical Analysis System) is a software suite used for advanced analytics,
business intelligence, data management, and predictive analytics.

12. List applications of data science: Applications of data science include fraud detection, recommendation
systems, healthcare analytics, customer segmentation, predictive maintenance, and more.

13. What is social media data?: Social media data refers to the information generated by users on social
media platforms, including posts, comments, likes, shares, and interactions.

14. List tools for data scientists: Tools for data scientists include Python, R, SQL, Hadoop, Spark, Tableau,
Jupyter Notebooks, and TensorFlow.

15. What are the 3V’s?: The 3V’s refer to Volume, Velocity, and Variety, which are the three dimensions that
characterize big data.

16. What is data science life cycle?: The data science life cycle is a process that includes stages such as
data collection, data cleaning, data exploration, data modeling, and deployment.

17. Give the use of tar files: Tar files are used to archive multiple files into a single file, often used in Unix
and Linux systems for backup and distribution.
18. What is compressed data?: Compressed data is data that has been reduced in size using various
algorithms to save storage space and transmission time.

19. What is data source?: A data source is the origin from which data is obtained. It can be a database, a
file, an API, or any other repository of information.

20. List different file formats: Different file formats include CSV, JSON, XML, Excel, SQL, Parquet, and
Avro.

Long Answers
1. What is data science? Why is learning data science important? Explain in detail.

Data science is an interdisciplinary field that combines domain expertise, programming skills, and
knowledge of mathematics and statistics to extract meaningful insights from data. It involves several
processes, including data collection, data cleaning, data analysis, and data visualization.

Learning data science is important because it enables individuals and organizations to make data-driven
decisions, uncover hidden patterns, and predict future trends. In today’s data-rich world, the ability to
analyze and interpret data is crucial for gaining a competitive edge, improving efficiency, and driving
innovation.

2. Explain the 3V’s of data science with a diagram.

The 3V’s of data science are:

Volume: Refers to the vast amount of data generated every second.
Velocity: Refers to the speed at which new data is generated and processed.
Variety: Refers to the different types of data (structured, semi-structured, unstructured).

3V's of Data Science (Note: Replace with an actual diagram)

3. What are the components of a data scientist’s toolbox? Explain two of them.

Components of a data scientist’s toolbox include:

Programming Languages: Python, R
Data Manipulation Tools: Pandas, dplyr
Data Visualization Tools: Matplotlib, ggplot2
Machine Learning Libraries: Scikit-learn, TensorFlow
Big Data Tools: Hadoop, Spark

Python: A versatile programming language widely used in data science for data manipulation, analysis,
and machine learning.

Pandas: A powerful data manipulation and analysis library for Python, providing data structures like
DataFrames for handling structured data.
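As a brief, hedged illustration of these two tools together (the values below are made-up sample data):

Python Code:
import pandas as pd

df = pd.DataFrame({"name": ["A", "B", "C"], "score": [85, 92, 78]})  # sample data
print(df.describe())  # quick summary statistics of the numeric column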

4. What are the applications of data science?

Applications of data science include:

Healthcare: Predicting disease outbreaks, personalized medicine
Finance: Fraud detection, risk management
Retail: Customer segmentation, recommendation systems
Manufacturing: Predictive maintenance, quality control
Marketing: Sentiment analysis, campaign optimization

5. With the help of a diagram, describe the lifecycle of data science.

The data science lifecycle includes the following stages:

Data Collection: Gathering data from various sources
Data Cleaning: Removing inconsistencies and errors
Data Exploration: Analyzing data to understand its structure
Data Modeling: Building predictive models
Deployment: Implementing the model in a production environment

Data Science Lifecycle (Note: Replace with an actual diagram)

6. What are different types of data? Explain in detail with appropriate examples.

Different types of data include:

Structured Data: Organized in a predefined format, such as databases (e.g., SQL tables).
Unstructured Data: Lacks a predefined structure, such as text, images, and videos.
Semi-structured Data: Contains organizational properties but not in a rigid format, such as JSON
and XML files.

Examples:

Structured Data: Customer information in a CRM system.
Unstructured Data: Social media posts, emails.
Semi-structured Data: Web server logs, JSON files.

7. Distinguish between structured and unstructured data.

Structured Data: Organized in rows and columns, easily searchable (e.g., databases).
Unstructured Data: Not organized in a predefined manner, more challenging to analyze (e.g.,
text, images).

8. What are different sources of data in data science? Explain in detail.

Different sources of data include:

Internal Databases: Company databases, CRM systems
External Databases: Public databases, government data
APIs: Web services providing data access
Web Scraping: Extracting data from websites
Sensors: IoT devices, environmental sensors

Internal Databases: Data collected and stored within an organization, such as sales records and
customer information.

External Databases: Data obtained from external sources, such as public datasets and research
databases.

9. Explain different data formats in brief.

Different data formats include:

CSV: Comma-separated values, used for tabular data.
JSON: JavaScript Object Notation, used for structured data interchange.
XML: Extensible Markup Language, used for data representation.
Excel: Spreadsheet format, used for data analysis.
SQL: Structured Query Language, used for database management.
Parquet: Columnar storage format, used for big data processing.
Avro: Row-oriented storage format, used for data serialization.
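As a quick, hedged illustration, the sketch below reads a few of these formats with pandas; the file names are hypothetical placeholders, and the Excel and Parquet readers assume the optional openpyxl and pyarrow packages are installed:

Python Code:
import pandas as pd

# File names below are hypothetical placeholders.
df_csv = pd.read_csv("data.csv")              # comma-separated values
df_json = pd.read_json("data.json")           # JSON records
df_excel = pd.read_excel("data.xlsx")         # Excel (needs openpyxl)
df_parquet = pd.read_parquet("data.parquet")  # Parquet (needs pyarrow)
print(df_csv.head())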

10. Explain unstructured data with an example. What are problems with unstructured data?

Unstructured Data: Data that does not have a predefined structure, such as text, images, and videos.

Example: Social media posts, which include text, images, and videos.

Problems:

Complexity: Difficult to analyze due to lack of structure.
Storage: Requires more storage space.
Processing: Requires advanced techniques for processing and analysis.

Chapter 2: Statistical Data Analysis

Short Answers
1. Define statistical data analysis: Statistical data analysis involves collecting, organizing, interpreting,
and presenting data to discover underlying patterns and trends, and to make informed decisions.

2. Define descriptive statistics: Descriptive statistics summarize and describe the main features of a
dataset through measures such as mean, median, mode, and standard deviation.

3. Define inferential statistics: Inferential statistics use a random sample of data taken from a population
to make inferences about the population, including hypothesis testing and confidence intervals.

4. What is meant by central tendency of data?: Central tendency refers to the measure that represents the
center or typical value of a dataset, commonly measured by mean, median, and mode.

5. Define geometric mean: The geometric mean is the nth root of the product of n numbers, used to
calculate average rates of growth or return.
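For instance, a minimal sketch using scipy (the growth factors are made-up sample values):

Python Code:
from scipy import stats

rates = [1.05, 1.10, 0.98, 1.07]  # hypothetical yearly growth factors
print("Geometric mean:", stats.gmean(rates))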

6. What is meant by mode?: The mode is the value that appears most frequently in a dataset.

7. Define outlier: An outlier is a data point that differs significantly from other observations, potentially
indicating variability in measurement or an error.

8. What is inferential statistics?: Inferential statistics involve making predictions or inferences about a
population based on a sample of data.

9. What is meant by range?: The range is the difference between the highest and lowest values in a
dataset.

10. Define standard deviation: Standard deviation measures the amount of variation or dispersion in a set
of values.

11. Define hypothesis testing: Hypothesis testing is a statistical method used to decide whether there is
enough evidence to reject a null hypothesis based on sample data.

12. What are binary and ordinal attributes?:

Binary attribute: A variable with two possible values (e.g., yes/no, true/false).
Ordinal attribute: A variable with ordered categories (e.g., rankings, levels of satisfaction).

13. Define variance: Variance measures the average squared deviation of each number from the mean of a
dataset.

14. Define interquartile range: The interquartile range (IQR) is the difference between the third quartile
(Q3) and the first quartile (Q1) of a dataset, representing the middle 50% of the data.

15. Give parameter estimation methods: Parameter estimation methods include point estimation and
interval estimation.

Long Answers

1. What is statistical data analysis? Explain its role in detail.

Statistical data analysis is the process of examining data to uncover patterns, trends, and relationships. It
involves collecting data, organizing it, analyzing it using statistical methods, and interpreting the results.
The role of statistical data analysis is crucial in various fields such as business, healthcare, social
sciences, and engineering, as it helps in making informed decisions, predicting future trends, and
optimizing processes.

2. Explain any two methods of dispersion for a series of data values and write the Python code for the
same.

Two common methods of dispersion are:

Range: The difference between the maximum and minimum values in a dataset.
Standard Deviation: Measures the average distance of each data point from the mean.

Python Code:
import numpy as np

data = [23, 18, 19, 27, 22, 31, 38, 24, 11, 16, 20]

# Range
data_range = np.ptp(data)
print("Range:", data_range)

# Standard Deviation
std_dev = np.std(data)
print("Standard Deviation:", std_dev)

3. How is the interquartile range used as a measure of position? Find the interquartile range for the
data values: 23, 18, 19, 27, 22, 31, 38, 24, 11, 16, 20.

The interquartile range (IQR) measures the spread of the middle 50% of the data, providing a robust
measure of variability. It is calculated as the difference between the third quartile (Q3) and the first
quartile (Q1).

Python Code:
import numpy as np

data = [23, 18, 19, 27, 22, 31, 38, 24, 11, 16, 20]
Q1 = np.percentile(data, 25)
Q3 = np.percentile(data, 75)
IQR = Q3 - Q1
print("Interquartile Range:", IQR)

4. What is meant by dispersion? If the standard deviation of a given dataset is equal to zero, what
can we say about the data values included in the given dataset? The frequency table of the monthly
salaries of 20 people is shown below. Find the range, standard deviation, and variance of the given
data.

Dispersion refers to the extent to which data points in a dataset are spread out. If the standard deviation
is zero, all data values are identical.

Frequency Table:

Salary (Rs.)   Frequency
3500           5
4000           8
4200           5
4300           2

Python Code:
import numpy as np

salaries = [3500]*5 + [4000]*8 + [4200]*5 + [4300]*2

data_range = np.ptp(salaries)
std_dev = np.std(salaries)
variance = np.var(salaries)

print("Range:", data_range)
print("Standard Deviation:", std_dev)
print("Variance:", variance)
5. What is meant by parameter estimation? Differentiate between a point estimate and interval
estimate.

Parameter estimation involves using sample data to estimate population parameters.

Point Estimate: A single value estimate of a population parameter (e.g., sample mean).
Interval Estimate: A range of values within which the parameter is expected to lie, with a certain
level of confidence (e.g., confidence interval).
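A minimal sketch contrasting the two on made-up sample data, using scipy's t-distribution for a 95% confidence interval:

Python Code:
import numpy as np
from scipy import stats

data = [23, 18, 19, 27, 22, 31, 38, 24, 11, 16, 20]  # sample data

point_estimate = np.mean(data)  # point estimate of the population mean

# 95% interval estimate around the sample mean (t-distribution)
sem = stats.sem(data)  # standard error of the mean
interval = stats.t.interval(0.95, df=len(data) - 1, loc=point_estimate, scale=sem)

print("Point estimate:", point_estimate)
print("95% interval estimate:", interval)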

6. The amount spent by the group of 10 students in the school canteen is as follows: 110, 117, 129,
197, 190, 100, 100, 178, 255, 790. Find the range and the coefficient of the range.

Python Code:
import numpy as np

data = [110, 117, 129, 197, 190, 100, 100, 178, 255, 790]
data_range = np.ptp(data)
coefficient_of_range = data_range / (max(data) + min(data))

print("Range:", data_range)
print("Coefficient of Range:", coefficient_of_range)

7. What is meant by descriptive statistics? Explain types of descriptive statistics.

Descriptive statistics summarize and describe the main features of a dataset. Types include:

Measures of Central Tendency: Mean, median, mode.
Measures of Dispersion: Range, variance, standard deviation.
Measures of Position: Percentiles, quartiles.

8. What are measures of central tendency? Explain each in detail.

Measures of central tendency describe the center of a dataset:

Mean: The average of all data points.
Median: The middle value when data points are ordered.
Mode: The most frequently occurring value.

9. Find the mean, median, mode, and range for the following list of values: 13, 18, 13, 14, 13, 16, 14,
21, 13.

Python Code:
import numpy as np
from scipy import stats

data = [13, 18, 13, 14, 13, 16, 14, 21, 13]
mean = np.mean(data)
median = np.median(data)
mode = stats.mode(data, keepdims=False).mode  # most frequent value
data_range = np.ptp(data)

print("Mean:", mean)
print("Median:", median)
print("Mode:", mode)
print("Range:", data_range)

10. How to measure data similarity and dissimilarity? Explain in detail. Describe data matrix versus
dissimilarity matrix.

Data similarity and dissimilarity can be measured using various metrics:

Similarity: Cosine similarity, Jaccard index.
Dissimilarity: Euclidean distance, Manhattan distance.

Data Matrix: Represents data points and their attributes.
Dissimilarity Matrix: Represents pairwise dissimilarities between data points.
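A minimal sketch of these metrics with scipy on two made-up points:

Python Code:
import numpy as np
from scipy.spatial import distance

a = np.array([1, 2, 3])  # hypothetical data points
b = np.array([2, 4, 6])

print("Euclidean distance:", distance.euclidean(a, b))
print("Manhattan distance:", distance.cityblock(a, b))
print("Cosine similarity:", 1 - distance.cosine(a, b))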

11. What is an outlier? List types of outliers. Also explain methods for outlier detection.
An outlier is a data point significantly different from others. Types include:

Global Outliers: Deviate significantly from the entire dataset.
Contextual Outliers: Deviate within a specific context.
Collective Outliers: A group of data points deviating together.

Detection Methods: Z-score, IQR method, DBSCAN clustering.
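A minimal sketch of the Z-score and IQR methods on made-up data; the cut-offs used here (2 standard deviations, 1.5 × IQR) are common conventions, not fixed rules:

Python Code:
import numpy as np

data = np.array([10, 12, 11, 13, 12, 14, 11, 95])  # 95 is an obvious outlier

# Z-score method: flag points far from the mean in standard-deviation units
z_scores = (data - data.mean()) / data.std()
print("Z-score outliers:", data[np.abs(z_scores) > 2])

# IQR method: flag points outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]
q1, q3 = np.percentile(data, [25, 75])
iqr = q3 - q1
mask = (data < q1 - 1.5 * iqr) | (data > q3 + 1.5 * iqr)
print("IQR outliers:", data[mask])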

12. Explain t-test and z-test with example.

t-test: Used to compare the means of two groups when the sample size is small (typically under 30) and
the population standard deviation is unknown. Example: comparing the mean exam scores of two small classes.
z-test: Used to compare means when the sample size is large (typically 30 or more) and the population
standard deviation is known. Example: comparing a large sample's mean height against a known population mean.
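A minimal sketch of a two-sample t-test with scipy on made-up scores (scipy does not ship a z-test directly; statsmodels provides one):

Python Code:
from scipy import stats

group_a = [78, 85, 92, 88, 75, 83]  # hypothetical test scores
group_b = [81, 89, 95, 91, 84, 87]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print("t-statistic:", t_stat)
print("p-value:", p_value)  # p < 0.05 would suggest the means differ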
Chapter 3: Data Preprocessing

Short Answers
1. What is data? Data refers to raw facts and figures that are collected, stored, and processed to extract
meaningful information.

2. Define data attributes. Data attributes are characteristics or properties of data objects. They represent
the various features or variables of the data.

3. What is nominal attribute? A nominal attribute is a categorical attribute with no inherent order or
ranking among its categories. Examples include gender, color, and nationality.

4. Give purpose of preprocessing. The purpose of preprocessing is to clean and transform raw data into a
format suitable for analysis, ensuring data quality and consistency.

5. Which operations are not possible with ordinal attributes? Operations such as arithmetic calculations
(e.g., addition, subtraction) are not possible with ordinal attributes because they only represent order, not
magnitude.

6. Define data object. A data object is an entity that contains data and is described by a set of attributes.
Examples include a customer record or a product entry.

7. What are missing values? Missing values are data points that are not recorded or are absent in a dataset.

8. Define data cleaning. Data cleaning is the process of detecting and correcting (or removing) errors and
inconsistencies in data to improve its quality.

9. Define data discretization. Data discretization is the process of converting continuous data into discrete
buckets or intervals.

10. Differentiate discrete and continuous attributes (any two points).

Discrete Attributes: Take on a finite number of distinct values (e.g., number of students).
Continuous Attributes: Take on an infinite number of values within a range (e.g., height, weight).

11. Why is data reduction an important operation of data preprocessing? Data reduction simplifies the
dataset by reducing its size, making it easier to analyze and process without losing significant
information.

12. What is data transformation? Data transformation involves converting data into a suitable format or
structure for analysis, such as normalization or encoding.

13. What is one-hot coding? One-hot coding is a method of converting categorical variables into binary
vectors, where each category is represented by a unique binary vector.
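For example, a minimal pandas sketch (the color column is made up):

Python Code:
import pandas as pd

df = pd.DataFrame({"color": ["red", "blue", "green", "red"]})
print(pd.get_dummies(df, columns=["color"]))  # one binary column per category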

14. What are different methods for feature extraction? Methods for feature extraction include Principal
Component Analysis (PCA), Linear Discriminant Analysis (LDA), and feature selection techniques.

15. Define rescaling. Rescaling involves adjusting the range of data values to a standard scale, typically
between 0 and 1.

16. What is binarizing? Binarizing converts numerical or categorical data into binary values (0 or 1) based
on a threshold.

17. Define data wrangling. Data wrangling is the process of cleaning, transforming, and organizing raw
data into a usable format for analysis.
18. What is data filling? Data filling involves imputing missing values in a dataset using various techniques
such as mean imputation or interpolation.
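A minimal pandas sketch of both techniques on a made-up series with gaps:

Python Code:
import numpy as np
import pandas as pd

s = pd.Series([1.0, np.nan, 3.0, np.nan, 5.0])  # sample data with missing values
print(s.fillna(s.mean()))  # mean imputation
print(s.interpolate())     # linear interpolation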

19. What is meant by data quality? Data quality refers to the accuracy, completeness, consistency, and
reliability of data.

20. What is data cube? A data cube is a multi-dimensional array of values used to represent data along
multiple dimensions, often used in OLAP (Online Analytical Processing).

Long Answers
1. Explain the need for data preprocessing before applying any analysis of data.

Data preprocessing is essential because raw data often contains inconsistencies, missing values, and
noise that can lead to inaccurate analysis and poor model performance. Preprocessing steps such as data
cleaning, normalization, and transformation ensure that the data is of high quality and suitable for
analysis. This process helps in:

Improving Data Quality: Removing errors and inconsistencies.
Enhancing Model Performance: Ensuring that the data is in a format that machine learning
algorithms can effectively process.
Reducing Complexity: Simplifying the data to make it more manageable and interpretable.

2. Explain different types of data attributes with examples.

Nominal Attributes: Categorical data without any order (e.g., colors: red, blue, green).
Ordinal Attributes: Categorical data with a meaningful order but no fixed interval (e.g., rankings:
first, second, third).
Interval Attributes: Numerical data with meaningful intervals but no true zero point (e.g.,
temperature in Celsius).
Ratio Attributes: Numerical data with a true zero point and meaningful intervals (e.g., height,
weight).

3. What are the various types of data available? Give an example of each.

Structured Data: Organized in rows and columns (e.g., SQL databases).
Unstructured Data: No predefined structure (e.g., text, images).
Semi-structured Data: Contains tags or markers to separate data elements (e.g., JSON, XML).

4. What is the role of data cleaning? Explain in detail, any two standard data cleaning methods.

Data cleaning improves data quality by removing errors and inconsistencies. Two standard methods are:

Handling Missing Values: Techniques include mean imputation, median imputation, and using
algorithms to predict missing values.
Removing Duplicates: Identifying and removing duplicate records to ensure data integrity.

5. Differentiate between equal-width binning and equal-frequency binning.

Equal-Width Binning: Divides the range of data into equal-sized intervals.
Equal-Frequency Binning: Divides the data into intervals that contain an equal number of data points.

6. What is data quality? Which factors affect data quality?

Data quality refers to the condition of data based on factors such as accuracy, completeness, consistency,
and reliability. Factors affecting data quality include data entry errors, missing values, and
inconsistencies in data collection methods.

7. Explain the following data transformation techniques:

Rescaling Data: Adjusting the range of data values to a standard scale.
Normalizing Data: Converting data to a common scale without distorting differences in the ranges of values.
Binarizing Data: Converting data into binary values based on a threshold.
Standardizing Data: Scaling data to have a mean of 0 and a standard deviation of 1.
Label Encoding: Converting categorical labels into numerical values.
One-Hot Encoding: Converting categorical variables into binary vectors.
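A minimal scikit-learn sketch of several of these techniques, assuming made-up two-column sample data:

Python Code:
import numpy as np
from sklearn.preprocessing import Binarizer, LabelEncoder, MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])  # sample data

print(MinMaxScaler().fit_transform(X))            # rescaling to [0, 1]
print(StandardScaler().fit_transform(X))          # standardizing: mean 0, std 1
print(Binarizer(threshold=2.0).fit_transform(X))  # binarizing at a threshold

labels = ["low", "high", "medium"]
print(LabelEncoder().fit_transform(labels))       # label encoding to integers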

8. What is meant by dimensionality reduction? Differentiate between feature selection and feature
extraction.

Dimensionality reduction reduces the number of features in a dataset to simplify analysis and improve
model performance.

Feature Selection: Selecting a subset of relevant features from the original dataset.
Feature Extraction: Creating new features by transforming the original features.

9. What is meant by discretization and concept hierarchy? Differentiate between top-down
discretization and bottom-up discretization.

Discretization converts continuous data into discrete intervals. Concept hierarchy organizes data into
multiple levels of abstraction.

Top-Down Discretization: Starts with a single interval and recursively splits it.
Bottom-Up Discretization: Starts with individual data points and merges them into intervals.

10. Suppose a group of 12 sales price records has been sorted as follows: 5, 10, 11, 13, 15, 35, 50, 55, 72,
92, 204, 215. Partition them into three bins by each of the following methods: (i) equal-frequency
(equal-depth) Binning (ii) equal-width Binning.

Equal-Frequency Binning (12 values split into three bins of four):

Bin 1: 5, 10, 11, 13
Bin 2: 15, 35, 50, 55
Bin 3: 72, 92, 204, 215

Equal-Width Binning (width = (215 - 5) / 3 = 70, giving intervals [5, 75), [75, 145), [145, 215]):

Bin 1: 5, 10, 11, 13, 15, 35, 50, 55, 72
Bin 2: 92
Bin 3: 204, 215
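A minimal pandas sketch reproduces both partitions: pd.qcut gives equal-frequency bins and pd.cut gives equal-width bins:

Python Code:
import pandas as pd

prices = [5, 10, 11, 13, 15, 35, 50, 55, 72, 92, 204, 215]

print(pd.qcut(prices, q=3))    # equal-frequency (equal-depth) binning
print(pd.cut(prices, bins=3))  # equal-width binning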

11. Explain data cube aggregation in detail.

Data cube aggregation involves summarizing data along multiple dimensions to provide a multi-
dimensional view of data. It is used in OLAP to perform complex queries and analysis efficiently.

12. What do you mean by discretization? Explain discretization by Histogram analysis.

Discretization converts continuous data into discrete intervals. Histogram analysis involves creating a
histogram to visualize the distribution of data and then defining intervals based on the histogram.

13. Explain concept hierarchy generation for nominal data.

Concept hierarchy generation organizes nominal data into multiple levels of abstraction. For example, a
hierarchy for geographical data might include levels such as country, state, and city.

Chapter 4: Data Visualization

Short Answers
1. Define data visualization: Data visualization is the graphical representation of information and data
using visual elements like charts, graphs, and maps to make data easier to understand and interpret.

2. What is the purpose of data visualization?: The purpose of data visualization is to help people
understand complex data sets by presenting the data in a visual context, making it easier to identify
patterns, trends, and insights.

3. What is geographical data?: Geographical data, also known as geospatial data, refers to information
that has a geographic aspect to it, such as coordinates, addresses, or regions.

4. Define Exploratory Data Analysis (EDA): EDA is an approach to analyzing data sets to summarize
their main characteristics, often using visual methods. It helps in understanding the data, detecting
anomalies, and testing hypotheses.

5. What is visual encoding?: Visual encoding is the process of representing data values as visual elements,
such as points, lines, or bars, in a chart or graph.

6. What is tag cloud?: A tag cloud, or word cloud, is a visual representation of text data where the size of
each word indicates its frequency or importance.

7. List data visualization libraries in Python:

Matplotlib
Seaborn
Plotly
Bokeh
Altair

8. Define box plot: A box plot, or box-and-whisker plot, is a standardized way of displaying the
distribution of data based on a five-number summary: minimum, first quartile (Q1), median, third
quartile (Q3), and maximum.

9. What is the use of bubble plot?: A bubble plot is used to display three dimensions of data. Each point is
represented by a bubble, with the position determined by two variables and the size of the bubble
representing the third variable.

10. Define dendrogram: A dendrogram is a tree-like diagram used to illustrate the arrangement of clusters
produced by hierarchical clustering.

Long Answers

1. What is data visualization? Why is data visualization important for data analysis? Explain in
detail.

Data visualization is the practice of translating information into a visual context, such as a map or graph,
to make data easier for the human brain to understand and draw insights from. It is important for data
analysis because it:

Simplifies Complex Data: Makes large and complex data sets more accessible, understandable,
and usable.
Reveals Patterns and Trends: Helps identify patterns, trends, and outliers that might not be
apparent in raw data.
Enhances Communication: Facilitates the communication of data insights to stakeholders,
making it easier to convey findings and support decision-making.
Supports Data Exploration: Allows for interactive exploration of data, enabling users to drill
down into details and uncover deeper insights.

2. Explain any five data visualization libraries that are commonly used in Python.

Matplotlib: A comprehensive library for creating static, animated, and interactive visualizations in
Python. It is highly customizable and supports a wide range of plot types.
Seaborn: Built on top of Matplotlib, Seaborn provides a high-level interface for drawing attractive
and informative statistical graphics.
Plotly: An interactive graphing library that makes it easy to create interactive plots, dashboards,
and web applications.
Bokeh: A library for creating interactive and scalable visualizations that can be embedded in web
applications.
Altair: A declarative statistical visualization library based on Vega and Vega-Lite, making it easy
to create complex visualizations with concise code.

3. How to create a line chart? Explain with an example.

A line chart is used to display information as a series of data points connected by straight line segments.
Here’s an example using Matplotlib:

import matplotlib.pyplot as plt

# Sample data
x = [1, 2, 3, 4, 5]
y = [2, 3, 5, 7, 11]

# Create line chart
plt.plot(x, y, marker='o')
plt.title('Sample Line Chart')
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.show()

4. Write a short note on: Data visualization types.

Data visualization types include:

Bar Charts: Used to compare different categories.
Line Charts: Used to show trends over time.
Pie Charts: Used to show proportions of a whole.
Scatter Plots: Used to show relationships between two variables.
Histograms: Used to show the distribution of a dataset.

5. What is the various statistical information that a box plot can display? Draw a diagram to
represent a box plot containing various statistical information.

A box plot displays:

Minimum: The smallest data point excluding outliers.
First Quartile (Q1): The median of the lower half of the dataset.
Median: The middle value of the dataset.
Third Quartile (Q3): The median of the upper half of the dataset.
Maximum: The largest data point excluding outliers.
Outliers: Data points that fall outside the whiskers.

Box Plot Diagram (Note: Replace with an actual diagram)
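In place of the diagram, a minimal Matplotlib sketch that draws a box plot showing these elements (the data values are made up; 75 shows up as an outlier point):

Python Code:
import matplotlib.pyplot as plt

data = [11, 16, 18, 19, 20, 22, 23, 24, 27, 31, 38, 75]  # sample data
plt.boxplot(data)
plt.title('Sample Box Plot')
plt.ylabel('Value')
plt.show()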

6. What are the three dimensions used in a bubble plot? Write the Python code to create a simple
bubble plot.

The three dimensions in a bubble plot are:

X-axis: Represents one variable.
Y-axis: Represents another variable.
Bubble Size: Represents the third variable.

Python Code:
import matplotlib.pyplot as plt

# Sample data
x = [1, 2, 3, 4, 5]
y = [10, 20, 25, 30, 35]
size = [40, 60, 80, 100, 120]

# Create bubble plot
plt.scatter(x, y, s=size, alpha=0.5)
plt.title('Sample Bubble Plot')
plt.xlabel('X-axis')
plt.ylabel('Y-axis')
plt.show()

7. Mention any two cases when a dendrogram can be used. Write the Python code to create a simple
dendrogram.

Dendrograms can be used in:

Hierarchical Clustering: To visualize the arrangement of clusters.
Phylogenetic Trees: To represent evolutionary relationships.

Python Code:
from scipy.cluster.hierarchy import dendrogram, linkage
import matplotlib.pyplot as plt

# Sample data
data = [[1, 2], [3, 4], [5, 6], [7, 8]]

# Perform hierarchical clustering


linked = linkage(data, 'single')

# Create dendrogram
dendrogram(linked)
plt.title('Sample Dendrogram')
plt.show()

8. Write the Python code to create a simple Venn diagram for the following two sets:

Python Code:
from matplotlib_venn import venn2
import matplotlib.pyplot as plt

# Sample sets
set1 = {1, 2, 3, 4, 5}
set2 = {4, 5, 6, 7, 8}

# Create Venn diagram
venn2([set1, set2], ('Set1', 'Set2'))
plt.title('Sample Venn Diagram')
plt.show()

9. Use Python code to design a 3D scatter plot for the given x, y, and z values.

Python Code:
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D

# Sample data
x = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
y = [25, 34, 67, 88, 23, 69, 12, 45, 33, 61]
z = [10, 15, 12, 22, 25, 30, 18, 26, 15, 22]

# Create 3D scatter plot
fig = plt.figure()
ax = fig.add_subplot(111, projection='3d')
ax.scatter(x, y, z)
ax.set_title('3D Scatter Plot')
ax.set_xlabel('X-axis')
ax.set_ylabel('Y-axis')
ax.set_zlabel('Z-axis')
plt.show()

10. What are the roles of choropleth map and connection map? Which Python library function is
needed to be used for these two maps?

Choropleth Map: Used to represent data values across geographical regions using color gradients.
Connection Map: Used to show connections or relationships between different geographical
locations.

Python Libraries:

Choropleth Map: folium or plotly
Connection Map: plotly or folium (both can draw lines between geographic locations)
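A minimal, hedged sketch of a choropleth map with plotly; the state codes and values are made up:

Python Code:
import plotly.express as px

fig = px.choropleth(locations=["CA", "TX", "NY"],  # hypothetical regions and values
                    locationmode="USA-states",
                    color=[10, 20, 30],
                    scope="usa",
                    title="Sample Choropleth Map")
fig.show()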
