Deep Learning Is A Type of Machine Learning

Deep learning is a type of machine learning that uses artificial neural networks inspired by the human brain, consisting of layers of interconnected nodes. These neural networks have different layers, including input, hidden, and output layers, that help computers learn from data to make intelligent predictions and decisions by adjusting weights during training as the models learn. Key elements of deep learning include activation functions, backpropagation, and using multiple layers to understand complex patterns in data.


• Deep learning is a type of machine learning.
• It uses artificial neural networks inspired by the human brain.
• These networks consist of layers of interconnected nodes.
• These layers help computers learn and make intelligent decisions.

Key points:

• Neural Networks: Models use layers of nodes to understand data.
• Layers: Input, hidden, and output layers; the weights connecting them change during training.
• Training: Models learn by adjusting weights to make better predictions.
• Activation Functions: Nodes apply nonlinear functions so the network can learn complex patterns.
• Backpropagation: The core training algorithm; it propagates errors backward and adjusts weights to reduce them (see the sketch after this list).
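To make these ideas concrete, here is a minimal sketch (not from the original notes) of a one-hidden-layer network trained with backpropagation on the XOR problem. It uses only NumPy; the architecture, learning rate, and iteration count are illustrative assumptions.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy XOR dataset (an illustrative choice).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer
lr = 0.5  # learning rate (assumed value)

for epoch in range(10000):
    # Forward pass: input layer -> hidden layer -> output layer.
    h = sigmoid(X @ W1 + b1)      # hidden activations
    out = sigmoid(h @ W2 + b2)    # network predictions

    # Backpropagation: push the error backward and adjust the weights.
    d_out = (out - y) * out * (1 - out)   # gradient at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # gradient at the hidden layer
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # should move toward [[0], [1], [1], [0]]

Here the sigmoid is the activation function, and the two weight updates are exactly the "adjusts weights to reduce errors" step described above.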
DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a clustering algorithm that groups data points based on their density. It defines clusters as sets of points that have enough nearby neighbors.

Key terms:

• Density: Measured by the number of points within a specified radius (Eps).
• Cluster: A maximal set of density-connected points.
• Core Point: Has more than MinPts neighbors within Eps, forming the interior of a cluster.
• Border Point: Has fewer than MinPts neighbors within Eps but lies within Eps of a core point.
• Noise Point: Neither a core point nor a border point; treated as an outlier.

DBSCAN identifies clusters of arbitrary shape, is robust to noise, and requires setting the parameters Eps and MinPts, as the sketch below shows.
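In scikit-learn, these parameters are exposed as eps (Eps) and min_samples (MinPts). A minimal sketch on synthetic two-moons data, where the parameter values are arbitrary illustrative choices:

import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two crescent-shaped clusters: a classic case where density-based
# clustering finds arbitrary shapes that k-means would split badly.
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)

# eps plays the role of Eps; min_samples plays the role of MinPts.
db = DBSCAN(eps=0.2, min_samples=5).fit(X)

labels = db.labels_  # cluster id per point; -1 marks noise points
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
print("clusters:", n_clusters, "noise points:", int(np.sum(labels == -1)))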
Q1: What is data exploration? Explain its applications.

• Data Exploration Defined: It is the first step in analyzing a dataset, done to understand the data better.
• Why it Matters:
1. Spotting Patterns and Trends: Find hidden structures in the data.
2. Spotting Outliers: Identify unusual data points.
3. Understanding Data Spread: Get insight into how the data points are distributed.
4. Building Better Models: Improve predictions by choosing and engineering the right features.
5. Generating Ideas for Testing: Come up with testable hypotheses.
6. Cleaning Data: Fix data quality issues.

• Applications:
1. Business: Understand customers, spot trends, improve decisions.
2. Research: Analyze data, form hypotheses, make discoveries.
3. Healthcare: Identify diseases, track patients, personalize treatments.
4. Finance: Assess risk, manage investments, detect fraud.
5. Government: Analyze demographics, track crime, guide policies.
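A minimal sketch of these first exploration steps in pandas; the file name and column semantics are placeholders, not from the notes:

import pandas as pd

# Load a dataset (placeholder file name).
df = pd.read_csv("sales.csv")

# Understand the data: structure, types, and spread.
print(df.shape)        # number of rows and columns
print(df.dtypes)       # type of each column
print(df.describe())   # summary statistics for numeric columns

# Check data quality: missing values and duplicate rows.
print(df.isnull().sum())
print(df.duplicated().sum())

# Spot patterns: pairwise correlations between numeric columns.
print(df.corr(numeric_only=True))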
Q2: What are the objectives of data exploration?

• Objectives:
1. Understanding Data: Grasp how the data is structured.
2. Spotting Patterns: Find trends, relationships, and unusual things.
3. Checking Data Quality: Make sure the data is good to work with.
4. Choosing the Right Features: Pick the best data parts for analysis.
5. Testing Ideas: Create testable hypotheses.
6. Making Informed Decisions: Get insights for better decision-making.
Q3: Explain any four types of data visualization techniques.

• Scatter Plots:
  o Show individual data points on a two-axis graph.
  o Help reveal relationships between two continuous variables.

• Bar Charts:
  o Use bars to represent values.
  o Great for comparing different categories.

• Line Charts:
  o Connect data points with lines.
  o Show how values change over time.

• Heatmaps:
  o Use colors in a grid to represent values.
  o Help reveal patterns in large datasets.
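A minimal matplotlib sketch drawing all four chart types on synthetic data (every value here is made up for illustration):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
fig, axes = plt.subplots(2, 2, figsize=(10, 8))

# Scatter plot: relationship between two continuous variables.
x = rng.normal(size=100)
axes[0, 0].scatter(x, 2 * x + rng.normal(scale=0.5, size=100))
axes[0, 0].set_title("Scatter Plot")

# Bar chart: comparison across categories.
axes[0, 1].bar(["A", "B", "C", "D"], [5, 3, 7, 2])
axes[0, 1].set_title("Bar Chart")

# Line chart: change over time.
t = np.arange(12)
axes[1, 0].plot(t, np.cumsum(rng.normal(size=12)))
axes[1, 0].set_title("Line Chart")

# Heatmap: a color-coded grid that reveals patterns in larger data.
im = axes[1, 1].imshow(rng.random((8, 8)), cmap="viridis")
fig.colorbar(im, ax=axes[1, 1])
axes[1, 1].set_title("Heatmap")

plt.tight_layout()
plt.show()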
A confusion matrix is like a performance report card for a classifier (a tool that
predicts categories).

It helps us see how well the classifier is doing. Here's what each part means:

• True Positive (TP): Cases correctly predicted as positive.
• True Negative (TN): Cases correctly predicted as negative.
• False Positive (FP): Cases predicted as positive that were actually negative.
• False Negative (FN): Cases predicted as negative that were actually positive.

The matrix looks like this:

                      Predicted Positive   Predicted Negative
  Actual Positive             TP                   FN
  Actual Negative             FP                   TN
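A quick scikit-learn sketch that computes the four cells; the labels and predictions are made up for illustration:

from sklearn.metrics import confusion_matrix

# Made-up ground truth and predictions (1 = positive, 0 = negative).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]

# With labels=[1, 0] the layout matches the table above:
# rows are actual classes, columns are predicted classes.
cm = confusion_matrix(y_true, y_pred, labels=[1, 0])
tp, fn = cm[0]
fp, tn = cm[1]
print(f"TP={tp} FN={fn} FP={fp} TN={tn}")  # TP=4 FN=1 FP=1 TN=4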
A ROC curve visually shows how well a classification model distinguishes between classes (like positive and negative outcomes) at different threshold settings.

• Components:
  o True Positive Rate (TPR): The fraction of actual positives correctly identified; TPR = TP / (TP + FN).
  o False Positive Rate (FPR): The fraction of actual negatives incorrectly flagged as positive; FPR = FP / (FP + TN).
• Comparison:
  o Visual Check: A higher curve, especially toward the upper left, means better performance.
  o AUC: A larger area under the curve (AUC) indicates superior model performance.
  o Statistical Assessment: Tests like the Wilcoxon test help compare models statistically.
• Applications:
  o Medical Diagnosis: Comparing the accuracy of diagnostic tests.
  o Spam Filtering: Evaluating effectiveness in distinguishing spam.
  o Anomaly Detection: Assessing the identification of abnormal events.
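A minimal scikit-learn sketch that computes a ROC curve and its AUC; the dataset and model are illustrative choices:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic binary classification problem.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Fit a classifier and score the positive class by probability.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# TPR and FPR at every threshold setting trace out the ROC curve.
fpr, tpr, thresholds = roc_curve(y_te, scores)
print("AUC:", roc_auc_score(y_te, scores))  # closer to 1.0 is better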
