Machine Learning Algorithms - Hope
[Word-cloud graphic: fields connected to machine learning, including Mechanical, EEE, Wireless Networks, Physics, Civil, Chemistry, History, and Mathematics]
www.hopelearning.net | @hope_artificial_intelligence | #learnaiwithramisha
Machine Learning
[Diagram: example problems such as heart disease and forest fire prediction are matched with algorithms such as SVM and KNN to produce a final model]
Supervised
Learning
Trained on past data (input data) where both the inputs (variables/features) and the output (label) are known.
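A minimal sketch of this workflow, with hypothetical data and scikit-learn (not part of the original deck): the model is fitted on past inputs and labels, then predicts the label for a new input.

```python
# Supervised learning sketch: past data = features X with known labels y.
from sklearn.linear_model import LinearRegression

X = [[1], [2], [3], [4], [5]]  # inputs / variables / features
y = [2, 4, 6, 8, 10]           # output / label for each observation

model = LinearRegression()
model.fit(X, y)                # learn the input-to-output mapping
print(model.predict([[6]]))    # predict the label for an unseen input
```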
SCREENING - 1
Unsupervised
Learning
Trained on past data (input data) where only the inputs (variables/features) are known; there are no labels.
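A matching sketch for unsupervised learning, again with hypothetical data: only the features are given, and KMeans (one common choice) finds the structure on its own.

```python
# Unsupervised learning sketch: features X only, no labels.
from sklearn.cluster import KMeans

X = [[1, 2], [1, 4], [1, 0],
     [10, 2], [10, 4], [10, 0]]  # inputs / features

model = KMeans(n_clusters=2, n_init=10, random_state=0)
model.fit(X)                     # group similar observations
print(model.labels_)             # cluster assignment per observation
```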
Semi-Supervised Learning
Some of the past data has both inputs (variables/features) and an output/label; the rest has inputs only.
We know the requirements.
SCREENING - 2
Machine Learning
Problem Identification
Problem Identification in Supervised Learning

Classification
Classifies the output into categories based on the input parameters.
Categorical values, e.g.: Yes/No, Dog/Cat, House/Not house
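A short classification sketch under the same assumptions (hypothetical data; a decision tree is used purely for illustration): the predicted output is a category, not a number.

```python
# Classification sketch: the label is categorical (Yes/No).
from sklearn.tree import DecisionTreeClassifier

X = [[25], [32], [47], [51], [62]]     # input feature, e.g. age
y = ["No", "No", "Yes", "Yes", "Yes"]  # categorical output / label

clf = DecisionTreeClassifier(random_state=0)
clf.fit(X, y)
print(clf.predict([[40]]))             # returns a category
```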
Problem Identification in Supervised Learning

Regression
Predicts a numerical value.
Linear Graph
Multiple Linear Graph
[Diagram: nested circles showing that Deep Learning is a subset of Machine Learning, which is a subset of Artificial Intelligence]
Algorithms
Polynomial Graph
Image source: http://www.egwald.ca/linearalgebra/polynomials.php
Validating Parameters:
1. Sum of Squared Errors (SSE) or Residual Sum of Squares (RSS)
2. Sum of Squares due to Regression (SSR) or Explained Sum of Squares (ESS)
3. Sum of Squares Total (SST)
4. R-Squared (R²)
5. Adjusted R-Squared
For the straight line y = mx + c:
y = output (the value on the fitted straight line through the data points)
m = slope = dy/dx = weight (the constant rate at which y changes with x)
c = bias = intercept (where the straight line starts: the initial value of y when x = 0)
How does this regression help with future prediction?

SIMPLE LINEAR REGRESSION

Dataset:
X   Y
1   2
2   4
3   6
4   8
5   10

Model: Y = wX + b, where X is the independent variable and Y is the dependent variable (the slide's example prediction line: Y = 0.3X + 0.5).

w = slope:
w = \frac{n \sum xy - (\sum x)(\sum y)}{n \sum x^2 - (\sum x)^2}

b = bias = initial value:
b = \frac{\sum y - w \sum x}{n}
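A quick check of the slope and bias formulas above in plain Python, using this slide's dataset:

```python
# w and b computed directly from the least-squares formulas above.
X = [1, 2, 3, 4, 5]
Y = [2, 4, 6, 8, 10]
n = len(X)

sum_x, sum_y = sum(X), sum(Y)
sum_xy = sum(x * y for x, y in zip(X, Y))
sum_x2 = sum(x * x for x in X)

w = (n * sum_xy - sum_x * sum_y) / (n * sum_x2 - sum_x ** 2)
b = (sum_y - w * sum_x) / n
print(f"Y = {w}X + {b}")  # for this dataset: Y = 2.0X + 0.0
```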
Types of scattered data with a linear regression line
[Figure: several scatter plots of the dependent variable (D.V) against the independent variable (I.V), each with a fitted regression line]
Validating parameter 1: Sum of Squared Errors (SSE) or Residual Sum of Squares (RSS)

Error = actual value (y_i) - predicted value (ŷ_i)

where
i = observation index
n = number of observations
y_i = actual value
ŷ_i = predicted value

[Figure: scatter plot of Y (dependent variable) against X (independent variable); the error is the vertical distance from each point to the regression line]
Error

Input   Actual output   Predicted output   Squared error = (Actual - Predicted)²
1       3.8             3.5                0.09
3       4.5             4.7                0.04
4       5.6             5.3                0.09
5       4.6             1.4                10.24
6       2.3             3.4                1.21
9       7.6             7.1                0.25
10      3.4             2.3                1.21

(The last column is the squared difference, e.g. (3.8 - 3.5)² = 0.09.)
Validating parameter 1: Sum of Squared Errors (SSE) or Residual Sum of Squares (RSS)

SSE = \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

where
i = observation index
n = number of observations
y_i = actual value
ŷ_i = predicted value

[Figure: scatter plot of Y against X showing the squared error distances from each point to the regression line]

Take away:
Higher the SSE, the poorer the predicted values.
Smaller the SSE, the better the predicted values.
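A quick check of the error table above in Python: summing the squared errors gives the SSE.

```python
# SSE for the example table above: sum of squared (actual - predicted).
actual    = [3.8, 4.5, 5.6, 4.6, 2.3, 7.6, 3.4]
predicted = [3.5, 4.7, 5.3, 1.4, 3.4, 7.1, 2.3]

sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
print(round(sse, 2))  # 13.13
```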
Validating parameter 2: Sum of Squares due to Regression (SSR) or Explained Sum of Squares (ESS)

SSR = \sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2

where ŷ_i = predicted value and ȳ (y_mean) = mean of the dependent variable.

Take away: the higher the SSR (ESS), the more variation the model explains, and the better the model performance.
Validating parameter 3: Sum of Squares Total (SST)

SST = \sum_{i=1}^{n} (y_i - \bar{y})^2

SST = SSR + SSE

where
i = observation index
n = number of observations
ȳ (y_mean) = mean of the dependent (response) variable

[Figure: scatter plot of Y against X with a horizontal line at y_mean, showing each point's distance from the mean]

Take away: SST measures the total variation in the data and is fixed for a given dataset, regardless of the model; the model is better when SSE makes up a small share of SST (equivalently, when SSR accounts for most of it).
Validating parameter 4: R-Squared (R²)

R^2 = \frac{SSR}{SST} = \frac{\sum_{i=1}^{n} (\hat{y}_i - \bar{y})^2}{\sum_{i=1}^{n} (y_i - \bar{y})^2}

where i = observation index, n = number of observations, y_i = actual value, ŷ_i = predicted value, and ȳ = mean of the dependent (response) variable.

Purpose of R²: to know how well the model is fitted.

How does R² differ from other parameters like SSE, SSR, and SST?
The ranges of SSE, SSR, and SST vary from dataset to dataset, but R² always lies between 0 and 1:
R² near 1: the built model performs well.
R² near 0: the built model performs poorly.

The one drawback of R² is that if new predictors (X) are added to the model, R² only increases or stays constant; it never decreases. So we cannot judge from R² alone whether increasing the complexity of the model actually makes it more accurate.
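Putting validating parameters 1-4 together on the same hypothetical actual/predicted values from the error table:

```python
# SSE, SSR, SST and R^2 from the definitions on the previous slides.
actual    = [3.8, 4.5, 5.6, 4.6, 2.3, 7.6, 3.4]
predicted = [3.5, 4.7, 5.3, 1.4, 3.4, 7.1, 2.3]

mean_y = sum(actual) / len(actual)

sse = sum((a - p) ** 2 for a, p in zip(actual, predicted))
ssr = sum((p - mean_y) ** 2 for p in predicted)
sst = sum((a - mean_y) ** 2 for a in actual)

# SST = SSR + SSE (and hence R^2 = SSR/SST) holds exactly for least-squares
# fits; for arbitrary predictions like these it is only approximate.
r2 = ssr / sst
print(round(sse, 2), round(ssr, 2), round(sst, 2), round(r2, 3))
```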
Validating parameter 5: Adjusted R²

\bar{R}^2 = 1 - (1 - R^2)\,\frac{n-1}{n-k-1}

where
n = number of observations
k = number of independent variables

R² vs. Adjusted R²:
R² shows how well the model fits the actual data points, but it increases whenever a new independent variable is added to the existing model, whether that variable is significant or not.
Adjusted R² helps to find the most significant independent variables: it increases only when a significant independent variable is added to the model, and otherwise stays the same or decreases.
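A one-liner for adjusted R² from the formula above, with hypothetical values of n, k, and R²:

```python
# Adjusted R^2 from the formula above (hypothetical n, k, r2).
n, k, r2 = 50, 3, 0.90
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)
print(round(adj_r2, 4))  # 0.8935
```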
Simple Linear Regression - Take-away points
Model = 0.3X + 5 (for understanding purposes: 0.3 = weight or slope, 5 = intercept or initial value)
Normality of Errors
Log transform: if the data is slightly curved, applying a log transform can make the relationship close to perfectly linear.
The Purpose of the Training and Test Sets
Training dataset: 80% of the data.
Using the training set, the model learns the weights and bias, e.g.:
Y = 0.3x1 + 0.4x2 + 0
Test set: 20% of the data.
The trained model predicts on unseen test inputs, e.g. a predicted value of 27.95 (the actual value shown is 35.5), and the prediction drives a decision:

if y > 30:
    print("Unfit")
else:
    print("fit")
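A minimal sketch of the 80/20 split described above, with hypothetical two-feature data; scikit-learn's train_test_split does the partitioning.

```python
# 80/20 train/test split sketch (hypothetical two-feature data).
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

X = [[i, i * 2] for i in range(1, 11)]   # two features x1, x2
y = [0.3 * a + 0.4 * b for a, b in X]    # labels from a known rule

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)  # 80% train, 20% test

model = LinearRegression().fit(X_train, y_train)  # learn weights and bias
for pred in model.predict(X_test):                # evaluate on unseen data
    print("Unfit" if pred > 30 else "fit")
```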
Types of Fitting

Steps:
1. Problem identification: regression or classification
2. Check the pattern in the data
3. Build the model on the training set
4. Check the assumptions
Multiple Linear Regression
Y = b0 + b1x1 + b2x2 + ... + bkxk (several independent variables)
Multiple Linear Regression vs. Simple Linear Regression
Algorithms

LINEAR ALGORITHMS
• Simple Linear
• Multiple Linear

NON-LINEAR ALGORITHMS
• Polynomial
• Support Vector Machine
• Decision Tree
• Random Forest
• KNN
• Naive Bayes
Finding Truth
Types of Fitting: Overfitting, Underfitting, Well-fitting
Polynomial Regression
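A minimal polynomial-regression sketch on hypothetical curved data; PolynomialFeatures expands x into its powers before an ordinary linear fit.

```python
# Polynomial regression sketch: fit y ~ a*x^2 + b*x + c on hypothetical data.
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import PolynomialFeatures

X = [[1], [2], [3], [4], [5]]
y = [2.1, 4.9, 9.2, 15.8, 25.3]          # roughly quadratic in x

poly = PolynomialFeatures(degree=2)       # adds x^0, x^1, x^2 columns
model = LinearRegression().fit(poly.fit_transform(X), y)
print(model.predict(poly.transform([[6]])))  # predict for an unseen x
```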
Comparison: Simple Linear Regression vs. Multiple Linear Regression vs. Polynomial Regression
The Outlier
[Figures: support vector machine illustrations. Image source: https://blog.statsbot.co/support-vector-machines-tutorial-c1618e635e93]
No Outliers
Decision Tree
How to select the best variable from the dataset for the root node
Reduction in Variance
How to select the best variable from the dataset for the root node
Entropy
If entropy is large, randomness is high and the outcome cannot be reliably predicted; and vice versa.
How to select the best variable from the dataset for the root node
Entropy
Information Gain
Constructing a decision tree is all about finding the attribute that returns the highest information gain and the smallest entropy.
How to select the best variable from the dataset for the root node
Information Gain

Gain = Entropy(before) - \sum_{j=1}^{K} Entropy(j, after)

where "before" is the dataset before the split, K is the number of subsets generated by the split, and (j, after) is subset j after the split.
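A small sketch of entropy and information gain on a hypothetical yes/no split; the "after" term here uses the size-weighted form, which is the common implementation choice.

```python
# Entropy and information gain for a hypothetical binary split.
from math import log2

def entropy(labels):
    """Entropy = -sum(p * log2(p)) over the class proportions in labels."""
    n = len(labels)
    return -sum((labels.count(c) / n) * log2(labels.count(c) / n)
                for c in set(labels))

before = ["yes"] * 5 + ["no"] * 5   # parent node
left   = ["yes"] * 4 + ["no"] * 1   # subset 1 after the split
right  = ["yes"] * 1 + ["no"] * 4   # subset 2 after the split

after = (len(left) / len(before)) * entropy(left) \
      + (len(right) / len(before)) * entropy(right)
gain = entropy(before) - after       # higher gain = better split attribute
print(round(entropy(before), 3), round(gain, 3))  # 1.0 0.278
```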
How to select the best variable from the dataset for the root node
Gini Index
The Gini index can be understood as a cost function used to evaluate splits in the dataset. It is calculated by subtracting the sum of the squared class probabilities from one: Gini = 1 - \sum_i p_i^2. It favors larger partitions and is easy to implement, whereas information gain favors smaller partitions with distinct values.
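The same hypothetical node labels as the entropy sketch above, scored with the Gini index; a pure node scores 0 and an even two-class mix scores 0.5.

```python
# Gini index = 1 - sum(p_i^2) over the class proportions in a node.
def gini(labels):
    n = len(labels)
    return 1 - sum((labels.count(c) / n) ** 2 for c in set(labels))

print(gini(["yes"] * 5 + ["no"] * 5))  # 0.5  (maximum impurity, 2 classes)
print(gini(["yes"] * 4 + ["no"] * 1))  # 0.32 (purer node, lower cost)
```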
How to select the best variable from the dataset for the root node
Gain Ratio
Gain ratio overcomes a problem with information gain by taking into account the number of branches that would result before making the split. It corrects information gain by taking the intrinsic information of a split into account.
How to select the best variable from the dataset for the root node
Reduction in Variance
This algorithm (used when the target is continuous) applies the standard formula of variance to choose the best split: the split with the lower variance is selected as the criterion for splitting the population.
How to avoid/counter overfitting in decision trees?
Common counters are pruning the tree, limiting its depth, and ensembling many trees (the idea behind Random Forest, covered next).
Random Forest
Ensemble Learning
Bagging or Bootstrap Aggregation
Each tree is trained on a random bootstrap sample of the training data, and the trees' predictions are aggregated: majority vote for classification, average for regression.
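A minimal random-forest sketch reusing hypothetical yes/no data: each tree sees a bootstrap sample and the forest votes.

```python
# Random forest = bagged ensemble of decision trees (hypothetical data).
from sklearn.ensemble import RandomForestClassifier

X = [[25], [32], [47], [51], [62], [29], [55]]
y = ["No", "No", "Yes", "Yes", "Yes", "No", "Yes"]

forest = RandomForestClassifier(
    n_estimators=100,    # number of trees in the forest
    bootstrap=True,      # each tree trains on a bootstrap sample
    random_state=0)
forest.fit(X, y)
print(forest.predict([[40]]))  # majority vote across the trees
```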
Random Forest
Ensemble Learning
www.hopelearning.net Hope_Artificial_Intelligence:HAI 82
@hope_artificial_intelligence
#learnaiwithramisha
Random Forest
www.hopelearning.net Hope_Artificial_Intelligence:HAI 83
@hope_artificial_intelligence
#learnaiwithramisha
K-Nearest Neighbour and Naive Bayes
K-Nearest Neighbour
[Figure: k-nearest neighbours and the bias-variance tradeoff. Image source: https://medium.com/30-days-of-machine-learning/day-3-k-nearest-neighbors-and-bias-variance-tradeoff-75f84d515bdb]
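A minimal KNN sketch with hypothetical 2-D points: the class of a new point is decided by a majority vote among its k nearest neighbours.

```python
# K-nearest-neighbour sketch (hypothetical 2-D points).
from sklearn.neighbors import KNeighborsClassifier

X = [[1, 1], [1, 2], [2, 1],   # cluster of class "A"
     [6, 6], [6, 7], [7, 6]]   # cluster of class "B"
y = ["A", "A", "A", "B", "B", "B"]

knn = KNeighborsClassifier(n_neighbors=3)  # k = 3 neighbours vote
knn.fit(X, y)
print(knn.predict([[2, 2], [6, 5]]))       # ['A' 'B']
```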
Naive Bayes
Naive Bayes
Assumptions: the features are assumed to be independent of one another given the class, and each feature is assumed to contribute equally to the outcome.
Naive Bayes
The variable y is the class variable (stolen?), which represents whether the car is stolen given the conditions. The variable X represents the parameters/features.
Naive Bayes
Bayes' theorem: P(y | X) = P(X | y) · P(y) / P(X)
Naive Bayes
Since 0.144 > 0.048, given the features Red, SUV, and Domestic, the example is classified as 'NO': the car is not stolen.
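A hedged sketch of that comparison; the priors and conditional probabilities below are hypothetical placeholders (the slide's frequency table is not reproduced here), but the scoring logic is the standard naive Bayes product.

```python
# Naive Bayes scoring: prior times the product of per-feature conditionals.
from math import prod

# NOTE: hypothetical placeholder probabilities, not the slide's table.
priors = {"yes": 0.5, "no": 0.5}
cond = {
    "yes": {"Red": 0.6, "SUV": 0.2, "Domestic": 0.4},
    "no":  {"Red": 0.4, "SUV": 0.6, "Domestic": 0.6},
}

features = ["Red", "SUV", "Domestic"]
scores = {c: priors[c] * prod(cond[c][f] for f in features) for c in priors}
print(max(scores, key=scores.get), scores)  # larger score wins (cf. 0.144 > 0.048)
```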
Naive Bayes
The zero-frequency problem: if a feature value never occurs together with a class in the training data, its conditional probability is zero and wipes out the whole product; the usual remedy is Laplace (add-one) smoothing of the counts.
Linear Algorithms
• Simple Linear
• Multiple Linear
Validating Parameters for Supervised Learning
Confusion matrix (n = number of observations):

                      Predicted Positive                 Predicted Negative
Actual Positive       True Positive (TP)                 False Negative (FN) [Type II Error]
Actual Negative       False Positive (FP) [Type I Error] True Negative (TN)

True Positive (TP): the observation is positive and is predicted to be positive.
False Negative (FN): the observation is positive, but is predicted to be negative (Type II error).
False Positive (FP): the observation is negative, but is predicted to be positive (Type I error).
True Negative (TN): the observation is negative and is predicted to be negative.
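A small sketch tallying TP/FN/FP/TN from hypothetical actual vs. predicted labels:

```python
# Confusion-matrix counts from hypothetical actual/predicted labels (1 = positive).
actual    = [1, 1, 1, 0, 0, 1, 0, 0]
predicted = [1, 0, 1, 0, 1, 1, 0, 0]

tp = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # Type II error
fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # Type I error
tn = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))
print(tp, fn, fp, tn)  # 3 1 1 3
```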
Logistic Algorithm
Clustering
• K-Means
• Hierarchical
Agglomerative (a short sketch follows below):
• Compute the proximity matrix
• Let each data point be a cluster
• Repeat: merge the two closest clusters and update the proximity matrix
• Until only a single cluster remains
Divisive
• The opposite of agglomerative: start with all points in a single cluster and recursively split.
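A minimal agglomerative-clustering sketch with hypothetical points; scikit-learn's AgglomerativeClustering runs the merge loop internally.

```python
# Agglomerative (bottom-up) hierarchical clustering sketch (hypothetical data).
from sklearn.cluster import AgglomerativeClustering

X = [[1, 2], [1, 4], [1, 0],
     [10, 2], [10, 4], [10, 0]]

# Starts with each point as its own cluster and repeatedly merges the
# closest pair, stopping at n_clusters=2 instead of a single cluster.
agg = AgglomerativeClustering(n_clusters=2)
print(agg.fit_predict(X))  # cluster label for each point
```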