Linear Regression Notes
These notes cover the following topics:
* Regression Metrics
* Gradient Descent
* Ordinary Least Squares (the Normal Equation)
* Polynomial Regression
* Regularisation (Ridge, Lasso, Elastic Net)
1. Mean Absolute Error (MAE)
The Mean Absolute Error (MAE) is a metric used to measure the average absolute difference between the observed values and the values predicted by a model.
MAE = (1/n) Σ_{i=1}^{n} |y_i − ŷ_i|
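As a quick sanity check, the MAE formula above translates directly into NumPy; the function name `mae` is illustrative, not a library API:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: the average of |y_i - yhat_i|."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return np.mean(np.abs(y_true - y_pred))

# Absolute errors are 0.5, 0.0 and 1.0, so the mean is 0.5.
print(mae([3, 5, 7], [2.5, 5, 8]))  # 0.5
```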
Advantages
Disadvantages
Root Mean Squared Error (RMSE)
RMSE = √( (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² )
R-squared (R²)
R² = 1 − [ Σ_{i=1}^{n} (y_i − ŷ_i)² ] / [ Σ_{i=1}^{n} (y_i − ȳ)² ]
Here, y_i is the actual data point, ŷ_i is the predicted value, and ȳ is the mean of the observed values.
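Both formulas can be checked with a small NumPy sketch; `rmse` and `r2_score` are illustrative names, and the four-point dataset is made up for the example:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root Mean Squared Error: sqrt of the mean squared residual."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def r2_score(y_true, y_pred):
    """R^2 = 1 - SS_res / SS_tot."""
    ss_res = np.sum((y_true - y_pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)  # total sum of squares
    return 1 - ss_res / ss_tot

y_true = np.array([1.0, 2.0, 3.0, 4.0])
y_pred = np.array([1.1, 1.9, 3.2, 3.8])
# SS_res = 0.10 and SS_tot = 5.0, so R^2 = 1 - 0.02 = 0.98.
print(rmse(y_true, y_pred), r2_score(y_true, y_pred))
```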
5. Adjusted R-squared
The Adjusted R-squared is a modified version of the R-squared score that adjusts for the number of predictors in a regression model. It penalises the inclusion of irrelevant predictors that do not improve the model's explanatory power.
Adjusted R² = 1 − [ (1 − R²)(n − 1) ] / (n − k − 1)
Here, n is the number of data points, k is the number of independent variables (predictors), and R² is the coefficient of determination.
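A one-line helper makes the penalty on extra predictors visible; `adjusted_r2` is an illustrative name:

```python
def adjusted_r2(r2, n, k):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1) / (n - k - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

# With R^2 = 0.98 on n = 10 points and k = 2 predictors:
# 1 - 0.02 * 9 / 7 ≈ 0.9743, slightly below the raw R^2.
print(adjusted_r2(0.98, 10, 2))
```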
Advantages
Multiple Linear Regression
Equation
y = b_0 + b_1 x_1 + b_2 x_2 + … + b_n x_n + ε

Formula
y = Xb + ε
ε = y − ŷ
b = (XᵀX)⁻¹ Xᵀ y
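The closed-form solution b = (XᵀX)⁻¹Xᵀy can be verified on synthetic data; the simulated line below (b0 = 2, b1 = 3) is purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 2.0 + 3.0 * x + rng.normal(0, 0.1, size=50)  # true b0 = 2, b1 = 3, plus noise

# Design matrix with a leading column of ones so b[0] is the intercept b0.
X = np.column_stack([np.ones_like(x), x])

# Solving the normal equations directly is numerically safer than
# forming the explicit inverse (X^T X)^{-1}.
b = np.linalg.solve(X.T @ X, X.T @ y)
print(b)  # close to [2, 3]
```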
How it works?
What is a Gradient?
The gradient is the vector of partial derivatives of the cost function J with respect to the model parameters; gradient descent repeatedly steps in the opposite direction of the gradient:
1. Start from initial values of the parameters.
2. Update them until the gradient reaches 0 (i.e., until convergence).
b_new = b_old − α (∂J/∂b)
m_new = m_old − α (∂J/∂m)
Here, m is the slope, b is the intercept, and α is the learning rate; the model being fitted is y = mx + b.
For Multiple Linear Regression.

b_new = b_old − α (∂J/∂b),  where ∂J/∂b = −(2/n) Σ_{i=1}^{n} (y_i − ŷ_i)

m_new = m_old − α (∂J/∂m),  where ∂J/∂m = −(2/n) Σ_{i=1}^{n} x_i (y_i − ŷ_i)
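For the simple case y = mx + b with the MSE cost, the update rules above can be sketched as follows (the learning rate and epoch count are arbitrary illustrative choices):

```python
import numpy as np

def gradient_descent(x, y, alpha=0.01, epochs=2000):
    """Fit y = m*x + b by gradient descent on the MSE cost."""
    m, b = 0.0, 0.0
    n = len(x)
    for _ in range(epochs):
        y_hat = m * x + b
        dJ_dm = -(2 / n) * np.sum(x * (y - y_hat))  # partial dJ/dm
        dJ_db = -(2 / n) * np.sum(y - y_hat)        # partial dJ/db
        m -= alpha * dJ_dm                          # m_new = m_old - alpha * dJ/dm
        b -= alpha * dJ_db                          # b_new = b_old - alpha * dJ/db
    return m, b

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3.0 * x + 2.0            # noise-free line with m = 3, b = 2
m, b = gradient_descent(x, y)
print(m, b)                  # converges toward 3 and 2
```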
Polynomial Regression
Equation
y = b_0 + b_1 x + b_2 x² + … + ε
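Polynomial regression is still linear in the coefficients, so an ordinary least-squares fit on the expanded features [1, x, x²] recovers them; the noise-free quadratic below is illustrative:

```python
import numpy as np

x = np.linspace(-3, 3, 30)
y = 1.0 + 2.0 * x + 0.5 * x**2                   # b0 = 1, b1 = 2, b2 = 0.5

X = np.column_stack([np.ones_like(x), x, x**2])  # polynomial feature matrix
coef, *_ = np.linalg.lstsq(X, y, rcond=None)     # least-squares fit
print(coef)                                      # recovers [1, 2, 0.5]
```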
Bias
Variance
1. Ridge Regularisation (L2)
Formula
J(w) = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² + λ ||w||₂²
Formula (closed-form solution)
w = (XᵀX + λI)⁻¹ Xᵀ y
Formula (gradient-descent update, with learning rate η)
w ← w − η (XᵀX w − Xᵀ y + λ w)
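The closed-form ridge solution can be sketched in NumPy; `ridge_fit` and the synthetic data are illustrative (scikit-learn's `Ridge` is a production implementation):

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge: w = (X^T X + lam * I)^{-1} X^T y."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(0, 0.01, size=100)

w = ridge_fit(X, y, lam=0.1)
print(w)  # close to [1, -2, 0.5], shrunk very slightly toward zero
```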
2. Lasso Regularisation (L1)
Formula
J(w) = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² + λ ||w||₁
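Unlike ridge, the L1 penalty has no closed-form solution; a standard approach is coordinate descent with soft-thresholding. The sketch below is illustrative, assumes the (1/n)-scaled cost written above, and scikit-learn's `Lasso` is the practical choice:

```python
import numpy as np

def soft_threshold(z, t):
    """Shrink z toward zero by t: the proximal operator of t * |w|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_cd(X, y, lam=0.1, n_iter=200):
    """Lasso by coordinate descent on (1/n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ w + X[:, j] * w[j]   # residual with feature j excluded
            rho = X[:, j] @ r / n            # correlation of feature j with it
            z = X[:, j] @ X[:, j] / n
            w[j] = soft_threshold(rho, lam / 2) / z
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, 0.0, -3.0])           # the middle coefficient is truly zero
w = lasso_cd(X, y, lam=0.1)
print(w)  # the L1 penalty pushes w[1] to (or very near) zero
```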
3. Elastic Net
Formula
J(w) = (1/n) Σ_{i=1}^{n} (y_i − ŷ_i)² + a ||w||₂² + b ||w||₁
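The elastic-net cost combines both penalties; one simple solver is proximal gradient descent (ISTA), sketched below with illustrative names and data (scikit-learn's `ElasticNet` is the practical implementation):

```python
import numpy as np

def elastic_net_pg(X, y, a=0.01, b=0.1, n_iter=1000):
    """Minimise (1/n)||y - Xw||^2 + a*||w||_2^2 + b*||w||_1
    by proximal gradient descent (ISTA)."""
    n, p = X.shape
    # Step size from the Lipschitz constant of the smooth (squared + L2) part.
    lr = 1.0 / (2 * np.linalg.norm(X, 2) ** 2 / n + 2 * a)
    w = np.zeros(p)
    for _ in range(n_iter):
        grad = -(2 / n) * X.T @ (y - X @ w) + 2 * a * w       # gradient of smooth part
        z = w - lr * grad                                     # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lr * b, 0.0)  # soft-threshold (L1 prox)
    return w

rng = np.random.default_rng(3)
X = rng.normal(size=(100, 3))
y = X @ np.array([2.0, 0.0, -3.0])
w = elastic_net_pg(X, y)
print(w)  # near [2, 0, -3]; L2 shrinks all weights, L1 pushes w[1] to (or near) zero
```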