FACULTY OF ENGINEERING AND TECHNOLOGY
BACHELOR OF TECHNOLOGY
Pattern Recognition Laboratory
(203105479)
7th - SEMESTER
Computer Science & Engineering Department
NAME : Kasula PavanKumar
ENROLLMENT NUMBER : 210303124588
LABORATORY MANUAL
CERTIFICATE
This is to certify that Mr. Kasula PavanKumar with
Enrollment Number 210303124588 has successfully completed his
laboratory experiments in the Pattern Recognition Laboratory (203105479)
from the Department of Computer Science and Engineering (AI) during
the year 2024-2025.
Date of Submission          Staff in charge          HOD
INDEX
No. | Practical Name | Page No. | Performance Date | Assessment Date | Marks | Signature
1 | Implementation of Gradient Descent. | 05 | 19/06/2024 | | |
2 | Implementation of Linear Regression using Gradient Descent. | 10 | 26/06/2024 | | |
3 | Comparison of Classification Accuracy of SVM for given dataset | | | | |
4 | Generate your own feature set by combining existing set of features, or defining new ones. Feature Representation | | | | |
5 | Generate samples of a normal distribution with specific parameters with respect to Mean and Covariance | | | | |
6 | Implement Linear Perceptron Learning algorithm | | | | |
7 | Build IRIS flower classification in Python using pattern recognition models | | | | |
PRACTICAL – 01
AIM : Implementation of Gradient Descent
TOOLS/SOFTWARE USED: Google Colab
THEORY:
Gradient Descent
Gradient Descent is a fundamental optimization algorithm used to minimize a function by iteratively
moving in the direction of the steepest descent (negative gradient) of the function. It's widely used in
machine learning for optimizing various models, such as linear regression, neural networks, and more
complex algorithms.
Key Concepts
1. Objective Function:
o The function f(θ) that we aim to minimize or maximize. In the context of Gradient
Descent, we focus on minimization.
2. Gradient:
o The gradient of the objective function, denoted as 𝛻f(θ), is a vector of partial derivatives
with respect to the parameters. The gradient points in the direction of the steepest ascent of the
function; in gradient descent, we move in the opposite direction of the gradient to reach the
minimum.
3. Learning Rate:
o The learning rate α determines the size of the steps taken along the negative
gradient. It's a crucial hyperparameter that affects the convergence of Gradient Descent: too
small a learning rate may result in slow convergence, while too large a learning rate may
cause oscillation or divergence (see the short demonstration after this list).
4. Convergence:
o Gradient Descent converges when the updates to θ become very small, indicating that
further iterations do not significantly change θ or f(θ).
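For illustration, the short sketch below applies the update rule to the toy function f(θ) = θ², whose gradient is 2θ (the quadratic, the starting point, and the step count are assumptions chosen only to make the learning-rate behaviour visible):

# Toy demonstration of learning-rate choice on f(theta) = theta**2 (gradient 2*theta).
# The quadratic, the starting point, and the step count are illustrative assumptions.
def run_gd(alpha, theta=5.0, steps=10):
    for _ in range(steps):
        theta = theta - alpha * (2 * theta)   # gradient descent update
    return theta

for alpha in (0.01, 0.1, 1.1):
    print(f"alpha={alpha}: theta after 10 steps = {run_gd(alpha):.4f}")
# alpha=0.01 creeps toward 0 (slow convergence), alpha=0.1 converges quickly,
# and alpha=1.1 overshoots with growing magnitude (divergence), since each
# step multiplies theta by (1 - 2*alpha).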
Types of Gradient Descent
1. Batch Gradient Descent:
o In batch Gradient Descent, the gradient is computed using the entire dataset. It guarantees
convergence to the global minimum for convex functions but can be slow for large datasets
due to the computational cost of computing gradients over the entire dataset.
2. Stochastic Gradient Descent (SGD):
o SGD updates the parameters θ using gradients computed on a single training example at a
time. It's faster than batch Gradient Descent but may exhibit more variance in the
optimization path due to the noisy estimates of the gradients.
3. Mini-batch Gradient Descent:
o Mini-batch Gradient Descent strikes a balance between batch and stochastic Gradient
Descent by computing gradients on small random subsets of the training data. It combines
the efficiency of SGD with the stability of batch Gradient Descent.
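A minimal sketch contrasting the three variants, assuming a small synthetic least-squares problem (the data, the random seed, and the batch size of 16 are illustrative choices, not part of the lab dataset):

import numpy as np

rng = np.random.default_rng(0)
X = np.c_[np.ones(100), rng.random(100)]       # 100 samples plus a bias column
y = X @ np.array([4.0, 3.0]) + rng.normal(size=100)
theta = np.zeros(2)
alpha = 0.1

def grad(Xb, yb, th):                           # gradient of squared error over a batch
    return Xb.T @ (Xb @ th - yb) / len(yb)

# 1. Batch: one update per pass, using the entire dataset
theta = theta - alpha * grad(X, y, theta)

# 2. Stochastic: one update per individual training example
for i in rng.permutation(100):
    theta = theta - alpha * grad(X[i:i+1], y[i:i+1], theta)

# 3. Mini-batch: one update per small random subset of the data
perm = rng.permutation(100)
for start in range(0, 100, 16):
    idx = perm[start:start + 16]
    theta = theta - alpha * grad(X[idx], y[idx], theta)

print(theta)   # after these passes, theta heads toward the generating values [4, 3]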
ALGORITHM STEPS :
1. Initialization: Start with an initial guess for the parameters, 𝜃0.
2. Compute Gradient: Calculate the gradient of the objective function at the current parameters, 𝛻𝑓(𝜃).
3. Update Parameters: Adjust the parameters in the direction of the negative gradient: 𝜃 = 𝜃 − 𝛼𝛻𝑓(𝜃)
4. Repeat: Iterate the process for a specified number of times or until convergence is achieved.
Mathematical Formulation :
Given an objective function 𝑓(𝜃), where 𝜃 is a vector of parameters, the gradient descent update rule is:
θ(t+1)=θ(t)−α⋅∇f(θ(t))
where:
• θ(t) represents the parameter values at iteration t.
• α is the learning rate.
• 𝛻f(θ(t)) is the gradient of f at θ(t).
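As a one-step numeric example (illustrative values), take f(θ) = θ² with 𝛻f(θ) = 2θ, θ(0) = 5 and α = 0.1: then θ(1) = 5 − 0.1 ⋅ (2 ⋅ 5) = 4, and repeating the update moves θ steadily toward the minimizer θ = 0.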
PROCEDURE:
• Function Definition:
o f: The objective function to minimize.
o grad_f: The gradient of the objective function.
o initial_params: Initial values of the parameters.
o learning_rate: The learning rate for gradient descent.
o n_iterations: Number of iterations to run the gradient descent.
• Gradient Descent Loop:
o Initialize parameters with initial_params.
o Iterate n_iterations times, updating the parameters using the gradient.
o Store the history of parameter values for analysis (optional).
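A minimal sketch of this interface, written with the exact parameter names listed above (the quadratic test function in the usage example is an assumption chosen for illustration):

import numpy as np

def gradient_descent(f, grad_f, initial_params, learning_rate, n_iterations):
    # Generic gradient descent loop following the procedure above
    params = np.asarray(initial_params, dtype=float)
    history = [params.copy()]                    # optional history of parameters
    for _ in range(n_iterations):
        params = params - learning_rate * grad_f(params)
        history.append(params.copy())
    return params, history

# Usage on an assumed test function f(theta) = ||theta - 3||^2
f = lambda p: np.sum((p - 3.0) ** 2)
grad_f = lambda p: 2.0 * (p - 3.0)
best, hist = gradient_descent(f, grad_f, initial_params=[0.0, 0.0],
                              learning_rate=0.1, n_iterations=100)
print(best, "cost:", f(best))   # best approaches [3. 3.], cost approaches 0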
CODE :
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.preprocessing import StandardScaler

# Load your dataset
data = pd.read_csv("HousingData.csv")

# Drop rows with missing values, if any, so the matrix operations below do not fail
data = data.dropna()

# Separate features (X) and target variable (y)
# Replace 'MEDV' with the actual name of your target column
X = data.drop('MEDV', axis=1)
y = data['MEDV']

# Standardize the features
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)

# Add intercept term to X (bias term)
X_scaled = np.c_[np.ones((X_scaled.shape[0], 1)), X_scaled]

def mean_squared_error(X, y, theta):
    m = len(y)
    predictions = X.dot(theta)
    return np.sum((predictions - y) ** 2) / (2 * m)

def gradient(X, y, theta):
    m = len(y)
    gradients = X.T.dot(X.dot(theta) - y) / m
    return gradients
def gradient_descent(X, y, theta, learning_rate, iterations):
    history = []
    for i in range(iterations):
        grad = gradient(X, y, theta)
        theta = theta - learning_rate * grad
        cost = mean_squared_error(X, y, theta)
        history.append(cost)
        if i % 100 == 0:
            print(f"Iteration {i}: MSE = {cost}")
    return theta, history

# Initialize parameters
learning_rate = 0.1
iterations = 1000
initial_theta = np.random.randn(X_scaled.shape[1])

# Run Gradient Descent
final_theta, history = gradient_descent(X_scaled, y, initial_theta, learning_rate,
                                        iterations)

# Quick look at the convergence history
plt.plot(history)
plt.xlabel('Iterations')
plt.ylabel('Cost')
plt.show()
plt.figure(figsize=(10, 6))
plt.plot(range(iterations), history, color='blue')
plt.title('Gradient Descent Optimization')
plt.xlabel('Iterations')
plt.ylabel('Mean Squared Error (MSE)')
plt.grid(True)
plt.show()
print(f"Final Theta: {final_theta}")
OUTPUT :
CONCLUSION :
Gradient Descent is a powerful optimization algorithm widely used in machine learning for minimizing
functions iteratively. By computing gradients and adjusting parameters in the direction of steepest descent,
it efficiently converges towards local or global minima. The algorithm's effectiveness hinges on
appropriately tuning the learning rate and ensuring convergence criteria are met.
PRACTICAL – 02
AIM : Implementation of Linear Regression using Gradient Descent.
TOOLS/SOFTWARE USED:
Google Colab
Python
NumPy
Matplotlib
THEORY
Linear regression is a supervised learning algorithm used for predicting a target variable based on one or
more input variables. The goal is to find the best-fit line that minimizes the difference between the
predicted values and the actual values.
Gradient descent is an optimization algorithm used to minimize the cost function by iteratively updating the
parameters in the direction of the negative gradient.
Key Concepts
Linear Regression: A method to model the relationship between a dependent variable and one or
more independent variables.
Gradient Descent: An optimization algorithm used to find the minimum of a function by iteratively
moving towards the steepest descent.
ALGORITHM STEPS
1. Import Libraries: Import necessary libraries for numerical operations and plotting.
2. Generate Example Data: Create synthetic data points for the input variable X and target variable y
with added Gaussian noise.
3. Prepare the Data: Add an intercept term to the data matrix X to account for the intercept
parameter.
4. Initialize Parameters: Initialize the parameters (theta values) randomly.
5. Define Gradient Descent Parameters: Set the learning rate and the number of iterations for
gradient descent.
6. Implement Gradient Descent: Perform the gradient descent algorithm to minimize the cost
function.
7. Output the Results: Print the optimized values of the parameters.
8. Visualize the Results: Plot the original data points and the fitted regression line.
Mathematical Formulation :
The linear regression model is represented as:
y = θ0 + θ1x
where θ0 is the intercept (bias) and θ1 is the slope of the fitted line.
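The parameters are obtained by minimizing the mean squared error cost, written here without the conventional 1/2 factor so that its gradient matches the 2/m ⋅ Xᵀ(Xθ − y) expression used in the code below:
J(θ) = (1/m) Σ (θ0 + θ1x(i) − y(i))²   (sum over the m training examples)
θ ← θ − α ⋅ 𝛻J(θ) = θ − α ⋅ (2/m) ⋅ Xᵀ(Xθ − y)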
PROCEDURE :
import numpy as np
import matplotlib.pyplot as plt
# Set the random seed for reproducibility
np.random.seed(42)
# Generate 100 random data points
X = 2 * np.random.rand(100, 1)
y = 4 + 3 * X + np.random.randn(100, 1)
# Plot the data
plt.scatter(X, y)
plt.xlabel("x")
plt.ylabel("y")
plt.title("Generated Data")
plt.show()
# Add the intercept term to each instance
X_b = np.c_[np.ones((100, 1)), X]
# Random initialization of theta
theta = np.random.randn(2, 1)
# Gradient Descent parameters
learning_rate = 0.1
n_iterations = 1000
m = len(X_b)
# Gradient Descent loop
for iteration in range(n_iterations):
    gradients = 2/m * X_b.T.dot(X_b.dot(theta) - y)
    theta = theta - learning_rate * gradients

print("Theta found by Gradient Descent:", theta)
# Plot the data and the regression line
plt.plot(X, y, "b.")
plt.plot(X, X_b.dot(theta), "r-")
plt.xlabel("x")
plt.ylabel("y")
plt.title("Linear Regression using Gradient Descent")
plt.show()
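As a sanity check (an illustrative addition, not part of the lab sheet), the parameters found by gradient descent can be compared with NumPy's closed-form least-squares solution; both should lie close to the generating values θ0 = 4 and θ1 = 3:

# Compare against the closed-form least-squares solution (illustrative sanity check)
theta_lstsq, *_ = np.linalg.lstsq(X_b, y, rcond=None)
print("Theta from np.linalg.lstsq:", theta_lstsq.ravel())
# With learning_rate = 0.1 and n_iterations = 1000, the gradient-descent theta
# should agree with this solution to several decimal places.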
CONCLUSION :
In this practical implementation, we successfully applied linear regression using gradient descent to fit a
model to synthetic data. The gradient descent algorithm iteratively minimized the cost function, resulting in
optimized parameters for the linear regression model. The visualization confirmed that the fitted line
closely matches the underlying pattern in the data, demonstrating the effectiveness of linear regression
and gradient descent for pattern recognition.