EX NO: 1 IMPLEMENT A FEED-FORWARD NETWORK
Date:
Reading Material:
A Feed-Forward Neural Network is one of the simplest types of artificial neural networks. It
consists of:
Input Layer: Accepts input features.
Hidden Layer(s): Performs computations.
Output Layer: Produces the final prediction/output.
Feed-forward means that data moves in only one direction from input to output without loops.
Basic Architecture:
Each neuron receives inputs, multiplies them by weights, adds a bias, and passes the result
through an activation function (like ReLU, Sigmoid, etc.).
Mathematically:
z=W×x+b
a=Activation(z)
Where:
W = weight matrix
x = input vector
b = bias vector
a = activated output
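The two formulas above can be sketched directly in NumPy; the layer sizes and weight values below are illustrative, not taken from the lab program:

```python
import numpy as np

def relu(z):
    # ReLU activation: negatives become zero, positives pass through
    return np.maximum(0, z)

def forward(x, W, b):
    # z = W x + b, then a = Activation(z)
    z = W @ x + b
    return relu(z)

# Illustrative layer: 3 input features -> 2 hidden units
W = np.array([[0.5, -0.2, 0.1],
              [0.3, 0.8, -0.5]])
b = np.array([0.1, -0.1])
x = np.array([1.0, 2.0, 3.0])
a = forward(x, W, b)
print(a)  # -> [0.5 0.3]
```

Stacking several such layers, each feeding its output `a` into the next, gives the feed-forward architecture described above.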
Activation Functions:
ReLU (Rectified Linear Unit):
Outputs the input directly if it’s positive; otherwise, it outputs zero.
It helps the network learn faster and reduces the chance of vanishing gradients.
Sigmoid Function:
Maps input values between 0 and 1, making it useful for binary classification.
It can suffer from vanishing gradients for very high or low input values.
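A minimal sketch of both activation functions, with illustrative inputs:

```python
import numpy as np

def relu(z):
    # Positive inputs pass through; negatives become zero
    return np.maximum(0, z)

def sigmoid(z):
    # Squashes any real input into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

z = np.array([-2.0, 0.0, 2.0])
print(relu(z))     # -> [0. 0. 2.]
print(sigmoid(z))  # values strictly between 0 and 1; sigmoid(0) = 0.5
```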
Loss Functions:
Mean Squared Error (MSE):
Measures the average of the squares of the errors between predicted and true values.
Commonly used for regression tasks where output is continuous.
Cross-Entropy Loss:
Measures the difference between two probability distributions (predicted vs true).
Mainly used for classification tasks where output is categorical.
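Both loss functions can be written in a few lines of NumPy; the labels and predictions below are invented for illustration, and cross-entropy is shown in its two-class (binary) form:

```python
import numpy as np

def mse(y_true, y_pred):
    # Mean of squared errors -- the usual regression loss
    return np.mean((y_true - y_pred) ** 2)

def binary_cross_entropy(y_true, p_pred, eps=1e-12):
    # Compares true labels (0/1) against predicted probabilities
    p = np.clip(p_pred, eps, 1 - eps)  # avoid log(0)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

y_true = np.array([1.0, 0.0, 1.0])
y_pred = np.array([0.9, 0.2, 0.8])
print(mse(y_true, y_pred))                   # -> 0.03
print(binary_cross_entropy(y_true, y_pred))  # small, since predictions are close
```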
Optimization Methods:
Gradient Descent:
Iteratively updates weights by moving in the direction that reduces the loss.
Simple and effective but can be slow if the learning rate is not tuned properly.
Adam Optimizer:
Combines momentum and adaptive learning rates to improve training speed and
performance.
Often preferred because it requires less fine-tuning of learning rates.
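A rough sketch of the gradient-descent update for a single-parameter model y = w × x under MSE; the data point and learning rate are illustrative:

```python
# One gradient-descent step: move the weight against the loss gradient.
def gd_step(w, x, y, lr):
    y_pred = w * x
    grad = 2 * (y_pred - y) * x   # d(MSE)/dw for one sample
    return w - lr * grad          # update rule: w := w - lr * grad

w = 0.0
for _ in range(50):
    w = gd_step(w, x=2.0, y=4.0, lr=0.1)
print(w)  # converges toward 2.0, since 2.0 * 2.0 = 4.0
```

Adam follows the same update idea but additionally tracks running averages of the gradient (momentum) and of its square (adaptive per-parameter step sizes).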
EX NO: 1 IMPLEMENT A FEED-FORWARD NETWORK
Date:
AIM
ALGORITHM
PROGRAM
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.utils import plot_model
df = pd.read_csv("/content/[Link]")
print(df.isnull().sum())
def preprocess_data(df):
    # Fill missing values with the column means (numeric columns only)
    df = df.fillna(df.mean(numeric_only=True))
    return df
df = preprocess_data(df)
X = df.drop(columns=["Monthly_Income"])
y = df["Monthly_Income"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
def build_model(input_shape):
    model = Sequential([
        Dense(64, activation='relu', input_shape=(input_shape,)),
        Dense(32, activation='relu'),
        Dense(1)
    ])
    model.compile(optimizer='adam', loss='mse', metrics=['mae'])
    return model
model = build_model(X_train.shape[1])
history = model.fit(X_train, y_train, epochs=50, batch_size=16, validation_split=0.1, verbose=1)
y_pred = model.predict(X_test).flatten()
mae = mean_absolute_error(y_test, y_pred)
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
mape = np.mean(np.abs((y_test - y_pred) / y_test)) * 100
mre = np.mean(np.abs(y_test - y_pred) / np.maximum(np.abs(y_test), np.abs(y_pred)))
r2 = r2_score(y_test, y_pred)
print(f"MAE: {mae:.2f}")
print(f"MSE: {mse:.2f}")
print(f"RMSE: {rmse:.2f}")
print(f"MAPE: {mape:.2f}%")
print(f"MRE: {mre:.2f}")
print(f"R² Score: {r2:.2f}")
OUTPUT
Evaluation by faculty
Criteria Marks
Preparation /20
Program /25
Output and Result /20
Viva /10
Total /75
Faculty Signature
with Date
RESULT
Thus, a Feed-Forward Network is implemented successfully.
EX NO: 2 IMPLEMENT AN IMAGE CLASSIFIER USING CNN
Date:
Reading Material:
A Convolutional Neural Network (CNN) is a type of deep learning model designed to process data
with a grid-like structure (e.g., images). CNNs are highly effective at automatically detecting
patterns such as edges, textures, and shapes in images.
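The pattern-detection idea can be sketched as a hand-written convolution: a small vertical-edge filter slides over a tiny made-up image and responds where brightness changes. (In a CNN, such filters are learned during training rather than hand-coded; the image and filter values here are purely illustrative.)

```python
import numpy as np

def conv2d_valid(image, kernel):
    # Slide the kernel over the image (no padding, stride 1)
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Tiny "image": dark left half, bright right half
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 0, 1, 1]], dtype=float)
# Vertical-edge filter: responds where brightness changes left-to-right
kernel = np.array([[-1, 1],
                   [-1, 1]], dtype=float)
print(conv2d_valid(img, kernel))  # strong response (2.0) only in the edge column
```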
EX NO: 2 IMPLEMENT AN IMAGE CLASSIFIER USING CNN
Date:
AIM
ALGORITHM
PROGRAM
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt
import numpy as np
(x_train, y_train), (x_test, y_test) = datasets.cifar10.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
class_names = ['Airplane', 'Car', 'Bird', 'Cat', 'Deer',
               'Dog', 'Frog', 'Horse', 'Ship', 'Truck']
plt.figure(figsize=(10, 2))
for i in range(10):
    plt.subplot(1, 10, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(x_train[i])
    plt.xlabel(class_names[int(y_train[i])])
plt.show()
model = models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),  # flatten feature maps before the dense layers
    layers.Dense(64, activation='relu'),
    layers.Dense(10)
])
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
history = model.fit(x_train, y_train, epochs=5,
                    validation_data=(x_test, y_test))
plt.plot(history.history['accuracy'], label='Train Acc')
plt.plot(history.history['val_accuracy'], label='Val Acc')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()
plt.grid(True)
plt.title('Training & Validation Accuracy')
plt.show()
test_loss, test_acc = model.evaluate(x_test, y_test, verbose=2)
print(f"\nTest Accuracy: {test_acc:.4f}")
OUTPUT
Evaluation by faculty
Criteria Marks
Preparation /20
Program /25
Output and Result /20
Viva /10
Total /75
Faculty Signature
with Date
RESULT
Thus, an Image Classifier using CNN is implemented successfully.
EX NO: 3 IMPLEMENT A SIMPLE LONG SHORT-TERM MEMORY NETWORK
Date:
Reading Material:
An LSTM is a special type of Recurrent Neural Network (RNN) capable of learning long-term
dependencies. Unlike simple RNNs, LSTMs are designed to remember information for long periods
by using gates to control the flow of information.
Forget Gate: Decides what information to discard from the previous cell state.
f_t = σ(W_f × [h_(t-1), x_t] + b_f)
where:
W_f represents the weight matrix associated with the forget gate.
[h_(t-1), x_t] denotes the concatenation of the previous hidden state and the current input.
b_f is the bias associated with the forget gate.
σ is the sigmoid activation function.
Input Gate: Decides which new information to store in the cell state.
i_t = σ(W_i × [h_(t-1), x_t] + b_i)
C̃_t = tanh(W_c × [h_(t-1), x_t] + b_c)
C_t = f_t ⊙ C_(t-1) + i_t ⊙ C̃_t
where:
⊙ denotes element-wise multiplication.
tanh is the tanh activation function.
Output Gate: Decides what part of the cell state to output; it extracts the useful information from the current cell state to present as the hidden state.
o_t = σ(W_o × [h_(t-1), x_t] + b_o)
h_t = o_t ⊙ tanh(C_t)
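One LSTM time step following the gate equations above can be sketched in NumPy; the hidden and input sizes, and the random weights, are purely illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    # W and b hold one weight matrix / bias per gate: f, i, candidate (c), o
    concat = np.concatenate([h_prev, x_t])       # [h_(t-1), x_t]
    f_t = sigmoid(W['f'] @ concat + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ concat + b['i'])      # input gate
    c_tilde = np.tanh(W['c'] @ concat + b['c'])  # candidate cell state
    c_t = f_t * c_prev + i_t * c_tilde           # element-wise (⊙) update
    o_t = sigmoid(W['o'] @ concat + b['o'])      # output gate
    h_t = o_t * np.tanh(c_t)                     # new hidden state
    return h_t, c_t

rng = np.random.default_rng(0)
hidden, inputs = 4, 3
W = {k: rng.normal(size=(hidden, hidden + inputs)) for k in 'fico'}
b = {k: np.zeros(hidden) for k in 'fico'}
h, c = np.zeros(hidden), np.zeros(hidden)
h, c = lstm_step(rng.normal(size=inputs), h, c, W, b)
print(h.shape, c.shape)  # -> (4,) (4,)
```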
EX NO: 3 IMPLEMENT A SIMPLE LONG SHORT-TERM MEMORY NETWORK
Date:
AIM
ALGORITHM
PROGRAM
import pandas as pd
import numpy as np
import tensorflow as tf
from sklearn.model_selection import train_test_split
import visualkeras
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense
import warnings
warnings.filterwarnings("ignore", category=UserWarning)
train = pd.read_csv('/content/[Link]', parse_dates=['date'])
test = pd.read_csv('/content/[Link]', parse_dates=['date'])
train_agg = train.groupby(['store', 'item', 'date'])['sales'].mean().reset_index()
subset = train_agg[(train_agg['store'] == 1) & (train_agg['item'] == 1)].sort_values('date')
def series_to_supervised(data, n_in=1, n_out=1):
    # Turn a 1-D series into (input window, output window) pairs
    X, y = [], []
    for i in range(len(data) - n_in - n_out + 1):
        X.append(data[i:(i + n_in)])
        y.append(data[(i + n_in):(i + n_in + n_out)])
    return np.array(X), np.array(y)
n_steps_in, n_steps_out = 7, 1
sales_data = subset['sales'].values
X, y = series_to_supervised(sales_data, n_steps_in, n_steps_out)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)
X_train = X_train.reshape((X_train.shape[0], X_train.shape[1], 1))
X_val = X_val.reshape((X_val.shape[0], X_val.shape[1], 1))
model = Sequential([
    LSTM(50, activation='relu', input_shape=(n_steps_in, 1)),
    Dense(1)
])
model.compile(optimizer='adam', loss='mae')
history = model.fit(X_train, y_train, validation_data=(X_val, y_val), epochs=10, verbose=1)
visualkeras.layered_view(model).show()
from tensorflow.keras.utils import plot_model
plot_model(model, to_file='model_architecture.png', show_shapes=True, show_layer_names=True)
from IPython.display import Image
Image(filename='model_architecture.png')
OUTPUT
Evaluation by faculty
Criteria Marks
Preparation /20
Program /25
Output and Result /20
Viva /10
Total /75
Faculty Signature
with Date
RESULT
Thus, a simple long short-term memory network is implemented successfully.
EX NO: 4 IMPLEMENT OPINION MINING USING A RECURRENT NEURAL NETWORK
Date:
Reading Material:
Opinion Mining or Sentiment Analysis refers to extracting subjective information from text. It
classifies opinions, attitudes, and emotions expressed in a piece of text.
Sentiment Classes:
Positive Sentiment: The opinion is favorable.
Negative Sentiment: The opinion is unfavorable.
Neutral Sentiment: The opinion is neither positive nor negative.
Recurrent Neural Networks (RNNs) for Sentiment Analysis:
RNNs are neural networks that are designed for sequential data, making them ideal for text
where the sequence of words matters.
Unlike traditional neural networks, RNNs have memory, storing previous inputs in their
hidden state, which is crucial for understanding the context in sequences like sentences.
RNN Model for Opinion Mining:
The basic steps in an RNN model for sentiment analysis are:
Tokenization:
Convert sentences into tokens (words) and encode them (e.g., one-hot encoding or word
embeddings like Word2Vec, GloVe).
RNN Layer:
The RNN processes the sequence of words in the input text one by one, maintaining a hidden
state that captures the temporal dependencies.
Output Layer:
Use a fully connected layer to map the RNN output to sentiment classes (positive, negative,
neutral).
Activation Function:
Use a softmax activation for multi-class sentiment classification.
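The tokenization and padding steps above can be sketched in plain Python; the toy vocabulary and sentences are invented for illustration:

```python
# Build a toy vocabulary and encode sentences as fixed-length index sequences.
# Index 0 is reserved for padding, as is common with embedding layers.
sentences = ["the movie was great", "the plot was dull"]
vocab = {}
for sent in sentences:
    for word in sent.split():
        if word not in vocab:
            vocab[word] = len(vocab) + 1  # reserve 0 for padding

def encode(sent, maxlen):
    # Map words to indices (unknown words -> 0), then pre-pad with zeros
    ids = [vocab.get(w, 0) for w in sent.split()]
    return [0] * (maxlen - len(ids)) + ids[:maxlen]

print(vocab)
print(encode("the movie was dull", maxlen=6))  # -> [0, 0, 1, 2, 3, 6]
```

In practice this is what `pad_sequences` and an `Embedding` layer's integer inputs do; the padded index sequences then feed the RNN layer.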
EX NO: 4 IMPLEMENT OPINION MINING USING A RECURRENT NEURAL NETWORK
Date:
AIM
To implement opinion mining using a Recurrent Neural Network.
ALGORITHM
Step 6: Split the data into training, validation, and test sets
Split the padded sequences into training, validation, and test sets.
PROGRAM
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense
from tensorflow.keras.preprocessing import sequence
from tensorflow.keras.datasets import imdb
num_words = 10000  # Limit the vocabulary size to 10,000 words
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_words)
word_index = imdb.get_word_index()
reverse_word_index = {value: key for key, value in word_index.items()}
# Offset of 3 because the indices 0, 1, 2 are reserved
first_review = ' '.join([reverse_word_index.get(i - 3, '?') for i in x_train[0]])
print(f"First review: {first_review}")
print(f"Label: {y_train[0]}")
print(f"Minimum length of reviews: {min(len(x) for x in x_train)}")
print(f"Maximum length of reviews: {max(len(x) for x in x_train)}")
max_review_length = 500
x_train = sequence.pad_sequences(x_train, maxlen=max_review_length)
x_test = sequence.pad_sequences(x_test, maxlen=max_review_length)
validation_samples = 10000
x_val = x_train[:validation_samples]
y_val = y_train[:validation_samples]
x_train = x_train[validation_samples:]
y_train = y_train[validation_samples:]
model = Sequential()
model.add(Embedding(input_dim=num_words, output_dim=32, input_length=max_review_length))
model.add(SimpleRNN(units=32, activation='tanh'))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, batch_size=64, validation_data=(x_val, y_val))
score = model.evaluate(x_test, y_test)
print(f"Test loss: {score[0]}")
print(f"Test accuracy: {score[1]}")
OUTPUT
Evaluation by faculty
Criteria Marks
Preparation /20
Program /25
Output and Result /20
Viva /10
Total /75
Faculty Signature
with Date
RESULT
Thus, opinion mining using a Recurrent Neural Network is implemented successfully.
EX NO: 5 IMPLEMENT AN AUTOENCODER
Date:
Reading Material:
An Autoencoder is a type of artificial neural network used to learn efficient representations of input
data, typically for the purposes of dimensionality reduction or feature learning. The goal is to
reconstruct the input as accurately as possible after it has been compressed into a lower-
dimensional form.
Structure of an Autoencoder:
Encoder:
Compresses the input into a latent-space representation (a smaller size).
Latent Space (Code/Bottleneck):
The compressed, encoded knowledge.
Decoder:
Reconstructs the input back from the encoded representation.
Working Principle:
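The compress-then-reconstruct idea can be sketched with a tiny untrained linear encoder and decoder in NumPy, purely to show how the representation shrinks and is expanded back (the dimensions and random weights are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)
input_dim, latent_dim = 784, 32   # e.g. a flattened 28x28 image

# Encoder and decoder as single linear maps (untrained, for shape illustration)
W_enc = rng.normal(scale=0.01, size=(latent_dim, input_dim))
W_dec = rng.normal(scale=0.01, size=(input_dim, latent_dim))

x = rng.random(input_dim)         # stand-in for an input image
code = W_enc @ x                  # compressed latent representation
x_hat = W_dec @ code              # reconstruction attempt
print(x.shape, code.shape, x_hat.shape)  # -> (784,) (32,) (784,)
```

Training would adjust the encoder and decoder weights so that the reconstruction x_hat matches x as closely as possible, e.g. by minimizing MSE.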
EX NO: 5 IMPLEMENT AN AUTOENCODER
Date:
AIM
To implement an autoencoder for the MNIST dataset.
ALGORITHM
PROGRAM
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
x_train = x_train.astype('float32') / 255.0
x_test = x_test.astype('float32') / 255.0
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
print("Training set shape:", x_train.shape)
print("Testing set shape:", x_test.shape)
class Autoencoder(keras.Model):
    def __init__(self):
        super(Autoencoder, self).__init__()
        self.encoder = keras.Sequential([
            layers.Input(shape=(28, 28, 1)),
            layers.Conv2D(16, (3, 3), activation='relu', padding='same', strides=2),
            layers.Conv2D(8, (3, 3), activation='relu', padding='same', strides=2)
        ])
        self.decoder = keras.Sequential([
            layers.Conv2DTranspose(8, (3, 3), strides=2, activation='relu', padding='same'),
            layers.Conv2DTranspose(16, (3, 3), strides=2, activation='relu', padding='same'),
            layers.Conv2D(1, (3, 3), activation='sigmoid', padding='same')
        ])
    def call(self, x):
        encoded = self.encoder(x)
        decoded = self.decoder(encoded)
        return decoded
autoencoder = Autoencoder()
autoencoder.compile(optimizer='adam', loss='mse')
history = autoencoder.fit(x_train, x_train,
                          epochs=1,
                          batch_size=128,
                          validation_data=(x_test, x_test))
reconstructed = autoencoder.predict(x_test)
n = 10
plt.figure(figsize=(20, 4))
for i in range(n):
    ax = plt.subplot(2, n, i + 1)
    plt.imshow(x_test[i].reshape(28, 28), cmap="gray")
    plt.title("Original")
    plt.axis("off")
    ax = plt.subplot(2, n, i + 1 + n)
    plt.imshow(reconstructed[i].reshape(28, 28), cmap="gray")
    plt.title("Reconstructed")
    plt.axis("off")
plt.show()
OUTPUT
Evaluation by faculty
Criteria Marks
Preparation /20
Program /25
Output and Result /20
Viva /10
Total /75
Faculty Signature
with Date
RESULT
Thus, an autoencoder is implemented successfully.
EX NO: 6 IMPLEMENT OBJECT DETECTION USING CNN
Date:
Reading Material:
EX NO: 6 IMPLEMENT OBJECT DETECTION USING CNN
Date:
AIM
To implement object detection using a CNN.
ALGORITHM
PROGRAM
import torch
import torchvision
from torchvision import transforms
import cv2
import numpy as np
import random
import os
from matplotlib import pyplot as plt
COCO_INSTANCE_CATEGORY_NAMES = [
    '__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
    'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'stop sign',
    'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
    'elephant', 'bear', 'zebra', 'giraffe', 'backpack', 'umbrella', 'handbag',
    'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball', 'kite',
    'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
    'bottle', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl', 'banana',
    'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
    'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'dining table',
    'toilet', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
    'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'book', 'clock',
    'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
COLORS = np.random.uniform(0, 255, size=(len(COCO_INSTANCE_CATEGORY_NAMES), 3))
transform = transforms.Compose([
    transforms.ToTensor()
])
def predict(img, model, device, detection_threshold):
    img_tensor = transform(img).to(device)
    img_tensor = img_tensor.unsqueeze(0)  # add batch dimension
    with torch.no_grad():
        outputs = model(img_tensor)
    pred_scores = outputs[0]['scores'].detach().cpu().numpy()
    pred_labels = outputs[0]['labels'].detach().cpu().numpy()
    pred_boxes = outputs[0]['boxes'].detach().cpu().numpy()
    keep = pred_scores >= detection_threshold
    pred_boxes = pred_boxes[keep]
    pred_labels = pred_labels[keep]
    pred_scores = pred_scores[keep]
    pred_classes = []
    for label in pred_labels:
        if label < len(COCO_INSTANCE_CATEGORY_NAMES):
            pred_classes.append(COCO_INSTANCE_CATEGORY_NAMES[label])
        else:
            pred_classes.append(f"Label_{label}")
    return pred_boxes, pred_classes, pred_scores
def draw_boxes(img, boxes, classes, scores):
    img = img.copy()
    for i, box in enumerate(boxes):
        # cv2 expects a plain tuple of ints, not a NumPy row
        color = tuple(int(c) for c in COLORS[random.randint(0, len(COLORS) - 1)])
        x_min, y_min, x_max, y_max = box
        cv2.rectangle(img, (int(x_min), int(y_min)), (int(x_max), int(y_max)), color, 2)
        label = f"{classes[i]}: {scores[i]:.2f}"
        cv2.putText(img, label, (int(x_min), int(y_min) - 10), cv2.FONT_HERSHEY_SIMPLEX,
                    0.5, color, 2)
    return img
def get_model(name='default'):
    if name == 'v2':
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn_v2(pretrained=True)
    else:
        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()
    return model
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print(f"Using device: {device}")
model = get_model('default')
model = model.to(device)
test_dir = '/content/drive/MyDrive/Mini Project/images/'
test_images = [os.path.join(test_dir, img) for img in os.listdir(test_dir)
               if img.endswith(('.jpg', '.jpeg', '.png'))]
for img_path in test_images:
    img_bgr = cv2.imread(img_path)
    img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)
    boxes, classes, scores = predict(img_rgb, model, device, detection_threshold=0.5)
    result_img = draw_boxes(img_rgb, boxes, classes, scores)
    plt.figure(figsize=(12, 8))
    plt.imshow(result_img)
    plt.title(f"Detected {len(classes)} objects")
    plt.axis('off')
    plt.show()
    print(f"Classes detected: {classes}")
    print(f"Scores: {scores}\n")
OUTPUT
Evaluation by faculty
Criteria Marks
Preparation /20
Program /25
Output and Result /20
Viva /10
Total /75
Faculty Signature
with Date
RESULT
Thus, object detection using a CNN is implemented successfully.
Overall Record Completion
Status
Completed
Date of completion
Faculty Signature