Deep Learning AD3511

Mecheri, Salem Dt, Tamilnadu – 636 453.

(Approved by AICTE, New Delhi & Affiliated to Anna University)


An ISO 9001:2015 certified Institution and accredited by NAAC with A+ grade.

BONAFIDE CERTIFICATE
Name : …………………………………………………………

Reg.No. : …………………………………………………………

Degree : …………………………………………………………

Branch : …………………………………………………………

…………………………………………………………

Semester : ……………Year: ……………

Certified that this is the bonafide record of the work done by the above student in
............................................................................................................................ Laboratory
during the academic year …………………………………

                   Max. Marks    Marks Secured    In Words

Record                 20
Attendance             05
Total                  25

HEAD OF THE DEPARTMENT LAB-IN-CHARGE

Submitted for University Practical Examination held on………………………………

INTERNAL EXAMINER EXTERNAL EXAMINER


LAB MANNERS

 Students must be present in proper dress code and wear the ID card.

 Students should enter the log-in and log-out time in the log register without fail.
 Students are not allowed to download pictures, music, videos or files without
the permission of the respective lab in-charge.
 Students should wear their own lab coats and bring observation notebooks to the laboratory classes
regularly.
 Record of experiments done in a particular class should be submitted in

the next lab class.


 Students who do not submit the record notebook in time will not be allowed to do the next experiment
and will not be given attendance for that laboratory class.
 Students will not be allowed to leave the laboratory until they complete the experiment.

 Students are advised to switch off the Monitors and CPU when they leave the lab.

 Students are advised to arrange the chairs properly when they leave the lab.

College:
Vision:
To improve the quality of human life through multi-disciplinary programs in Engineering,
Architecture and Management that are internationally recognized and facilitate research work
that incorporates social, economic and environmental development.
Mission:
 To create a vibrant atmosphere that creates competent engineers, innovators, scientists,
entrepreneurs, academicians and thinkers of tomorrow.
 To establish centers of excellence that provide sustainable solutions to industry and society.
 To enhance capability through various value-added programs so as to meet the
challenges of dynamically changing global needs.

Department:
Vision:
The vision of the Artificial Intelligence and Data Science department is to make the student
community pioneers in Information Technology, in the analysis and learning of new and advanced
technologies, and in research, and to produce creative solutions to society's needs.

Mission:
 To provide excellence in advanced education and new innovation in software services.
 To provide quality education and to make the students employable.
 Continuous upgradation of new technologies to reach excellence and global
improvement in Information Technology.

PROGRAM EDUCATIONAL OBJECTIVES (PEOs)

1. Utilize their proficiency in the fundamental knowledge of basic science, Artificial Intelligence,
Data Science and statistics to build systems that require the analysis of large volumes of data.
2. Advance their technical skills to pursue pioneering research in the field of science and create
disruptive and sustainable solutions for the welfare of the ecosystem.
3. Think logically, pursue lifelong learning and collaborate ethically with a multidisciplinary team.
4. Design and model AI-based solutions to critical problems in the real world.
5. Exhibit innovative thoughts and creative ideas for effective contribution towards building.

Program Outcomes (POs)

PO1: To apply knowledge of mathematics, science, engineering fundamentals and computer science
theory to solve complex problems in Computer Science and Engineering.

PO2: To analyze problems, and to identify and define solutions using basic principles of
mathematics, science, technology and computer engineering.

PO3: To design, implement, and evaluate computer-based systems, processes, components, or software
to meet realistic constraints for public health and safety, and cultural, societal and
environmental considerations.

PO4: To design and conduct experiments, perform analysis and interpretation, and provide valid
conclusions with the use of research-based knowledge and research methodologies related to
Computer Science and Engineering.

PO5: To propose innovative, original ideas and solutions, culminating in modern engineering
products with longevity for a large section of society.

PO6: To apply the understanding of legal, health, security, cultural and social issues, and thereby
one's responsibility in their application in professional engineering practices.

PO7: To understand the impact of professional engineering solutions on social and environmental
issues, and the need for sustainable development.

PO8: To demonstrate integrity, ethical behavior and commitment to the code of conduct of
professional practices and standards, adapting to the technological developments of a
revolutionary world.

PO9: To function effectively as an individual, and as a member or leader in diverse teams and in
multifaceted environments.

PO10: To communicate effectively with end users through effective presentations, and to write and
comprehend technical reports and publications representing efficient engineering solutions.

PO11: To understand engineering and management principles and their application in managing
projects to suit the current needs of multidisciplinary industries.

PO12: To learn and invent new technologies, and use them effectively towards continuous
professional development throughout life.

Program Specific Outcomes (PSOs)

1. Evolve efficient AI-based, domain-specific processes for effective decision making in several
domains such as business and governance.
2. Arrive at actionable foresight, insight and hindsight from data for solving business and
engineering problems.
3. Create, select and apply the theoretical knowledge of AI and data analysis, along with practical
industrial tools and techniques, to manage and solve wicked societal problems.
4. Be capable of developing data analysis, knowledge representation and knowledge engineering,
and hence capable of coordinating complex projects.
5. Be able to carry out fundamental research to cater to the critical needs of society through
cutting-edge AI technologies.

AD3511 DEEP LEARNING LABORATORY                                        L T P C
                                                                       0 0 4 2

COURSE OBJECTIVES:

● To understand the tools and techniques to implement deep neural networks


● To apply different deep learning architectures for solving problems
● To implement generative models for suitable applications
● To learn to build and validate different models

LIST OF EXPERIMENTS:

1. Solving XOR problem using DNN


2. Character recognition using CNN
3. Face recognition using CNN
4. Language modeling using RNN
5. Sentiment analysis using LSTM
6. Parts of speech tagging using Sequence to Sequence architecture
7. Machine Translation using Encoder-Decoder model
8. Image augmentation using GANs
9. Mini-project on real world applications

TOTAL: 60 PERIODS
COURSE OUTCOMES:
After the completion of this course, students will be able to:
CO1: Apply deep neural network for simple problems (K3)
CO2: Apply Convolution Neural Network for image processing (K3)
CO3: Apply Recurrent Neural Network and its variants for text analysis (K3)
CO4: Apply generative models for data augmentation (K3)
CO5: Develop real-world solutions using suitable deep neural networks (K4)

CO’s-PO’s & PSO’s MAPPING

PO’s PSO’s
CO’s 1 2 3 4 5 6 7 8 9 10 11 12 1 2 3
1 3 2 1 1 1 - - - 3 2 3 2 3 3 2
2 1 3 2 2 2 - - - 3 2 2 2 1 3 1
3 3 2 1 2 1 - - - 2 3 1 1 2 3 3
4 3 3 1 2 1 - - - 1 3 2 2 3 2 2
5 3 3 3 3 2 - - - 1 2 3 1 3 3 2
AVG 2.6 2.6 1.6 2 1.4 - - - 2 2.4 2.2 1.6 2.4 2.8 2

1 - low, 2 - medium, 3 - high, ‘-' - no correlation



INDEX:

Sl.No:  Date:  Name of the Exercise:                                        Pg.No:  Marks:  Sign:

1              Solving XOR problem using DNN.
2              Character recognition using CNN.
3              Face recognition using CNN.
4              Language modeling using RNN.
5              Sentiment analysis using LSTM.
6              Parts of speech tagging using Sequence to Sequence architecture.
7              Machine Translation using Encoder-Decoder model.
8              Image augmentation using GANs.
9              Mini-project on real world applications.

Ex.No:01
Date: Solving XOR Problem Using DNN.

Aim:
To write a Python program for solving the XOR problem using a DNN.

XOR logical function truth table for 2-bit binary variables, i.e, the input vector and the corresponding
output is,

X1  X2  Y
0   0   0
0   1   1
1   0   1
1   1   0

Procedure:
1. Import the required Python libraries
2. Define the activation function: sigmoid
3. Initialize the neural network parameters (weights, biases)
4. Define the model hyperparameters (number of iterations, learning rate)
5. Forward propagation
6. Backward propagation
7. Update the weight and bias parameters
8. Train the learning model
9. Plot loss value vs. epoch
10. Test the model performance

Program:

#import Python Libraries


import numpy as np
from matplotlib import pyplot as plt

# Sigmoid Function
def sigmoid(z):
return 1 / (1 + np.exp(-z))
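# Note: the derivative sigmoid'(z) = sigmoid(z) * (1 - sigmoid(z)); backwardPropagation
# below uses this identity through the term A1 * (1 - A1).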

# Initialization of the neural network parameters


# Weights are initialized from a standard normal distribution (np.random.randn)
# Bias values are initialized to 0
def initializeParameters(inputFeatures, neuronsInHiddenLayers, outputFeatures):
W1 = np.random.randn(neuronsInHiddenLayers, inputFeatures)
W2 = np.random.randn(outputFeatures, neuronsInHiddenLayers)
b1 = np.zeros((neuronsInHiddenLayers, 1))
b2 = np.zeros((outputFeatures, 1))

parameters = {"W1" : W1, "b1": b1,


"W2" : W2, "b2": b2}
return parameters

# Forward Propagation
def forwardPropagation(X, Y, parameters):
m = X.shape[1]
W1 = parameters["W1"]
W2 = parameters["W2"]
b1 = parameters["b1"]
b2 = parameters["b2"]

Z1 = np.dot(W1, X) + b1
A1 = sigmoid(Z1)
Z2 = np.dot(W2, A1) + b2
A2 = sigmoid(Z2)

cache = (Z1, A1, W1, b1, Z2, A2, W2, b2)


logprobs = np.multiply(np.log(A2), Y) + np.multiply(np.log(1 - A2), (1 - Y))
cost = -np.sum(logprobs) / m
return cost, cache, A2

# Backward Propagation
def backwardPropagation(X, Y, cache):
m = X.shape[1]
(Z1, A1, W1, b1, Z2, A2, W2, b2) = cache

dZ2 = A2 - Y
dW2 = np.dot(dZ2, A1.T) / m
db2 = np.sum(dZ2, axis = 1, keepdims = True) / m

dA1 = np.dot(W2.T, dZ2)


dZ1 = np.multiply(dA1, A1 * (1- A1))
dW1 = np.dot(dZ1, X.T) / m
db1 = np.sum(dZ1, axis = 1, keepdims = True) / m

gradients = {"dZ2": dZ2, "dW2": dW2, "db2": db2,


"dZ1": dZ1, "dW1": dW1, "db1": db1}
return gradients

# Updating the weights based on the negative gradients


def updateParameters(parameters, gradients, learningRate):
parameters["W1"] = parameters["W1"] - learningRate * gradients["dW1"]
parameters["W2"] = parameters["W2"] - learningRate * gradients["dW2"]
parameters["b1"] = parameters["b1"] - learningRate * gradients["db1"]
parameters["b2"] = parameters["b2"] - learningRate * gradients["db2"]
return parameters

# Model to learn the XOR truth table


X = np.array([[0, 0, 1, 1], [0, 1, 0, 1]]) # XOR input
Y = np.array([[0, 1, 1, 0]]) # XOR output

# Define model parameters


neuronsInHiddenLayers = 2 # number of hidden layer neurons (2)
inputFeatures = X.shape[0] # number of input features (2)
outputFeatures = Y.shape[0] # number of output features (1)
parameters = initializeParameters(inputFeatures, neuronsInHiddenLayers, outputFeatures)
epoch = 100000
learningRate = 0.01
losses = np.zeros((epoch, 1))

for i in range(epoch):
    losses[i, 0], cache, A2 = forwardPropagation(X, Y, parameters)
    gradients = backwardPropagation(X, Y, cache)
    parameters = updateParameters(parameters, gradients, learningRate)

# Evaluating the performance


plt.figure()
plt.plot(losses)
plt.xlabel("EPOCHS")
plt.ylabel("Loss value")

plt.show()

# Testing
X = np.array([[1, 1, 0, 0], [0, 1, 0, 1]]) # XOR test inputs; expected outputs: 1, 0, 0, 1
cost, _, A2 = forwardPropagation(X, Y, parameters) # Y is unchanged, so only the predictions (not the cost) matter here
prediction = (A2 > 0.5) * 1.0
# print(A2)
print(prediction)

Output:

Result:
Thus the program for solving the XOR problem using DNN was implemented and executed successfully.

Ex.No:02
Date: Character recognition using CNN.

Aim:
To write a Python program to implement character recognition using CNN.

Procedure:
1. Data collection and preprocessing
2. Model architecture
3. Compile the model
4. Model training
5. Evaluate the model
6. Fine-tuning and optimization
7. Character recognition
8. Deployment

Program:
import numpy as np
import tensorflow as tf
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout
from tensorflow.keras.utils import to_categorical
import matplotlib.pyplot as plt
# Load and preprocess the MNIST dataset
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train = X_train.reshape(-1, 28, 28, 1).astype('float32') / 255.0
X_test = X_test.reshape(-1, 28, 28, 1).astype('float32') / 255.0
y_train = to_categorical(y_train, num_classes=10)
y_test = to_categorical(y_test, num_classes=10)
# Build the CNN model

model = Sequential([
Conv2D(32, kernel_size=(3, 3), activation='relu', input_shape=(28, 28, 1)),
MaxPooling2D(pool_size=(2, 2)),
Conv2D(64, kernel_size=(3, 3), activation='relu'),
MaxPooling2D(pool_size=(2, 2)),
Flatten(),
Dense(128, activation='relu'),
Dropout(0.5),
Dense(10, activation='softmax')
])
# Compile the model
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model


history = model.fit(X_train, y_train, batch_size=128, epochs=10, validation_data=(X_test, y_test))

# Evaluate the model


test_loss, test_accuracy = model.evaluate(X_test, y_test, verbose=O)
print("Test Accuracy:", test_accuracy)

# Plot training history


plt.figure(figsize=(10, 4))
plt.subplot(1, 2, 1)
plt.plot(history.history['accuracy'], label='Training Accuracy')
plt.plot(history.history['val_accuracy'], label='Validation Accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.legend()

plt.subplot(1, 2, 2)
plt.plot(history.history['loss'], label='Training Loss')
plt.plot(history.history['val_loss'], label='Validation Loss')
plt.xlabel('Epoch')
plt.ylabel('Loss')
plt.legend()

plt.tight_layout()
plt.show()

# Example prediction
example_index = 0
example_image = X_test[example_index]
example_label = np.argmax(y_test[example_index])
predicted_label = np.argmax(model.predict(np.expand_dims(example_image, axis=0)))

plt.imshow(example_image.squeeze(), cmap='gray')
plt.title(f"True Label: {example_label}, Predicted Label: {predicted_label}")
plt.axis('off')
plt.show()
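
The procedure above lists deployment as a final step, which the listing does not cover. The following is a minimal sketch (not part of the original program) of saving the trained model and classifying an external image; the file name digit.png and the use of Pillow are assumptions for illustration.

# Deployment sketch (assumptions: the trained `model` from above is in memory,
# Pillow is installed, and 'digit.png' is a hypothetical 28x28 grayscale digit image).
from PIL import Image
from tensorflow.keras.models import load_model

model.save('char_cnn.h5')              # persist the trained network
restored = load_model('char_cnn.h5')   # reload it later / elsewhere

img = Image.open('digit.png').convert('L').resize((28, 28))
x = np.array(img).astype('float32') / 255.0   # normalize like the training data
x = x.reshape(1, 28, 28, 1)

print("Predicted digit:", np.argmax(restored.predict(x)))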

Output:

Result:
Thus a python program to implement character recognition using CNN was implemented and executed
successfully.

Ex.No:03
Date: Face recognition using CNN.

Aim:

To write a Python program to implement face recognition using CNN.


Definition:
Convolutional Neural Networks are a special kind of neural network that helps a machine learn
and classify images. A good example is face recognition.

Procedure:

● Import TensorFlow
● Download and prepare the dataset
● Verify the data
● Create the convolutional base
● Add Dense layers on top
● Compile and train the model
● Evaluate the model
● Print the test accuracy

Program:
import tensorflow as tf
from tensorflow.keras import datasets, layers, models
import matplotlib.pyplot as plt

(train_images, train_labels), (test_images, test_labels) = datasets.cifar10.load_data()

# Normalize pixel values to be between 0 and 1
train_images, test_images = train_images / 255.0, test_images / 255.0


class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer',
               'dog', 'frog', 'horse', 'ship', 'truck']

plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.xticks([])
    plt.yticks([])
    plt.grid(False)
    plt.imshow(train_images[i])
    # The CIFAR labels happen to be arrays, which is why you need the extra index
    plt.xlabel(class_names[train_labels[i][0]])
plt.show()

model = models.Sequential()
model.add(layers.Conv2D(32, (3, 3), activation='relu', input_shape=(32, 32, 3)))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))
model.add(layers.MaxPooling2D((2, 2)))
model.add(layers.Conv2D(64, (3, 3), activation='relu'))

model.summary()
Model: "sequential"

Layer (type)                    Output Shape              Param #
=================================================================
conv2d (Conv2D)                 (None, 30, 30, 32)        896
max_pooling2d (MaxPooling2D)    (None, 15, 15, 32)        0
conv2d_1 (Conv2D)               (None, 13, 13, 64)        18496
max_pooling2d_1 (MaxPooling2D)  (None, 6, 6, 64)          0
conv2d_2 (Conv2D)               (None, 4, 4, 64)          36928
=================================================================
Total params: 56320 (220.00 KB)
Trainable params: 56320 (220.00 KB)
Non-trainable params: 0 (0.00 Byte)

model.add(layers.Flatten())
model.add(layers.Dense(64, activation='relu'))
model.add(layers.Dense(10))

model.summary()

conv2d_2 (Conv2D)               (None, 4, 4, 64)          36928
flatten (Flatten)               (None, 1024)              0
dense (Dense)                   (None, 64)                65600
dense_1 (Dense)                 (None, 10)                650
=================================================================
Total params: 122570 (478.79 KB)
Trainable params: 122570 (478.79 KB)
Non-trainable params: 0 (0.00 Byte)
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

history = model.fit(train_images, train_labels, epochs=10,
                    validation_data=(test_images, test_labels))
Epoch 1/10
1563/1563 [==============================] - 10s 4ms/step - loss: 1.5733 - accuracy: 0.4257 - val_loss: 1.2938 - val_accuracy: 0.5405
Epoch 2/10
1563/1563 [==============================] - 6s 4ms/step - loss: 1.1916 - accuracy: 0.5761 - val_loss: 1.1120 - val_accuracy: 0.6029
Epoch 3/10
1563/1563 [==============================] - 6s 4ms/step - loss: 1.0424 - accuracy: 0.6315 - val_loss: 1.0490 - val_accuracy: 0.6332
Epoch 4/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.9586 - accuracy: 0.6631 - val_loss: 0.9473 - val_accuracy: 0.6711
Epoch 5/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.8903 - accuracy: 0.6875 - val_loss: 0.9499 - val_accuracy: 0.6693
Epoch 6/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.8352 - accuracy: 0.7077 - val_loss: 0.9962 - val_accuracy: 0.6548
Epoch 7/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7903 - accuracy: 0.7219 - val_loss: 0.9115 - val_accuracy: 0.6910
Epoch 8/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7500 - accuracy: 0.7349 - val_loss: 0.8694 - val_accuracy: 0.6984
Epoch 9/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.7134 - accuracy: 0.7494 - val_loss: 0.8856 - val_accuracy: 0.6930
Epoch 10/10
1563/1563 [==============================] - 6s 4ms/step - loss: 0.6781 - accuracy: 0.7627 - val_loss: 0.8593 - val_accuracy: 0.7105

plt.plot(history.history['accuracy'], label='accuracy')
plt.plot(history.history['val_accuracy'], label='val_accuracy')
plt.xlabel('Epoch')
plt.ylabel('Accuracy')
plt.ylim([0.5, 1])
plt.legend(loc='lower right')

test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=2)
print(test_acc)
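
As a small extension beyond the original listing, the trained network can also label a single test image; this sketch assumes the model, test_images, test_labels and class_names defined above.

# Classify one test image (the model outputs raw logits, so argmax picks the class)
import numpy as np

sample = test_images[0:1]                       # keep the batch dimension
logits = model.predict(sample)
print("Predicted:", class_names[int(np.argmax(logits))],
      "| Actual:", class_names[int(test_labels[0][0])])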

Output:

Result:
Thus a python program to implement face recognition using CNN was implemented and executed
successfully.

Ex.No:04
Date: Language modeling using RNN.

Aim:
To write a Python program to implement language modelling using RNN.

Procedure:
1. Convert abstracts from a list of strings into a list of lists of integers (sequences)

2. Create features and labels from the sequences

3. Build the model with Embedding and Dense layers

4. Load in pre-trained embeddings

5. Train the model to predict the next word in a sequence

6. Make predictions by passing in a starting sequence

Program:
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Dropout
from keras.layers import LSTM
from keras.utils import np_utils

#Read the data, turn it into lower case


data = open("Othello.txt").read().lower()
#This get the set of characters used in the data and sorts them
chars = sorted(list(set(data)))
#Total number of characters used in the data
totalChars = len(data)

#Number of unique chars

numberOfUniqueChars = len(chars)

#This allows for characters to be represented by numbers


CharsForids = {char:Id for Id, char in enumerate(chars)}

#This is the opposite to the above


idsForChars = {Id:char for Id, char in enumerate(chars)}

# How many timesteps, i.e. how many characters we want to process in one go
numberOfCharsToLearn = 100

# Since our time step sequence represents a process for every 100 chars we omit
# the first 100 chars so the loop runs 100 fewer times or there will be an index out of

#range
counter = totalChars - numberOfCharsToLearn

# Input data
charX = []
# Output data
y = []
# This loops through all the characters in the data, skipping the first 100
for i in range(0, counter, 1):
    # This slice goes from i to i+100, so it gets 100 values starting at i and stops
    # just before the 100th value
    theInputChars = data[i:i+numberOfCharsToLearn]
    # With no slice you get the single 100th value itself: essentially, theOutputChars
    # is the next char in line after those 100 chars in X
    theOutputChars = data[i + numberOfCharsToLearn]
    # Append the ids of every 100 chars as a list into charX
    charX.append([CharsForids[char] for char in theInputChars])
    # For every 100 input values there is one y value, which is the output
    y.append(CharsForids[theOutputChars])

# len(charX) represents how many of those time steps we have
# Our features are set to 1 because in the output we are only predicting 1 char
# Finally, numberOfCharsToLearn is how many characters we process at a time
X = np.reshape(charX, (len(charX), numberOfCharsToLearn, 1))

#This is done for normalization


X = X/float(numberOfUniqueChars)

#This sets it up for us so we can have a categorical(#feature) output format

y = np_utils.to_categorical(y)
print(y)

model = Sequential()
#Since we know the shape of our Data we can input the timestep and feature data
#The number of timestep sequence are dealt with in the fit function
model.add(LSTM(256, input_shape=(X.shape[1], X.shape[2])))
model.add(Dropout(0.2))
#number of features on the output
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(X, y, epochs=5, batch_size=128)

model.save_weights("Othello.hdf5")
#model.load_weights("Othello.hdf5")

randomVal = np.random.randint(0, len(charX) - 1)
randomStart = charX[randomVal]
for i in range(500):
    x = np.reshape(randomStart, (1, len(randomStart), 1))
    x = x / float(numberOfUniqueChars)
    pred = model.predict(x)
    index = np.argmax(pred)
    randomStart.append(index)
    randomStart = randomStart[1:len(randomStart)]
print("".join([idsForChars[value] for value in randomStart]))

Output:

Result:
Thus a python program to implement language modelling using RNN was implemented and executed
successfully.

Ex.No:05
Date: Sentiment analysis using LSTM.

Aim:
To write a Python program to implement sentiment analysis using LSTM.

Procedure:
1. Loading data

2. Preprocessing the data and Tokenizing

3. Building and fitting the model on data

4. Evaluate the model

5. Predicting

Program:
import tensorflow as tf
from tensorflow.keras.datasets import imdb
from tensorflow.keras.layers import Embedding, Dense, LSTM
from tensorflow.keras.losses import BinaryCrossentropy
from tensorflow.keras.models import Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.sequence import pad_sequences

additional_metrics = ['accuracy']
batch_size = 128
embedding_output_dims = 15
loss_function = BinaryCrossentropy()
max_sequence_length = 300
num_distinct_words = 5000
number_of_epochs = 5

optimizer = Adam()
validation_split = 0.20
verbosity_mode = 1
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=num_distinct_words)
print(x_train.shape)
print(x_test.shape)
padded_inputs = pad_sequences(x_train, maxlen=max_sequence_length, value=0.0)
padded_inputs_test = pad_sequences(x_test, maxlen=max_sequence_length, value=0.0)
len(x_train[890])
len(padded_inputs[890])
model = Sequential()
model.add(Embedding(num_distinct_words, embedding_output_dims, input_length=max_sequence_length))
model.add(LSTM(10))
model.add(Dense(1, activation='sigmoid'))
model.compile(optimizer=optimizer, loss=loss_function, metrics=additional_metrics)
model.summary()
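
The listing stops at model.summary(); the remaining procedure steps (fitting, evaluation and prediction) are sketched below using the variables already defined above.

# Fit the model on the padded training data (procedure step 3)
history = model.fit(padded_inputs, y_train,
                    batch_size=batch_size,
                    epochs=number_of_epochs,
                    verbose=verbosity_mode,
                    validation_split=validation_split)

# Evaluate on the padded test data (procedure step 4)
test_results = model.evaluate(padded_inputs_test, y_test, verbose=False)
print(f'Test loss: {test_results[0]} - Test accuracy: {100 * test_results[1]}%')

# Predict the sentiment of one review: values near 1 are positive, near 0 negative (step 5)
print("P(positive):", float(model.predict(padded_inputs_test[0:1])[0][0]))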

Output:

Result:
Thus a python program to implement sentiment analysis using LSTM was implemented and executed
successfully.

Ex.No:06
Date: Part of speech tagging using sequence to
sequence architecture.

Aim:
To write a Python program to implement parts of speech tagging using a Sequence to Sequence architecture.

Procedure:
1.Data Preparation:
● Prepare your POS-tagged corpus where each sentence is paired with its corresponding POS tags.
2.Vocabulary Building:
● Create vocabularies for words and POS tags present in your corpus.
● Assign unique indices to each word and POS tag.
3.Encoding:
● Tokenize sentences into word indices and POS tag indices.
● Pad or truncate sequences to a fixed length to maintain uniform input shapes.
● Create an encoder model using Embedding layers to embed word indices and POS tag indices.
● Process the input through the encoder, which will generate a context vector.
4.Decoding:
● Design a decoder model to take the context vector and generate POS tag sequences.
● You might need to use teacher forcing (feeding ground-truth POS tags as inputs) during training.
● During inference, use the decoder to generate POS tag sequences by feeding the context vector and
previously generated tags as inputs.
5.Model Compilation and Training:
● Compile the Seq2Seq model with appropriate loss functions (e.g., categorical cross-entropy) and
optimizers (e.g., Adam).
● Train the model using your POS-tagged data. The decoder will predict POS tags based on the input
words and the context vector.
6.Evaluation:
● Evaluate the model on a separate test dataset to measure its POS tagging accuracy.
7.Inference:
● For inference, input a sentence to the encoder and then generate the corresponding POS tags using
the decoder.
8.Fine-Tuning and Optimization:
● Depending on the model's performance, experiment with hyperparameters, model architecture, and
training strategies.
9.Deployment (Optional):
● If the model performs well, you can deploy it for POS tagging tasks. However, traditional models
like CRF are usually more suitable for POS tagging due to their fixed input/output sequence
lengths.

Program:

import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.layers import Input, Embedding, LSTM, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.utils import to_categorical

# Sample data for demonstration


sentences = [
"The cat is sleeping",
"A dog is barking",
"She is reading a book"
]

pos_tags = [
"DT NN VBZ VBG",
"DT NN VBZ VBG",
"PRP VBZ VBG DT NN"
]
# Tokenize sentences and POS tags
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sentences + pos_tags)

input_sequences = tokenizer.texts_to_sequences(sentences)
output_sequences = tokenizer.texts_to_sequences(pos_tags)

vocab_size = len(tokenizer.word_index) + 1

# Pad sequences
max_sequence_length = max(max(len(seq) for seq in input_sequences), max(len(seq) for seq in
output_sequences))
input_sequences_padded = pad_sequences(input_sequences, maxlen=max_sequence_length, padding='post')
output_sequences_padded = pad_sequences(output_sequences, maxlen=max_sequence_length, padding='post')

# Prepare decoder input and output


decoder_input_sequences = output_sequences_padded[:, :-1]
decoder_output_sequences = output_sequences_padded[:, 1:]

decoder_output_sequences = to_categorical(decoder_output_sequences, num_classes=vocab_size)

# Define the model architecture


latent_dim = 256

# Encoder
encoder_input = Input(shape=(max_sequence_length,))
encoder_embedding = Embedding(vocab_size, latent_dim)(encoder_input)
encoder_lstm = LSTM(latent_dim, return_state=True)
encoder_outputs, state_h, state_c = encoder_lstm(encoder_embedding)
encoder_states = [state_h, state_c]

# Decoder
decoder_input = Input(shape=(max_sequence_length - 1,))
decoder_embedding = Embedding(vocab_size, latent_dim)(decoder_input)
decoder_lstm = LSTM(latent_dim, return_sequences=True, return_state=True)
decoder_outputs, _, _ = decoder_lstm(decoder_embedding, initial_state=encoder_states)
decoder_dense = Dense(vocab_size, activation='softmax')
decoder_outputs = decoder_dense(decoder_outputs)

# Build the model


model = Model([encoder_input, decoder_input], decoder_outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])

# Train the model


model.fit([input_sequences_padded, decoder_input_sequences], decoder_output_sequences, epochs=50,
verbose=1)

# Generate predictions
encoder_model = Model(encoder_input, encoder_states)

decoder_state_input_h = Input(shape=(latent_dim,))
decoder_state_input_c = Input(shape=(latent_dim,))
decoder_states_inputs = [decoder_state_input_h, decoder_state_input_c]

decoder_outputs, state_h, state_c = decoder_lstm(decoder_embedding, initial_state=decoder_states_inputs)


decoder_states = [state_h, state_c]
decoder_outputs = decoder_dense(decoder_outputs)

decoder_model = Model([decoder_input] + decoder_states_inputs, [decoder_outputs] + decoder_states)

# Function to predict POS tags


def predict_pos(input_text):
    input_seq = tokenizer.texts_to_sequences([input_text])[0]
    input_seq_padded = pad_sequences([input_seq], maxlen=max_sequence_length, padding='post')

    states_value = encoder_model.predict(input_seq_padded)

    target_seq = np.zeros((1, 1))
    target_seq[0, 0] = tokenizer.word_index['<start>']  # Start token

    decoded_sentence = ''
    stop_condition = False
    while not stop_condition:
        output_tokens, h, c = decoder_model.predict([target_seq] + states_value)
        sampled_token_index = np.argmax(output_tokens[0, -1, :])
        sampled_word = tokenizer.index_word[sampled_token_index]
        if sampled_word != '<end>':
            decoded_sentence += sampled_word + ' '

        if sampled_word == '<end>' or len(decoded_sentence.split()) > max_sequence_length:
            stop_condition = True

        target_seq = np.zeros((1, 1))
        target_seq[0, 0] = sampled_token_index

        states_value = [h, c]

    return decoded_sentence.strip()

# Test the model


test_sentence = "A cat is sleeping"
predicted_pos = predict_pos(test_sentence)
print(f"Input Sentence: {test_sentence}")
print(f"Predicted POS Tags: {predicted_pos}")

Output:

Result:
Thus a python program to implement Parts of speech tagging using Sequence to Sequence architecture was
implemented and executed successfully.

Ex.No:07
Date: Machine Translation using Encoder-Decoder model.

Aim:
To write a Python program for denoising images using autoencoders.

Procedure:

1. Implement a deep convolutional autoencoder for image denoising, mapping noisy digits images
from the MNIST dataset to clean digits images.
2. This implementation is based on an original blog post titled Building Autoencoders in Keras
3. Setup the necessary library files.
4. Build the autoencoder
5. We can train our autoencoder using train_data as both our input data and target
6. Predict on our test dataset and display the original image together with the prediction from
our autoencoder.
7. Using the noisy data as our input and the clean data as our target, we want our autoencoder to
learn how to denoise the images.
8. Now predict on the noisy data and display the results of our autoencoder.
9. The autoencoder finally removes the noise from the input images.

PROGRAM:
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

from tensorflow.keras import layers
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Model


def preprocess(array):
    array = array.astype("float32") / 255.0
    array = np.reshape(array, (len(array), 28, 28, 1))
    return array

def noise(array):
    noise_factor = 0.4
    noisy_array = array + noise_factor * np.random.normal(loc=0.0, scale=1.0, size=array.shape)
    return np.clip(noisy_array, 0.0, 1.0)

def display(array1, array2):
    n = 10
    indices = np.random.randint(len(array1), size=n)
    images1 = array1[indices, :]
    images2 = array2[indices, :]

    plt.figure(figsize=(20, 4))
    for i, (image1, image2) in enumerate(zip(images1, images2)):
        ax = plt.subplot(2, n, i + 1)
        plt.imshow(image1.reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)

        ax = plt.subplot(2, n, i + 1 + n)
        plt.imshow(image2.reshape(28, 28))
        plt.gray()
        ax.get_xaxis().set_visible(False)
        ax.get_yaxis().set_visible(False)
    plt.show()

(train_data, _), (test_data, _) = mnist.load_data()

# Normalize and reshape the data
train_data = preprocess(train_data)
test_data = preprocess(test_data)

# Create a copy of the data with added noise
noisy_train_data = noise(train_data)
noisy_test_data = noise(test_data)

# Display the train data and a version of it with added noise
display(train_data, noisy_train_data)

input = layers.Input(shape=(28, 28, 1))

# Encoder
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(input)
x = layers.MaxPooling2D((2, 2), padding="same")(x)
x = layers.Conv2D(32, (3, 3), activation="relu", padding="same")(x)
x = layers.MaxPooling2D((2, 2), padding="same")(x)

# Decoder
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2DTranspose(32, (3, 3), strides=2, activation="relu", padding="same")(x)
x = layers.Conv2D(1, (3, 3), activation="sigmoid", padding="same")(x)

# Autoencoder
autoencoder = Model(input, x)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.summary()

Output:

Result:
Thus a python program to implement Machine Translation using the Encoder-Decoder model was
implemented and executed successfully.

Ex.No:08
Date: Image augmentation using GAN.

Aim:
To write a Python program for image augmentation using GAN.

Procedure:
1.Dataset Preparation:
● Prepare the original dataset of images that you want to augment. This could be any dataset relevant
to your task, such as images of objects, animals, or scenes.
2.Build a GAN:
● Design and build a GAN architecture. A GAN consists of a generator and a discriminator network.
● The generator network generates new images from random noise.
● The discriminator network tries to distinguish between real images from the original dataset and
fake images generated by the generator.
3.Train the GAN:
● Train the GAN on the original dataset. The generator learns to create images that are increasingly
similar to the real dataset.
● The discriminator's goal is to get better at distinguishing real from generated images.
● Train the generator and discriminator in alternating steps.
4.Generate Augmented Images:
● After training, use the trained generator to generate new images. These images will be similar to
the original dataset but might have slight variations.
5.Augment Your Dataset:
● Combine the generated images with your original dataset to create an augmented dataset.
● The augmented dataset now contains both the original images and the generated images.
6.Train a Model:
● Use the augmented dataset to train your machine learning model. This model can be a neural
network, such as a convolutional neural network (CNN), for various tasks like image classification,
object detection, etc.
7.Evaluate and Compare:
● Evaluate your model's performance using the augmented dataset and compare it with the
performance using the original dataset alone.
● Augmented data can improve the model's generalization ability, especially if the original dataset is
limited.
8.Fine-Tuning (Optional):
● Depending on the performance, you can fine-tune the GAN or the machine learning model to
achieve better results.
9.Inference:
● Use your trained model for inference on real-world data.

Program:
import numpy as np

import matplotlib.pyplot as plt

import tensorflow as tf

from tensorflow.keras.layers import Input, Dense, Reshape, Flatten

from tensorflow.keras.models import Sequential, Model

from tensorflow.keras.optimizers import Adam

from tensorflow.keras.datasets import mnist

# Define the generator

def build_generator(latent_dim, output_shape):

model = Sequential()

model.add(Dense(128, input_dim=latent_dim, activation='relu'))

model.add(Dense(256, activation='relu'))

model.add(Dense(np.prod(output_shape), activation='sigmoid'))

model.add(Reshape(output_shape))

return model

# Define the discriminator

def build_discriminator(input_shape):

model = Sequential()

model.add(Flatten(input_shape=input_shape))

model.add(Dense(256, activation='relu'))

model.add(Dense(128, activation='relu'))

model.add(Dense(1, activation='sigmoid'))

return model

# Build and compile the discriminator

input_shape = (28, 28, 1)

discriminator = build_discriminator(input_shape)

discriminator.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.0002, beta_1=0.5),


metrics=['accuracy'])

# Build and compile the generator

latent_dim = 100

generator = build_generator(latent_dim, input_shape)

generator.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.0002, beta_1=0.5))

# Build GAN by chaining generator and discriminator

discriminator.trainable = False

gan_input = Input(shape=(latent_dim,))

generated_image = generator(gan_input)

gan_output = discriminator(generated_image)

gan = Model(gan_input, gan_output)

gan.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.0002, beta_1=0.5))



# Load and preprocess MNIST dataset

(X_train, _), (_, _) = mnist.load_data()

X_train = X_train / 127.5 - 1.0

X_train = np.expand_dims(X_train, axis=-1)

# Training GAN

epochs = 10000

batch_size = 64

sample_interval = 1000

for epoch in range(epochs):

    # Train discriminator
    idx = np.random.randint(0, X_train.shape[0], batch_size)
    real_images = X_train[idx]
    fake_images = generator.predict(np.random.randn(batch_size, latent_dim))

    d_loss_real = discriminator.train_on_batch(real_images, np.ones((batch_size, 1)))
    d_loss_fake = discriminator.train_on_batch(fake_images, np.zeros((batch_size, 1)))
    d_loss = 0.5 * np.add(d_loss_real, d_loss_fake)

    # Train generator
    noise = np.random.randn(batch_size, latent_dim)
    g_loss = gan.train_on_batch(noise, np.ones((batch_size, 1)))



    # Print progress and show generated images
    if epoch % sample_interval == 0:
        print(f"Epoch {epoch}, D Loss: {d_loss[0]}, G Loss: {g_loss}")

        generated_images = generator.predict(noise)
        generated_images = 0.5 * generated_images + 0.5

        fig, axs = plt.subplots(5, 5)
        cnt = 0
        for i in range(5):
            for j in range(5):
                axs[i, j].imshow(generated_images[cnt, :, :, 0], cmap='gray')
                axs[i, j].axis('off')
                cnt += 1
        plt.show()
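
The procedure's step 5 (building the augmented dataset) is not part of the listing; the sketch below is one possible way to do it with the trained generator. The number of synthetic samples is an illustrative choice.

# Generate synthetic images and stack them onto the original training set
num_synthetic = 5000                               # illustrative choice
noise = np.random.randn(num_synthetic, latent_dim)
synthetic_images = generator.predict(noise)        # sigmoid output, values in [0, 1]
synthetic_images = synthetic_images * 2.0 - 1.0    # rescale to [-1, 1] to match X_train

augmented_X = np.concatenate([X_train, synthetic_images], axis=0)
print("Original:", X_train.shape, "Augmented:", augmented_X.shape)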

Output:

Result:
Thus a python program to implement Image augmentation using GANs was implemented and executed
successfully.

Ex.No:09
Date: Mini Project on Real World Application.
