Advanced Deep Learning Practical File
SESSION: 2023-24
Advances in Deep Learning Lab (AIML308P)
# Import the libraries this snippet depends on
import numpy as np
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Activation, Dropout
# load the MNIST dataset
(x_train, y_train), (x_test, y_test) = mnist.load_data()
# compute the number of labels
num_labels = len(np.unique(y_train))
# network hyperparameters (assumed typical values; the original uses these names without defining them)
input_size = 28 * 28
hidden_units = 256
dropout = 0.45
# Model architecture: a three-layer MLP with ReLU and dropout at each layer
model = Sequential()
model.add(Dense(hidden_units, input_dim=input_size))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(hidden_units))
model.add(Activation('relu'))
model.add(Dropout(dropout))
model.add(Dense(num_labels))
model.add(Activation('softmax'))
# model summary
model.summary()
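The snippet above stops at the model summary; a minimal sketch of the remaining steps, assuming the standard preprocessing (flattening the images and one-hot encoding the labels; the batch size and epoch count are illustrative choices, not values from the original), could be:

from keras.utils import to_categorical
# flatten the 28x28 images into 784-length vectors and scale pixels to [0, 1]
x_train = np.reshape(x_train, [-1, input_size]).astype('float32') / 255
x_test = np.reshape(x_test, [-1, input_size]).astype('float32') / 255
# one-hot encode the labels
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# compile with categorical cross-entropy and train with mini-batch gradient descent
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=20, batch_size=128)
loss, acc = model.evaluate(x_test, y_test, batch_size=128)
print('Test accuracy: %.1f%%' % (100.0 * acc))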
Output-
Conclusion -
Implementing the multilayer perceptron algorithm for MNIST handwritten digit classification has proven highly effective. Through training on the MNIST dataset, the MLP distinguishes handwritten digits with high accuracy. By leveraging deep learning and iterative optimization techniques such as gradient descent, the model achieves robust performance, making it a valuable tool for digit recognition and pattern classification tasks.
EXPERIMENT 2
Aim - Design a neural network for classifying movie reviews (Binary Classification) using IMDB dataset
Theory
Neural networks for classifying movie reviews operate on the principle of learning complex patterns in textual data to discern sentiment. Using algorithms inspired by the human brain's neural structure, these networks process large amounts of labeled movie review data from the IMDb dataset. Through iterative training, the neural network adjusts its internal parameters to minimize prediction errors, gradually improving its ability to accurately classify reviews as positive or negative. This approach harnesses the power of deep learning to achieve remarkable performance in sentiment analysis tasks, making it a widely adopted technique in natural language processing and text classification domains.
Code -
from tensorflow.keras.datasets import imdb
# Load the data, keeping only the 10,000 most frequently occurring words
(train_data, train_labels), (test_data, test_labels) = imdb.load_data(num_words=10000)
# step 1: get the word index mapping words to integer indices
word_index = imdb.get_word_index()
# step 2: reverse the word index to map integer indices back to their respective words
reverse_word_index = dict([(value, key) for (key, value) in word_index.items()])
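The snippet above only loads the data and builds the reverse index; a minimal sketch of the remaining steps, assuming the usual multi-hot encoding and a small fully connected network (the layer sizes, optimizer, and epoch count below are illustrative, not taken from the original), could be:

import numpy as np
from tensorflow.keras import models, layers

def vectorize_sequences(sequences, dimension=10000):
    # multi-hot encode each review: set index i to 1 for every word index i it contains
    results = np.zeros((len(sequences), dimension))
    for i, sequence in enumerate(sequences):
        results[i, sequence] = 1.0
    return results

x_train = vectorize_sequences(train_data)
x_test = vectorize_sequences(test_data)
y_train = np.asarray(train_labels).astype('float32')
y_test = np.asarray(test_labels).astype('float32')

# two hidden ReLU layers and a sigmoid output unit for binary classification
model = models.Sequential([
    layers.Dense(16, activation='relu', input_shape=(10000,)),
    layers.Dense(16, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
model.compile(optimizer='rmsprop', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(x_train, y_train, epochs=4, batch_size=512, validation_data=(x_test, y_test))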
Output-
Conclusion
In conclusion, employing neural networks for binary classification of movie reviews using the IMDb dataset has demonstrated remarkable efficacy. Leveraging deep learning techniques enables the model to effectively discern sentiment with high accuracy. Through extensive training on labeled data, these networks learn intricate patterns within textual data, distinguishing between positive and negative sentiments with impressive precision. As a result, neural networks offer a robust solution for sentiment analysis in movie reviews, providing valuable insights for various applications in the film industry and beyond.
EXPERIMENT 3
Aim - Design a neural network for classifying news wires (Multi-class Classification) using Reuters dataset.
Theory
Designing a neural network for classifying news wires using the Reuters dataset involves utilizing deep learning techniques to process and understand textual data. By employing neural networks, the model learns intricate patterns within news articles to categorize them into multiple classes. Through iterative training on the labeled Reuters dataset, the network adjusts its parameters to minimize classification errors, optimizing its ability to accurately assign categories to news articles.
Code-
import numpy as np
from keras.datasets import reuters
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Dense, Dropout, Activation
from keras.preprocessing.text import Tokenizer
import matplotlib.pyplot as plt
%matplotlib inline
# load a dataset
(XTrain, YTrain),(XTest, YTest) = reuters.load_data(num_words=None, test_split=0.3)
plt.figure(1)
plt.subplot(121)
plt.hist(YTrain, bins='auto')
plt.xlabel("Classes")
plt.ylabel("Number of occurrences")
plt.title("YTrain data")
plt.subplot(122)
plt.hist(YTest, bins='auto')
plt.xlabel("Classes")
plt.ylabel("Number of occurrences")
plt.title("YTest data")
plt.show()
# The reuters.get_word_index() function returns a dictionary whose keys are words and whose values are integer indices
WordIndex = reuters.get_word_index(path="reuters_word_index.json")
print(len(WordIndex))
IndexToWord = {}
for key, value in WordIndex.items():
    IndexToWord[value] = key
print(' '.join([IndexToWord[x] for x in XTrain[1]]))
print(YTrain[1])
MaxWords = 10000
# Tokenization of words.
Tok = Tokenizer(num_words=MaxWords)
XTrain = Tok.sequences_to_matrix(XTrain, mode='binary')
XTest = Tok.sequences_to_matrix(XTest, mode='binary')
# Preprocessing of labels
NumClasses = max(YTrain) + 1
YTrain = to_categorical(YTrain, NumClasses)
YTest = to_categorical(YTest, NumClasses)
print(XTrain[1])
print(len(XTrain[1]))
model = Sequential()
#model building
model.add(Dense(512, input_shape=(MaxWords,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(NumClasses))
# softmax (not relu) on the output layer so the class scores form a probability distribution
model.add(Activation('softmax'))
model.summary()
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
history = model.fit(XTrain, YTrain,
validation_data=(XTest, YTest),
epochs=15,
batch_size=64)
#Evaluate the model.
Scores = model.evaluate(XTest, YTest, verbose=1)
print('Test loss:', Scores[0])
print('Test accuracy:', Scores[1])
def plotmodelhistory(history):
    fig, axs = plt.subplots(1, 2, figsize=(15, 5))
    # summarize history for accuracy
    axs[0].plot(history.history['accuracy'])
    axs[0].plot(history.history['val_accuracy'])
    axs[0].set_title('Model Accuracy')
    axs[0].set_ylabel('Accuracy')
    axs[0].set_xlabel('Epoch')
    axs[0].legend(['train', 'validate'], loc='upper left')
    # summarize history for loss
    axs[1].plot(history.history['loss'])
    axs[1].plot(history.history['val_loss'])
    axs[1].set_title('Model Loss')
    axs[1].set_ylabel('Loss')
    axs[1].set_xlabel('Epoch')
    axs[1].legend(['train', 'validate'], loc='upper left')
    plt.show()
# list all data in history
print(history.history.keys())
plotmodelhistory(history)
Output -
Conclusion
This approach offers valuable insights for information retrieval, news recommendation systems, and other applications requiring efficient organization and analysis of large volumes of news content.
EXPERIMENT 4
Aim: Design a neural network for predicting house prices using the Boston Housing Price dataset.
Theory
Designing a neural network for predicting house prices using the Boston Housing Price dataset involves constructing a model that learns the complex relationships between various features of a house and its corresponding price. By feeding the network with input features such as the number of rooms, crime rate, and accessibility to highways, and training it on corresponding house prices, the network adjusts its internal parameters through iterative learning to make accurate predictions.
Code-
#import necessary libraries
import pandas as pd
import numpy as np
import tensorflow as tf
from tensorflow import keras
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from keras.callbacks import EarlyStopping
from keras.datasets import boston_housing
# load the dataset
(train_data, train_targets), (test_data, test_targets) = boston_housing.load_data()
# combine the Keras splits into one DataFrame so the data can be re-split below
# (the original references house_df without defining it)
house_df = pd.DataFrame(np.vstack([np.column_stack([train_data, train_targets]),
                                   np.column_stack([test_data, test_targets])]))
# separate the training features and the target variable
feature = house_df.iloc[:, 0:13]  # training variables
target = house_df.iloc[:, 13]     # target variable
print(feature.head())
print('\n',target.head())
#feature normalization
normalized_feature= keras.utils.normalize(feature.values)
print(normalized_feature)
# shuffle and split data into train (~80%) and test (~20%)
X_train, X_test, y_train, y_test = train_test_split(normalized_feature, target.values,
test_size=0.2, random_state=42)
print('training data shape: ',X_train.shape)
print('testing data shape: ',X_test.shape)
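The code above stops after splitting the data; a minimal sketch of the model-building and training steps, assuming a small fully connected regression network (the layer sizes, epoch count, and early-stopping patience below are illustrative choices, not values from the original), could be:

# two hidden ReLU layers and a linear output unit for regression
model = keras.Sequential([
    keras.layers.Dense(64, activation='relu', input_shape=(X_train.shape[1],)),
    keras.layers.Dense(64, activation='relu'),
    keras.layers.Dense(1)  # linear activation: predicts the price directly
])
model.compile(optimizer='adam', loss='mse', metrics=['mae'])
# stop training when the validation loss stops improving
early_stop = EarlyStopping(monitor='val_loss', patience=10, restore_best_weights=True)
history = model.fit(X_train, y_train,
                    validation_split=0.2,
                    epochs=200,
                    callbacks=[early_stop],
                    verbose=0)
# report mean absolute error on the held-out test set
loss, mae = model.evaluate(X_test, y_test)
print('Test MAE:', mae)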
Conclusion - Employing a neural network for house price prediction using the Boston Housing Price dataset proves effective. By leveraging the network's ability to capture intricate patterns in the data, it can accurately estimate house prices based on diverse features.
EXPERIMENT 5
Aim: Build a Convolutional Neural Network for MNIST Handwritten Digit Classification.
Theory
Convolutional Neural Networks (CNNs) excel at MNIST handwritten digit classification due to their ability to extract hierarchical features from images. By employing convolutional layers to detect patterns and pooling layers to reduce dimensionality, CNNs can effectively learn representations that capture the distinctive features of handwritten digits. Through subsequent fully connected layers and softmax activation, the network can accurately classify digits into their respective categories.
Code-
# Import important library
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from keras.datasets import mnist
from keras.utils import to_categorical
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dense, Flatten, Activation, Dropout
#load a dataset
(x_train, y_train),(x_test,y_test) = mnist.load_data()
# Show Data
print(x_train.shape)
print(y_train.shape)
print(x_test.shape)
print(y_test.shape)
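The code above stops after loading and inspecting the data; a minimal sketch of the remaining steps, assuming the standard reshape-and-normalize preprocessing and a small convolution-pooling stack (the filter counts, dropout rate, and epoch count below are illustrative, not taken from the original), could be:

# reshape to (samples, 28, 28, 1) and scale pixel values to [0, 1]
x_train = x_train.reshape(-1, 28, 28, 1).astype('float32') / 255
x_test = x_test.reshape(-1, 28, 28, 1).astype('float32') / 255
# one-hot encode the labels
y_train = to_categorical(y_train)
y_test = to_categorical(y_test)
# convolution and pooling layers extract features; Flatten feeds them to a softmax classifier
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dropout(0.25))
model.add(Dense(10, activation='softmax'))
model.summary()
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train, y_train, validation_data=(x_test, y_test), epochs=5, batch_size=128)

EXPERIMENT 6
Aim: Build a Convolutional Neural Network for classifying images of dogs vs. cats.
Code-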
# Libraries needed by the code below
import os
import numpy as np
import pandas as pd
import tensorflow as tf
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import image_dataset_from_directory

path = 'dog-vs-cat-classification'
classes = os.listdir(path)
classes
fig = plt.gcf()
fig.set_size_inches(16, 16)
cat_dir = os.path.join('dog-vs-cat-classification/cats')
dog_dir = os.path.join('dog-vs-cat-classification/dogs')
cat_names = os.listdir(cat_dir)
dog_names = os.listdir(dog_dir)
pic_index = 210
cat_images = [os.path.join(cat_dir, fname) for fname in cat_names[pic_index-8:pic_index]]
dog_images = [os.path.join(dog_dir, fname) for fname in dog_names[pic_index-8:pic_index]]
for i, img_path in enumerate(cat_images + dog_images):
    sp = plt.subplot(4, 4, i + 1)
    sp.axis('off')
    img = mpimg.imread(img_path)
    plt.imshow(img)
plt.show()
base_dir = 'dog-vs-cat-classification'
# Create datasets
train_datagen = image_dataset_from_directory(base_dir,
                                             image_size=(200, 200),
                                             subset='training',
                                             seed=1,
                                             validation_split=0.1,
                                             batch_size=32)
test_datagen = image_dataset_from_directory(base_dir,
                                            image_size=(200, 200),
                                            subset='validation',
                                            seed=1,
                                            validation_split=0.1,
                                            batch_size=32)
#model building
model = tf.keras.models.Sequential([
    layers.Conv2D(32, (3, 3), activation='relu', input_shape=(200, 200, 3)),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Conv2D(64, (3, 3), activation='relu'),
    layers.MaxPooling2D(2, 2),
    layers.Flatten(),
    layers.Dense(512, activation='relu'),
    layers.BatchNormalization(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.1),
    layers.BatchNormalization(),
    layers.Dense(512, activation='relu'),
    layers.Dropout(0.2),
    layers.BatchNormalization(),
    layers.Dense(1, activation='sigmoid')
])
model.summary()
keras.utils.plot_model(model, show_shapes=True, show_dtype=True)
#compile a model
model.compile( loss='binary_crossentropy',
optimizer='adam',
metrics=['accuracy'] )
#fit a model
history = model.fit(train_datagen,
epochs=10,
validation_data=test_datagen)
history_df = pd.DataFrame(history.history)
history_df.loc[:, ['loss', 'val_loss']].plot()
history_df.loc[:, ['accuracy', 'val_accuracy']].plot()
plt.show()
from keras.preprocessing import image
test_image = image.load_img('1.jpg', target_size=(200, 200))
# show the image
plt.imshow(test_image)
test_image = image.img_to_array(test_image)
test_image = np.expand_dims(test_image, axis=0)
# predict: the sigmoid output is the probability that the image is a dog
# (image_dataset_from_directory assigns labels alphabetically, so cats=0, dogs=1)
result = model.predict(test_image)
if result >= 0.5:
    print("Dog")
else:
    print("Cat")
Output-
Conclusion - Employing a CNN for dog vs. cat classification yields impressive results due to its ability to learn complex image features automatically. Through extensive training on labeled data, the network becomes adept at discerning between the two categories with high accuracy. This approach not only showcases the power of deep learning in image analysis tasks but also provides a practical solution for various real-world applications, from pet identification systems to broader image recognition tasks.