Deep Learning Notes
DEEP LEARNING
(R20A6610)
Prepared by
K.Chandusha
(R20A6610) DEEP LEARNING
COURSE OBJECTIVES:
1. To understand the basic concepts and techniques of Deep Learning and the need for Deep Learning techniques in real-world problems.
2. To understand CNN algorithms and the way to evaluate performance of the CNN architectures.
3. To apply RNN and LSTM to learn, predict and classify real-world problems in the paradigms of Deep Learning.
4. To understand, learn and design GANs for the selected problems.
5. To understand the concept of auto-encoders and enhancing GANs using auto-encoders.
UNIT-I:
INTRODUCTION TO DEEP LEARNING: Historical Trends in Deep Learning, Why DL is Growing, Artificial Neural Network, Non-linear classification example using Neural Networks: XOR/XNOR, Single/Multiple Layer Perceptron, Feed Forward Network, Deep Feed-forward networks, Stochastic Gradient-Based learning, Hidden Units, Architecture Design, Back-Propagation.
UNIT-II:
CONVOLUTION NEURAL NETWORK (CNN): Introduction to CNNs and their applications in computer vision, CNN basic architecture, Activation functions: sigmoid, tanh, ReLU, Softmax layer, Types of pooling layers, Training of CNN in TensorFlow, various popular CNN architectures: VGG, GoogLeNet, ResNet etc., Dropout, Normalization, Data augmentation.
UNIT-III:
RECURRENT NEURAL NETWORK (RNN): Introduction to RNNs and their applications in sequential data analysis, Backpropagation through time (BPTT), Vanishing Gradient Problem, gradient clipping, Long Short-Term Memory (LSTM) Networks, Gated Recurrent Units, Bidirectional LSTMs, Bidirectional RNNs.
UNIT-IV
GENERATIVE ADVERSARIAL NETWORKS (GANS): Generative models, Concept
and principles of GANs, Architecture of GANs (generator and discriminator networks),
Comparison between discriminative and generative models, Generative Adversarial
Networks (GANs), Applications of GANs.
UNIT-V
AUTO-ENCODERS: Auto-encoders, Architecture and components of auto-encoders
(encoder and decoder), Training an auto-encoder for data compression and
reconstruction, Relationship between Autoencoders and GANs, Hybrid Models:
Encoder-Decoder GANs.
TEXTBOOKS:
1. Deep Learning: An MIT Press Book, by Ian Goodfellow, Yoshua Bengio and Aaron Courville.
2. Michael Nielsen, Neural Networks and Deep Learning, Determination Press, 2015.
3. Satish Kumar, Neural Networks: A Classroom Approach, Tata McGraw-Hill Education, 2004.
REFERENCES:
1. Deep Learning with Python, Francois Chollet, Manning Publications, 2018.
2. Advanced Deep Learning with Keras, Rowel Atienza, Packt Publications, 2018.
COURSE OUTCOMES:
CO1: Understand the basic concepts and techniques of Deep Learning and the need for Deep Learning techniques in real-world problems.
CO2: Understand CNN algorithms and the way to evaluate performance of the CNN architectures.
CO3: Apply RNN and LSTM to learn, predict and classify real-world problems in the paradigms of Deep Learning.
CO4: Understand, learn and design GANs for the selected problems.
CO5: Understand the concept of auto-encoders and enhancing GANs using auto-encoders.
UNIT-I:
INTRODUCTION TO DEEP LEARNING: Historical Trends in Deep Learning, Why DL is Growing, Artificial Neural Network, Non-linear classification example using Neural Networks: XOR/XNOR, Single/Multiple Layer Perceptron, Feed Forward Network, Deep Feed-forward networks, Stochastic Gradient-Based learning, Hidden Units, Architecture Design, Back-Propagation, Deep learning frameworks and libraries (e.g., TensorFlow/Keras, PyTorch).
INTRODUCTION TO DEEP LEARNING:
Deep learning is a branch of machine learning based on artificial neural networks. It is capable of learning complex patterns and relationships within data, without everything having to be explicitly programmed. It has become increasingly popular in recent years due to advances in processing power and the availability of large datasets. Deep learning is built on artificial neural networks (ANNs), also known as deep neural networks (DNNs). These neural networks are inspired by the structure and function of the human brain's biological neurons, and they are designed to learn from large amounts of data.
1. Deep Learning is a subfield of Machine Learning that involves the use of neural networks to model and solve complex problems. Neural networks are modeled after the structure and function of the human brain and consist of layers of interconnected nodes that process and transform data.
2. The key characteristic of Deep Learning is the use of deep neural networks, which have multiple layers of interconnected nodes. These networks can learn complex representations of data by discovering hierarchical patterns and features in the data. Deep Learning algorithms can automatically learn and improve from data without the need for manual feature engineering.
3. Deep Learning has achieved significant success in various fields, including image recognition, natural language processing, speech recognition, and recommendation systems. Some of the popular Deep Learning architectures include Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), and Deep Belief Networks (DBNs).
4. Training deep neural networks typically requires a large amount of data and computational resources. However, the availability of cloud computing and the development of specialized hardware, such as Graphics Processing Units (GPUs), has made it easier to train deep neural networks.
In summary, Deep Learning is a subfield of Machine Learning that involves the use of deep neural networks to model and solve complex problems. Deep Learning has achieved significant success in various fields, and its use is expected to continue to grow as more data and more powerful computing resources become available.
What is Deep Learning?
Deep learning is the branch of Machine Learning which is based on artificial neural network architecture. An artificial neural network (ANN) uses layers of interconnected nodes called neurons that work together to process and learn from the input data.
In a fully connected deep neural network, there is an input layer and one or more hidden layers connected one after the other. Each neuron receives input from the previous layer's neurons or from the input layer. The output of one neuron becomes the input to other neurons in the next layer of the network, and this process continues until the final layer produces the output of the network. The layers of the neural network transform the input data through a series of nonlinear transformations, allowing the network to learn complex representations of the input data.
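This layer-by-layer flow can be sketched in a few lines of NumPy. This is an illustrative toy example; the layer sizes, random weights, and the choice of sigmoid activation are assumptions made for the sketch, not prescribed by the notes:

```python
import numpy as np

def sigmoid(z):
    # Nonlinear transformation: squashes each value into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# A tiny fully connected network: 3 inputs -> 4 hidden units -> 2 outputs
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)   # input -> hidden
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)   # hidden -> output

x = np.array([0.5, -1.0, 2.0])   # one input example

h = sigmoid(W1 @ x + b1)   # each hidden neuron: weighted sum, then non-linearity
y = sigmoid(W2 @ h + b2)   # hidden outputs become inputs to the next layer
print(y)                   # the network's final output
```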
Today, deep learning has become one of the most popular and visible areas of machine learning, due to its success in a variety of applications such as computer vision, natural language processing, and reinforcement learning.
Deep learning can be used for supervised, unsupervised as well as reinforcement machine learning, and it uses different approaches for each of these settings.
Supervised Machine Learning: Supervised machine learning is the machine learning technique in which the neural network learns to make predictions or classify data based on labeled datasets. Here we provide both the input features and the target variables. The neural network learns to make predictions based on the cost or error that comes from the difference between the predicted and the actual target; the process of propagating this error back through the network is known as backpropagation. Deep learning algorithms like convolutional neural networks and recurrent neural networks are used for many supervised tasks such as image classification and recognition, sentiment analysis, and language translation.
Unsupervised Machine Learning: Unsupervised machine learning is the machine learning technique in which the neural network learns to discover patterns or to cluster the dataset based on unlabeled data. Here there are no target variables; the machine has to determine the hidden patterns or relationships within the datasets on its own. Deep learning algorithms like autoencoders and generative models are used for unsupervised tasks like clustering, dimensionality reduction, and anomaly detection.
Reinforcement Machine Learning: Reinforcement machine learning is the machine learning technique in which an agent learns to make decisions in an environment to maximize a reward signal. The agent interacts with the environment by taking actions and observing the resulting rewards. Deep learning can be used to learn policies, or sets of actions, that maximize the cumulative reward over time. Deep reinforcement learning algorithms like Deep Q-Networks (DQN) and Deep Deterministic Policy Gradient (DDPG) are used for tasks like robotics and game playing.
Artificial neural networks:
Artificial neural networks are built on the principles of the structure and operation of human neurons. They are also known as neural networks or neural nets. An artificial neural network's input layer, which is the first layer, receives input from external sources and passes it on to the hidden layer, which is the second layer. Each neuron in the hidden layer gets information from the neurons in the previous layer, computes the weighted total, and then transfers it to the neurons in the next layer. These connections are weighted, which means that the impact of the inputs from the preceding layer is scaled by giving each input a distinct weight. These weights are then adjusted during the training process to enhance the performance of the model.
Fully Connected Artificial Neural Network
Artificial neurons, also known as units, are found in artificial neural networks. The whole artificial neural network is composed of these artificial neurons, which are arranged in a series of layers. Whether a layer has a dozen units or millions of units depends on the complexity of the underlying patterns in the dataset. Commonly, an artificial neural network has an input layer, an output layer as well as hidden layers. The input layer receives data from the outside world which the neural network needs to analyze or learn about.
In a fully connected artificial neural network, there is an input layer and one or more hidden layers connected one after the other. Each neuron receives input from the previous layer's neurons or the input layer. The output of one neuron becomes the input to other neurons in the next layer of the network, and this process continues until the final layer produces the output of the network. After passing through one or more hidden layers, the data is transformed into valuable data for the output layer. Finally, the output layer provides an output in the form of the artificial neural network's response to the data that comes in.
In the bulk of neural networks, units are linked to one another from one layer to another. Each of these links has a weight that controls how much one unit influences another. The neural network learns more and more about the data as it moves from one unit to another, ultimately producing an output from the output layer.
Difference between Machine Learning and Deep Learning:
Machine learning and deep learning are both subsets of artificial intelligence, and there are many similarities and differences between them.

Machine Learning: Can work on a smaller amount of data.
Deep Learning: Requires a larger volume of data compared to machine learning.

Machine Learning: Less complex and easy to interpret the results.
Deep Learning: More complex; it works like a black box, and interpretations of the results are not easy.
Types of neural networks:
Deep learning models are able to automatically learn features from the data, which makes them well-suited for tasks such as image recognition, speech recognition, and natural language processing. The most widely used architectures in deep learning are feedforward neural networks, convolutional neural networks (CNNs), and recurrent neural networks (RNNs).
Feedforward neural networks (FNNs) are the simplest type of ANN, with a linear flow of information through the network. FNNs have been widely used for tasks such as image classification, speech recognition, and natural language processing.
Convolutional Neural Networks (CNNs) are designed specifically for image and video recognition tasks. CNNs are able to automatically learn features from images, which makes them well-suited for tasks such as image classification, object detection, and image segmentation.
Recurrent Neural Networks (RNNs) are a type of neural network that is able to process sequential data, such as time series and natural language. RNNs are able to maintain an internal state that captures information about the previous inputs, which makes them well-suited for tasks such as speech recognition, natural language processing, and language translation.
Applications of Deep Learning:
The main applications of deep learning can be divided into computer vision, natural language processing (NLP), and reinforcement learning.
Computer vision
In computer vision, deep learning models enable machines to identify and understand visual data. Some of the main applications of deep learning in computer vision include:
Object detection and recognition: Deep learning models can be used to identify and locate objects within images and videos, making it possible for machines to perform tasks such as self-driving, surveillance, and robotics.
Image classification: Deep learning models can be used to classify images into categories such as animals, plants, and buildings. This is used in applications such as medical imaging, quality control, and image retrieval.
Challenges in Deep Learning:
Deep learning has made significant advancements in various fields, but there are still some challenges that need to be addressed. Here are some of the main challenges in deep learning:
1. Data availability: Deep learning requires large amounts of data to learn from, and gathering enough data for training is a major concern.
Advantages of Deep Learning:
Disadvantages of Deep Learning:
1. High computational requirements: Deep Learning models require large amounts of data and computational resources to train and optimize.
2. Requires large amounts of labeled data: Deep Learning models often require a large amount of labeled data for training, which can be expensive and time-consuming to acquire.
3. Interpretability: Deep Learning models can be challenging to interpret, making it difficult to understand how they make decisions.
4. Overfitting: Deep Learning models can sometimes overfit the training data, resulting in poor performance on new and unseen data.
5. Black-box nature: Deep Learning models are often treated as black boxes, making it difficult to understand how they work and how they arrived at their predictions.
In summary, while Deep Learning offers many advantages, including high accuracy and scalability, it also has some disadvantages, such as high computational requirements, the need for large amounts of labeled data, and interpretability challenges. These limitations need to be carefully considered when deciding whether to use Deep Learning for a specific task.
Historical Trends in Deep Learning:
Deep learning has experienced significant historical trends since its inception. Here are some key milestones and trends that have shaped the field:
5. Big Data and GPUs: The early 2010s marked a turning point for deep learning with the advent of big data and the availability of powerful Graphics Processing Units (GPUs).
• The abundance of labeled data, combined with GPU acceleration, enabled the training of large-scale deep neural networks and significantly improved performance.
6. ImageNet and Deep Learning Renaissance: The ImageNet Large Scale Visual Recognition Challenge in 2012, won by a deep neural network known as AlexNet, brought deep learning into the spotlight.
• This event sparked a renaissance in the field, encouraging researchers to explore deep learning architectures and techniques across various domains.
7. Deep Learning in Natural Language Processing (NLP): Deep learning methods have also been widely adopted for NLP tasks.
10. Explainability and Interpretability: As deep learning models have become increasingly complex, researchers have focused on improving their explainability and interpretability.
• Techniques like attention mechanisms, saliency maps, and model-agnostic interpretability methods aim to shed light on the decision-making processes of deep learning models.
Why DL is Growing:
• Processing power needed for deep learning is readily becoming available using GPUs, distributed computing and powerful CPUs.
• Moreover, as the amount of data grows, deep learning models seem to outperform machine learning models.
• Focus on customization and real-time decisions.
Process in ML/DL:
Artificial Neural Networks:
Artificial neural networks contain artificial neurons which are called units. These units are arranged in a series of layers that together constitute the whole artificial neural network in a system.
A layer can have only a dozen units or millions of units, as this depends on how complex the neural network will need to be to learn the hidden patterns in the dataset. Commonly, an artificial neural network has an input layer, an output layer as well as hidden layers.
The input layer receives data from the outside world which the neural network needs to analyze or learn about. Then this data passes through one or multiple hidden layers that transform the input into data that is valuable for the output layer. Finally, the output layer provides an output in the form of a response of the artificial neural network to the input data provided.
In the majority of neural networks, units are interconnected from one layer to another. Each of these connections has a weight that determines the influence of one unit on another unit. As the data transfers from one unit to another, the neural network learns more and more about the data, which eventually results in an output from the output layer.
The structures and operations of human neurons serve as the basis for artificial neural networks, which are also known as neural networks or neural nets. The input layer of an artificial neural network is the first layer; it receives input from external sources and releases it to the hidden layer, which is the second layer. In the hidden layer, each neuron receives input from the previous layer's neurons, computes the weighted sum, and sends it to the neurons in the next layer.
These connections are weighted, meaning that the effects of the inputs from the previous layer are scaled by assigning a different weight to each input, and these weights are adjusted during the training process to improve model performance.
Artificial neurons vs. Biological neurons
The concept of artificial neural networks comes from biological neurons found in animal brains, so they share a lot of similarities in structure and function.
Structure: The structure of artificial neural networks is inspired by biological neurons. A biological neuron has a cell body or soma to process the impulses, dendrites to receive them, and an axon that transfers them to other neurons. The input nodes of artificial neural networks receive input signals, the hidden layer nodes compute these input signals, and the output layer nodes compute the final output by processing the hidden layer's results using activation functions.
Biological Neuron        Artificial Neuron
Dendrite                 Inputs
Cell nucleus or Soma     Nodes
Synapses                 Weights
Axon                     Output
Synapses: Synapses are the links between biological neurons that enable the transmission of impulses from dendrites to the cell body. In artificial neurons, synapses correspond to the weights that join the nodes of one layer to the nodes of the next layer. The strength of a link is determined by its weight value.
Learning: In biological neurons, learning happens in the cell body nucleus or soma, which has a nucleus that helps to process the impulses. An action potential is produced and travels through the axons if the impulses are powerful enough to reach the threshold. This is made possible by synaptic plasticity, which represents the ability of synapses to become stronger or weaker over time in reaction to changes in their activity. In artificial neural networks, backpropagation is the technique used for learning; it adjusts the weights between nodes according to the error, or difference, between predicted and actual outcomes.
Biological Neuron        Artificial Neuron
Synaptic plasticity      Backpropagation
How do Artificial Neural Networks learn?
Artificial neural networks are trained using a training set. For example, suppose you want to teach an ANN to recognize a cat. It is shown thousands of different images of cats so that the network can learn to identify a cat. Once the neural network has been trained enough using images of cats, you need to check if it can identify cat images correctly. This is done by making the ANN classify the images it is provided, deciding whether they are cat images or not. The output obtained by the ANN is corroborated by a human-provided description of whether the image is a cat image or not.
If the ANN identifies incorrectly, then back-propagation is used to adjust whatever it has learned during training. Backpropagation is done by fine-tuning the weights of the connections in the ANN's units based on the error rate obtained. This process continues until the artificial neural network can correctly recognize a cat in an image with the minimal possible error rate.
What are the types of Artificial Neural Networks?
Applications of Artificial Neural Networks
1. Social Media: Artificial neural networks are used heavily in social media. For example, take the 'People you may know' feature on Facebook that suggests people you might know in real life so that you can send them friend requests. This effect is achieved by using artificial neural networks that analyze your profile, your interests, your current friends, their friends, and various other factors to calculate the people you might potentially know. Another common application of machine learning in social media is facial recognition. This is done by finding around 100 reference points on the person's face and then matching them with those already available in the database using convolutional neural networks.
2. Marketing and Sales: When you log onto e-commerce sites like Amazon and Flipkart, they recommend products to buy based on your previous browsing history. Similarly, suppose you love pasta; then Zomato, Swiggy, etc. will show you restaurant recommendations based on your tastes and previous order history. This is true across all new-age marketing segments like book sites, movie services, hospitality sites, etc., and it is done by implementing personalized marketing. This uses artificial neural networks to identify the customer's likes, dislikes, previous shopping history, etc., and then tailor the marketing campaigns accordingly.
3. Healthcare: Artificial neural networks are used in oncology to train algorithms that can identify cancerous tissue at the microscopic level with the same accuracy as trained physicians. Various rare diseases may manifest in physical characteristics and can be identified in their premature stages by using facial analysis on patient photos. So the full-scale implementation of artificial neural networks in the healthcare environment can only enhance the diagnostic abilities of medical experts and ultimately lead to the overall improvement in the quality of medical care all over the world.
4. Personal Assistants: Applications like Siri, Alexa and Cortana, which come built into phones, are personal assistants.
Non-linear classification example using Neural Networks: XOR/XNOR:
XOR problem with neural networks:

X    Y    Output
0    0    0
0    1    1
1    0    1
1    1    0

Output = X.Y' + X'.Y

The linear separability of points
So here we can see that the pink dots and red triangle points in the plot do not overlap each other, and the linear line easily separates the two classes, where the upper region of the plot can be considered one classification and the region below can be considered the other classification.
Need for linear separability in neural networks
How to solve the XOR problem with neural networks:
Example: For X1 = 0 and X2 = 0 we should get an output of 0. Let us solve it.
Solution: Considering X1 = 0 and X2 = 0:
H1 = ReLU(0·1 + 0·1 + 0) = 0
H2 = ReLU(0·1 + 0·1 + 0) = 0
So now we have obtained the values that are propagated from the input layer to the hidden layer. Now, let us propagate from the hidden layer to the output layer:
Y = ReLU(0·1 + 0·(−2)) = 0
So, among the various logical operations, the XOR logical operation is one such problem wherein linear separability of the data points is not possible using single neurons or perceptrons. So, to solve the XOR problem with neural networks, it is necessary to use multiple neurons in the neural network architecture with certain weights and appropriate activation functions.
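A minimal NumPy sketch of such a multi-neuron solution is below. The particular weights (hidden biases 0 and −1, output weights 1 and −2) follow the classic XOR solution given in Goodfellow et al.'s Deep Learning textbook; they are one valid choice, not the only one:

```python
import numpy as np

def relu(z):
    return np.maximum(0, z)

# Hidden layer: two ReLU units; output: one linear unit.
W = np.array([[1.0, 1.0],
              [1.0, 1.0]])    # input -> hidden weights
c = np.array([0.0, -1.0])     # hidden biases
w = np.array([1.0, -2.0])     # hidden -> output weights

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)

H = relu(X @ W + c)           # hidden activations for all four inputs
y = H @ w                     # output: h1 - 2*h2
print(y)                      # -> [0. 1. 1. 0.], exactly XOR
```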
A regular neural network looks like this:
The perceptron consists of 4 parts:
o Input values or One input layer: The input layer of the perceptron is made of artificial input neurons and takes the initial data into the system for further processing.
o Weights and Bias:
Weight: It represents the strength of the connection between units. If the weight from node 1 to node 2 is larger, then neuron 1 has a more considerable influence on neuron 2.
Bias: It is the same as the intercept added in a linear equation. It is an additional parameter whose task is to modify the output along with the weighted sum of the inputs to the other neuron.
o Net sum: It calculates the total sum.
o Activation Function: Whether a neuron is activated or not is determined by an activation function. The activation function calculates the weighted sum and further adds bias to it to give the result.
A standard neural network looks like the below diagram.
How does it work?
The perceptron works in the simple steps given below:
a. In the first step, all the inputs x are multiplied by their weights w.
b. In this step, add all the multiplied values and call the result the weighted sum.
c. In the last step, apply the weighted sum to the correct activation function, for example a unit step activation function.
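The three steps above translate directly to code. A small sketch, where the AND weights and bias are hand-picked for illustration:

```python
import numpy as np

def unit_step(z):
    # Step c: the neuron fires (outputs 1) only at or above the threshold 0
    return 1 if z >= 0 else 0

def perceptron(x, w, b):
    # Step a: multiply each input by its weight.
    # Step b: add the products (plus the bias) into the weighted sum.
    weighted_sum = np.dot(w, x) + b
    return unit_step(weighted_sum)

# Example: a perceptron computing logical AND (weights/bias picked by hand)
w, b = np.array([1.0, 1.0]), -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x, dtype=float), w, b))
```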
There are two types of architecture. These types focus on the functionality of artificial neural networks as follows:
o Single Layer Perceptron
o Multi-Layer Perceptron
Single Layer Perceptron
The single-layer perceptron was the first neural network model, proposed in 1958 by Frank Rosenblatt. It is one of the earliest models for learning. Our goal is to find a linear decision function determined by the weight vector w and the bias parameter b.
This was the first proposed neural model. The content of the neuron's local memory consists of a vector of weights. The output of the single-layer perceptron is calculated by taking the sum of each component of the input vector multiplied by the corresponding element of the weight vector. The value that is displayed in the output is the input to an activation function.
Now, we have to perform the following necessary steps of training:
o For each element of the training set, the error is calculated as the difference between the desired output and the actual output. The calculated error is used to adjust the weights.
o The process is repeated until the error made on the entire training set is less than the specified limit, or until the maximum number of iterations has been reached.
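A minimal sketch of this training loop, assuming the unit step activation and the simple error-driven weight update described above (the learning rate and toy OR dataset are illustrative choices):

```python
import numpy as np

def train_perceptron(X, t, lr=0.1, max_iter=100):
    """For each training example, compute error = desired - actual and
    nudge the weights; stop when the whole set is classified correctly."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(max_iter):
        total_error = 0
        for x, target in zip(X, t):
            y = 1 if np.dot(w, x) + b >= 0 else 0
            error = target - y          # desired output minus actual output
            w += lr * error * x         # adjust the weights by the error
            b += lr * error
            total_error += abs(error)
        if total_error == 0:            # error on the entire training set is zero
            break
    return w, b

# Linearly separable toy data: logical OR
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, t)
print(w, b)
```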
Multi-Layer Perceptron
A multi-layer perceptron has one input layer, and for each input there is one neuron (or node); it has one output layer with a single node for each output; and it can have any number of hidden layers, where each hidden layer can have any number of nodes. A schematic diagram of a Multi-Layer Perceptron (MLP) is depicted below.
In the multi-layer perceptron diagram above, we can see that there are three inputs and thus three input nodes, and the hidden layer has three nodes. The output layer gives two outputs, therefore there are two output nodes. The nodes in the input layer take input and forward it for further processing; in the diagram above, the nodes in the input layer forward their output to each of the three nodes in the hidden layer, and in the same way, the hidden layer processes the information and passes it to the output layer.
Every node in the multi-layer perceptron uses a sigmoid activation function. The sigmoid activation function takes real values as input and converts them to numbers between 0 and 1 using the sigmoid formula σ(x) = 1 / (1 + e^(−x)).
Feed Forward Network:
Why are neural networks used?
Machine learning models are built on assumptions, such as the one where X and Y are related. An inductive bias of linear regression is the linear relationship between X and Y. In this way, a line or hyperplane gets fitted to the data.
When X and Y have a complex relationship, it can get difficult for a linear regression method to predict Y. For this situation, the curve must be multi-dimensional or approximate to the relationship.
Feed forward neural networks are artificial neural networks in which nodes do not form loops. This type of neural network is also known as a multi-layer neural network, as all information is only passed forward.
During data flow, input nodes receive data, which travels through hidden layers and exits through output nodes. No links exist in the network that could be used to send information back from the output node.
A feed forward neural network approximates functions in the following way:
Feed forward neural networks serve as the basis for object detection in photos, as shown in the Google Photos app.
What is the working principle of a feed forward neural network?
When the feed forward neural network gets simplified, it can appear as a single layer perceptron.
This model multiplies inputs with weights as they enter the layer. Afterward, the weighted input values get added together to get the sum. As long as the sum of the values rises above a certain threshold, set at zero, the output value is usually 1, while if it falls below the threshold, it is usually -1.
As a feed forward neural network model, the single-layer perceptron often gets used for classification. Machine learning can also get integrated into single-layer perceptrons. Through training, neural networks can adjust their weights based on a rule called the delta rule, which helps them compare their outputs with the intended values.
Layers of feed forward neural network
Input layer:
The neurons of this layer receive input and pass it on to the other layers of the network. The number of features or attributes in the dataset must match the number of neurons in the input layer.
Output layer:
According to the type of model being built, this layer represents the forecasted feature.
Hidden layer:
There are several neurons in hidden layers that transform the input before actually transferring it to the next layer. This network gets constantly updated with weights in order to make prediction easier.
Neuron weights:
Weights describe the strength of the connection between neurons; they are the values adjusted during training.
Neurons:
Artificial neurons, adapted from biological neurons, are used in feed forward networks. A neural network consists of artificial neurons, which function in two ways: first, they create weighted input sums, and second, they activate those sums through an activation function.
Activation Function:
a) Sigmoid: Maps any real-valued input to a value between 0 and 1.
b) Tanh: Maps any real-valued input to a value between -1 and 1.
c) Rectified Linear Unit (ReLU): Only positive values are allowed to flow through this function. Negative values get mapped to 0.
Function in feed forward neural network:
Cost function
The mean squared error cost function is defined as:

C(w, b) = (1/2n) Σ_x ‖y(x) − a‖²

where
w = the weights gathered in the network
b = the biases
n = the number of training inputs
a = the vector of outputs for input x
x = an input
‖v‖ = the usual length (norm) of vector v
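A short sketch of this cost, using the definitions above (the toy desired/actual values are made up for illustration):

```python
import numpy as np

def quadratic_cost(desired, actual):
    """C(w, b) = (1/2n) * sum_x ||y(x) - a||^2, where each row of
    `desired` / `actual` holds one training example's output vector."""
    n = desired.shape[0]
    return np.sum(np.linalg.norm(desired - actual, axis=1) ** 2) / (2 * n)

# Toy example: n = 3 training inputs, 2 output units each
desired = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
actual  = np.array([[0.8, 0.1], [0.2, 0.7], [0.6, 0.3]])
print(quadratic_cost(desired, actual))   # a single scalar cost
```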
Loss function
Gradient learning algorithm
Output units
In the output layer, output units are those units that provide the desired output or prediction, thereby fulfilling the task that the neural network needs to complete.
AdvantagesoffeedforwardNeuralNetworks
Machinelearningcanbeboostedwithfeedforwardneuralnetworks'simplified
architecture.
Multi-networkinthefeedforwardnetworksoperateindependently,witha
moderated intermediary.
Complextasksneedseveralneuronsinthenetwork.
Neural networks can handle and process nonlinear data easily comparedto
perceptrons and sigmoid neurons, which are otherwise complex.
A neural network deals with the complicated problem of decision
boundaries.
Depending on the data, the neural network architecture can vary. For
example, convolutional neural networks (CNNs) perform exceptionally
well in image processing, whereas Recurrent Neural Networks(RNNs)
perform well in text and voice processing.
Neural networks need Graphics Processing Units (GPUs) to handle large
datasets for massive computational and hardware performance. Several
GPUs get used widely in the market, including Kaggle Notebooks and
Google Collab Notebooks.
Applications of feed forward neural networks:
There are many applications for these neural networks. The following are a few of them.
A) Physiological feed forward system: In this situation, feed forward management can be identified because the central involuntary system regulates the heartbeat before exercise.
B) Gene regulation and feed forward: Detecting non-temporary changes to the atmosphere is a function of this motif as a feed forward system. You can find the majority of this pattern in the illustrious networks.
C) Automation and machine management: Automation control using feed forward is one of the disciplines in automation.
D) Parallel feed forward compensation with derivative
Understanding the math behind neural networks
Deep Feed-forward networks:
The following is a simplified visualization:
The architecture of the network:
Research is still ongoing, and for now, the only way to determine this configuration is by experimenting with it. While it is challenging to find the appropriate architecture, we need to try many configurations before finding the one that can represent the target function.
What is backpropagation in a feed forward neural network?
The goal when learning a neural network is to reduce the cost function given the training data. The network weights and biases of all neurons in each layer determine the cost function. Backpropagation is used to iteratively calculate the gradient of the cost function, and the weights and biases are then updated in the opposite direction of the gradient to reduce the cost.
In the backpropagation formulas, the error is defined as follows. These are the standard four backpropagation equations, reconstructed here in the usual notation, where L stands for the output layer, g for the activation function, ∇ for the gradient, and ⊙ for the element-wise product:

δ^L = ∇_a C ⊙ g′(z^L)
δ^l = ((w^(l+1))^T δ^(l+1)) ⊙ g′(z^l)
∂C/∂b_j^l = δ_j^l
∂C/∂w_jk^l = a_k^(l−1) δ_j^l

The first equation shows how to calculate the error at the output layer for sample j. Following that, we can use the second equation to calculate the error in the layer just before the output layer.
Based on the error values of the next layer, the second equation can calculate the error in any layer. Because this algorithm calculates errors backward, it is known as backpropagation. For sample j, we calculate the gradient of the loss function with respect to the biases and weights using the third and fourth equations.
Stochastic Gradient Descent (SGD):
Gradient Descent is an iterative optimization process that searches for an objective function's optimum value (minimum/maximum). It is one of the most widely used methods for optimizing machine learning models.
1. Stochastic Gradient Descent (SGD):
Stochastic Gradient Descent (SGD) is a variant of the Gradient Descent algorithm that is used for optimizing machine learning models. It addresses the computational inefficiency of traditional Gradient Descent methods when dealing with large datasets in machine learning projects.
In SGD, instead of using the entire dataset for each iteration, only a single random training example (or a small batch) is selected to calculate the gradient and update the model parameters. This random selection introduces randomness into the optimization process, hence the term "stochastic" in Stochastic Gradient Descent.
The advantage of using SGD is its computational efficiency, especially when dealing with large datasets. By using a single example or a small batch, the computational cost per iteration is significantly reduced compared to traditional Gradient Descent methods that require processing the entire dataset.
Stochastic Gradient Descent Algorithm:
Initialization: Randomly initialize the parameters of the model.
Set Parameters: Determine the number of iterations and the learning rate (alpha) for updating the parameters.
Stochastic Gradient Descent Loop: Repeat the following steps until the model converges or reaches the maximum number of iterations:
a. Shuffle the training dataset to introduce randomness.
b. Iterate over each training example (or a small batch) in the shuffled order.
c. Compute the gradient of the cost function with respect to the model parameters using the current training example (or batch).
d. Update the model parameters by taking a step in the direction of the negative gradient, scaled by the learning rate.
e. Evaluate the convergence criteria, such as the difference in the cost function between iterations.
Return Optimized Parameters: Once the convergence criteria are met or the maximum number of iterations is reached, return the optimized model parameters.
In SGD, since only one sample from the dataset is chosen at random for each iteration, the path taken by the algorithm to reach the minimum is usually noisier than for the typical Gradient Descent algorithm. But that doesn't matter much, because the path taken by the algorithm does not matter as long as we reach the minimum, and with a significantly shorter training time.
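A minimal sketch of the SGD loop above, applied to one-parameter linear regression on synthetic data (the learning rate, epoch count, and toy data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy regression data: y = 3x + 2 plus a little noise
X = rng.uniform(-1, 1, size=200)
y = 3 * X + 2 + rng.normal(0, 0.1, size=200)

w, b = 0.0, 0.0           # initialization
lr, epochs = 0.1, 20      # set parameters: learning rate and iterations

for _ in range(epochs):
    order = rng.permutation(len(X))      # a. shuffle the training data
    for i in order:                      # b. one example at a time
        pred = w * X[i] + b
        grad_w = (pred - y[i]) * X[i]    # c. gradient from this single example
        grad_b = pred - y[i]
        w -= lr * grad_w                 # d. step along the negative gradient
        b -= lr * grad_b

print(w, b)   # should approach the true values 3 and 2
```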
Hidden Units:
In neural networks, a hidden layer is located between the input and output of the algorithm, in which the function applies weights to the inputs and directs them through an activation function as the output. In short, the hidden layers perform nonlinear transformations of the inputs entered into the network. Hidden layers vary depending on the function of the neural network, and similarly, the layers may vary depending on their associated weights.
How does a Hidden Layer work?
Hidden layers, simply put, are layers of mathematical functions each designed to produce an output specific to an intended result. For example, some forms of hidden layers are known as squashing functions. These functions are particularly useful when the intended output of the algorithm is a probability, because they take an input and produce an output value between 0 and 1, the range for defining probability.
Hidden layers allow for the function of a neural network to be broken down into specific transformations of the data. Each hidden layer function is specialized to produce a defined output. For example, hidden layer functions that are used to identify human eyes and ears may be used in conjunction by subsequent layers to identify faces in images. While the functions to identify eyes alone are not enough to independently recognize objects, they can function jointly within a neural network.
Hidden Layers and Machine Learning:
Hidden layers are very common in neural networks; however, their use and architecture often vary from case to case. As referenced above, hidden layers can be separated by their functional characteristics. For example, in a CNN used for object recognition, a hidden layer that is used to identify wheels cannot solely identify a car; however, when placed in conjunction with additional layers used to identify windows, a large metallic body, and headlights, the neural network can then make predictions and identify possible cars within visual data.
Choosing Hidden Layers
1. Well, if the data is linearly separable then you don't need any hidden layers at all.
3. If the data has large dimensions or many features, then to get an optimum solution, 3 to 5 hidden layers can be used.
Choosing Nodes in Hidden Layers
Once the hidden layers have been decided, the next task is to choose the number of nodes in each hidden layer.
2. The most appropriate number of hidden neurons is sqrt(input layer nodes * output layer nodes).
The above heuristics are only general guidelines, and they can be moulded according to the use case. Sometimes the number of nodes in hidden layers can also increase in subsequent layers, and the number of hidden layers can also be more than the ideal case. This all depends upon the use case and the problem statement that we are dealing with.
Architecture Design:
Types of neural network models are listed below:
Perceptron
Feed Forward Neural Network
Multilayer Perceptron
Convolutional Neural Network
Radial Basis Functional Neural Network
Recurrent Neural Network
LSTM - Long Short-Term Memory
Sequence to Sequence Models
Modular Neural Network
An Introduction to Artificial Neural Networks
Artificial neural networks are inspired by the biological neurons within the human body which activate under certain circumstances, resulting in a related action performed by the body in response. Artificial neural nets consist of various layers of interconnected artificial neurons powered by activation functions that help in switching them ON/OFF. Like traditional machine algorithms, here too, there are certain values that neural nets learn in the training phase.
Briefly, each neuron receives a multiplied version of inputs and random weights, which is then added with a static bias value (unique to each neuron layer); this is then passed to an appropriate activation function which decides the final value to be given out of the neuron. There are various activation functions available as per the nature of the input values. Once the output is generated from the final neural net layer, the loss function (input vs. output) is calculated, and backpropagation is performed where the weights are adjusted to make the loss minimum. Finding optimal values of weights is what the overall operation focuses on.
Please refer to the following for better understanding.
Weights are numeric values that are multiplied by inputs. In backpropagation, they are modified to reduce the loss. In simple words, weights are machine-learned values from neural networks. They self-adjust depending on the difference between predicted outputs and training inputs.
Activation Function is a mathematical formula that helps the neuron to switch ON/OFF.
Input layer represents the dimensions of the input vector.
Hidden layer represents the intermediary nodes that divide the input space into regions with (soft) boundaries. It takes in a set of weighted inputs and produces output through an activation function.
Output layer represents the output of the neural network.
Backpropagation:
Backpropagation Process in Deep Neural Network:
Input values:
X1 = 0.05
X2 = 0.10

Initial weights:
W1 = 0.15    W5 = 0.40
W2 = 0.20    W6 = 0.45
W3 = 0.25    W7 = 0.50
W4 = 0.30    W8 = 0.55

Bias values:
b1 = 0.35    b2 = 0.60

Target values:
T1 = 0.01
T2 = 0.99
Now, we first calculate the values of H1 and H2 by a forward pass.

Forward Pass
To find the value of H1 we first multiply the input values by the weights:
H1 = x1 × w1 + x2 × w2 + b1
H1 = 0.05 × 0.15 + 0.10 × 0.20 + 0.35
H1 = 0.3775
To calculate the final result of H1, we apply the sigmoid function:
H1 = 1 / (1 + e^(−0.3775)) = 0.593269992
We will calculate the value of H2 in the same way as H1:
H2 = x1 × w3 + x2 × w4 + b1
H2 = 0.05 × 0.25 + 0.10 × 0.30 + 0.35
H2 = 0.3925
To calculate the final result of H2, we apply the sigmoid function:
H2 = 1 / (1 + e^(−0.3925)) = 0.596884378
Now we calculate y1 and y2 in the same way, using the hidden layer outputs:
y1 = H1 × w5 + H2 × w6 + b2
y1 = 0.593269992 × 0.40 + 0.596884378 × 0.45 + 0.60
y1 = 1.10590597
To calculate the final result of y1, we apply the sigmoid function:
y1 = 1 / (1 + e^(−1.10590597)) = 0.75136507
y2 = H1 × w7 + H2 × w8 + b2
y2 = 0.593269992 × 0.50 + 0.596884378 × 0.55 + 0.60
y2 = 1.2249214
To calculate the final result of y2, we apply the sigmoid function:
y2 = 1 / (1 + e^(−1.2249214)) = 0.772928465
Our target values are 0.01 and 0.99, so our y1 and y2 values do not match the target values T1 and T2. Now we will find the total error, which is simply the difference between the outputs and the target outputs. The total error is calculated as:
E_total = Σ ½ (target − output)²
So the total error is:
E_total = ½ (0.01 − 0.75136507)² + ½ (0.99 − 0.772928465)² = 0.274811083 + 0.023560026 = 0.298371109
Now, we will backpropagate this error to update the weights using a backward pass.
Backward pass at the output layer
To update a weight, we calculate the error corresponding to that weight with the help of the total error. The error on weight w is calculated by differentiating the total error with respect to w.
We perform the backward process, so first consider the last weight w5. By the chain rule:
∂E_total/∂w5 = ∂E_total/∂y1 × ∂y1/∂net_y1 × ∂net_y1/∂w5
Now, we calculate each term one by one to differentiate E_total with respect to w5.
Putting the value of e^(−y) in equation (5):
Now we calculate the updated weight w5_new with the help of the following formula:
w5_new = w5 − η × ∂E_total/∂w5    (η is the learning rate)
In the same way, we calculate w6_new, w7_new and w8_new, and this gives us the following values:
w5_new = 0.35891648
w6_new = 0.408666186
w7_new = 0.511301270
w8_new = 0.561370121
Backward pass at the hidden layer
Now we will backpropagate to the hidden layer and update the weights w1, w2, w3, and w4, as we have done with the w5, w6, w7, and w8 weights. We will calculate the error at w1 as:
∂E_total/∂w1 = ∂E_total/∂H1 × ∂H1/∂net_H1 × ∂net_H1/∂w1
Now, we calculate each term one by one to differentiate E_total with respect to w1.
∂E_total/∂H1 will again split into two parts, because E1 and E2 contain no explicit H1 term; both depend on H1 only through the outputs. The splitting is done as:
∂E_total/∂H1 = ∂E1/∂H1 + ∂E2/∂H1
Now, we find these values by putting values into equations (18) and (19).
From equation (18):
From equation (8):
From equation (19):
Putting the value of e^(−y2) in equation (23):
From equation (21):
Now, from equations (16) and (17):
Putting the value of e^(−H1) in equation (30):
Now we calculate the updated weight w1_new with the help of the following formula:
w1_new = w1 − η × ∂E_total/∂w1
In the same way, we calculate w2_new, w3_new and w4_new. After this first round of backpropagation, the total error is down to 0.291027924. After repeating this process 10,000 times, the total error is down to 0.0000351085. At this point, the outputs of the network are very close to the target values, and the training is considered complete.
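The whole worked example can be reproduced in a few lines of NumPy. The learning rate of 0.5 is an assumption (the value commonly paired with these numbers), and, as in the updates above, the biases are kept fixed:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Inputs, weights, biases and targets from the worked example above
x  = np.array([0.05, 0.10])
W1 = np.array([[0.15, 0.20],    # w1, w2  (feed H1)
               [0.25, 0.30]])   # w3, w4  (feed H2)
W2 = np.array([[0.40, 0.45],    # w5, w6  (feed y1)
               [0.50, 0.55]])   # w7, w8  (feed y2)
b1, b2 = 0.35, 0.60
t  = np.array([0.01, 0.99])
lr = 0.5                        # assumed learning rate

for _ in range(10000):
    # Forward pass (first iteration: H = [0.5932..., 0.5968...])
    h = sigmoid(W1 @ x + b1)
    o = sigmoid(W2 @ h + b2)
    # Backward pass: chain rule for E_total = sum((t - o)^2) / 2
    delta_o = (o - t) * o * (1 - o)           # error signal at the outputs
    delta_h = (W2.T @ delta_o) * h * (1 - h)  # error signal at the hidden layer
    W2 -= lr * np.outer(delta_o, h)           # update w5..w8
    W1 -= lr * np.outer(delta_h, x)           # update w1..w4

h = sigmoid(W1 @ x + b1)
o = sigmoid(W2 @ h + b2)
print(0.5 * np.sum((t - o) ** 2))  # total error, ~3.5e-5 as quoted above
```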
Deep learning frameworks and libraries:
Deep Learning Frameworks:
Keras, TensorFlow and PyTorch are among the top three frameworks that are preferred by data scientists as well as beginners in the field of deep learning. This comparison of Keras vs. TensorFlow vs. PyTorch will give you crisp knowledge about the top deep learning frameworks and help you find out which one is suitable for you. Here you will get a complete insight into the three frameworks in the following sequence:
Introduction to Keras, TensorFlow & PyTorch
Comparison Factors
Final Verdict
Introduction
Keras
PyTorch

Comparison Factors
All three frameworks are related to each other and also have certain basic differences that distinguish them from one another.
The parameters that distinguish them:
Level of API
Speed
Architecture
Debugging
Dataset
Popularity

Level of API
TensorFlow is a framework that provides both high-level and low-level APIs. PyTorch, on the other hand, is a lower-level API focused on direct work with array expressions. It has gained immense interest recently, becoming a preferred solution for academic research and for applications of deep learning requiring the optimization of custom expressions.
Speed
Architecture
Debugging
Dataset
Popularity
These were the parameters that distinguish the three frameworks, but there is no absolute answer as to which one is better. The choice ultimately comes down to:
Technical background
Requirements and
Ease of Use
Final Verdict
Now, coming to the final verdict of Keras vs. TensorFlow vs. PyTorch, let's have a look at the situations that are most preferable for each one of these three deep learning frameworks.
Keras is most suitable for:
Rapid Prototyping
Small Dataset
Multiple back-end support
TensorFlow is most suitable for:
Large Dataset
High Performance
Functionality
Object Detection
PyTorch is most suitable for:
Flexibility
Short Training Duration
Debugging capabilities
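To make the level-of-API difference concrete, here is the same tiny two-layer classifier sketched in both frameworks. The layer sizes are arbitrary choices made only to illustrate the two styles:

```python
# --- Keras: declarative, compile-and-fit style ---
from tensorflow import keras

keras_model = keras.Sequential([
    keras.Input(shape=(10,)),                     # 10 input features
    keras.layers.Dense(32, activation="relu"),    # hidden layer
    keras.layers.Dense(2, activation="softmax"),  # 2-class output
])
keras_model.compile(optimizer="adam",
                    loss="sparse_categorical_crossentropy")

# --- PyTorch: explicit modules, losses and optimizers ---
import torch.nn as nn
import torch.optim as optim

torch_model = nn.Sequential(
    nn.Linear(10, 32),   # hidden layer
    nn.ReLU(),
    nn.Linear(32, 2),    # 2-class output (softmax folded into the loss below)
)
optimizer = optim.Adam(torch_model.parameters())
loss_fn = nn.CrossEntropyLoss()
```

The Keras version hides the training loop behind compile()/fit(), while in PyTorch the forward pass, loss computation, and optimizer steps are written out by hand, which is exactly the flexibility/debugging trade-off described above.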
UNIT-II:
CONVOLUTION NEURAL NETWORK (CNN): Introduction to CNNs and their applications in computer vision, CNN basic architecture, Activation functions: sigmoid, tanh, ReLU, Softmax layer, Types of pooling layers, Training of CNN in TensorFlow, various popular CNN architectures: VGG, GoogLeNet, ResNet etc., Dropout, Normalization, Data augmentation.
Introduction to CNNs and their applications in computer vision:
Since the 1950s, the early days of AI, researchers have struggled to make a system that can understand visual data. In the following years, this field came to be known as Computer Vision. In 2012, computer vision took a quantum leap when a group of researchers from the University of Toronto developed an AI model that surpassed the best image recognition algorithms, and that too by a large margin.
The AI system, which became known as AlexNet (named after its main creator, Alex Krizhevsky), won the 2012 ImageNet computer vision contest with an amazing 85 percent accuracy. The runner-up scored a modest 74 percent on the test.
Background of CNNs
CNNs were first developed and used around the 1980s. The most that a CNN could do at that time was recognize handwritten digits. It was mostly used in the postal sector to read zip codes, pin codes, etc. The important thing to remember about any deep learning model is that it requires a large amount of data to train and also requires a lot of computing resources. This was a major drawback for CNNs at that period, and hence CNNs were only limited to the postal sector and failed to enter the world of machine learning.
In the past few decades, deep learning has proved to be a very powerful tool because of its ability to handle large amounts of data. The interest in using hidden layers has surpassed traditional techniques, especially in pattern recognition. One of the most popular deep neural networks is the Convolutional Neural Network (also known as CNN or ConvNet), especially when it comes to computer vision applications.
Background of CNNs
What Is a CNN?
How does it work?
What Is a Pooling Layer?
Limitations of CNNs
In 2012, Alex Krizhevsky realized that it was time to bring back the branch of deep learning that uses multi-layered neural networks. The availability of large sets of data, to be more specific the ImageNet dataset with millions of labeled images, and an abundance of computing resources enabled researchers to revive CNNs.
What Is a CNN?
Now, when we think of a neural network we think about matrix multiplications, but that is not the case with a ConvNet. It uses a special technique called convolution. In mathematics, convolution is a mathematical operation on two functions that produces a third function expressing how the shape of one is modified by the other.
How does it work?
For simplicity, consider grayscale images to understand how CNNs work.
What Is a Pooling Layer?
Benefits of Using CNNs for Machine and Deep Learning
Deep learning is a form of machine learning that requires a neural network with a minimum of three layers. Networks with multiple layers are more accurate than single-layer networks. Deep learning applications often use CNNs or RNNs (recurrent neural networks).
The CNN architecture is especially useful for image recognition and image classification, as well as other computer vision tasks, because CNNs can process large amounts of data and produce highly accurate predictions. CNNs can learn the features of an object through multiple iterations, eliminating the need for manual feature engineering tasks like feature extraction.
It is possible to retrain a CNN for a new recognition task or build a new model based on an existing network with trained weights. This is known as transfer learning, and it enables ML model developers to apply CNNs to different use cases without starting from scratch.
What Are Convolutional Neural Networks (CNNs)?
The connectivity pattern in CNNs is inspired by the visual cortex in the human brain, where neurons respond to specific regions or receptive fields in the visual space. This architecture enables CNNs to effectively capture spatial relationships and patterns in images. By stacking multiple convolutional and pooling layers, CNNs can learn increasingly complex features, leading to high accuracy in tasks like image classification, object detection, and segmentation.
Convolutional Neural Network Architecture Model
Convolutional neural networks are known for their superiority over other artificial neural networks, given their ability to process visual, textual, and audio data. The CNN architecture comprises three main layers: convolutional layers, pooling layers, and a fully connected (FC) layer.
There can be multiple convolutional and pooling layers. The more layers in the network, the greater the complexity and (theoretically) the accuracy of the machine learning model. Each additional layer that processes the input data increases the model's ability to recognize objects and patterns in the data.
The Convolutional Layer
Convolutional layers are the key building block of the network, where most of the computations are carried out. A convolutional layer works by applying a filter to the input data to identify features. This filter, known as a feature detector, checks the image input's receptive fields for a given feature. This operation is referred to as convolution.
The filter is a two-dimensional array of weights that represents part of a two-dimensional image. A filter is typically a 3x3 matrix, although there are other possible sizes. The filter is applied to a region within the input image and calculates a dot product between its weights and the pixels, which is fed to an output array. The filter then shifts and repeats the process until it has covered the whole image. The final output of all the filter passes is called the feature map.
The CNN typically applies the ReLU (Rectified Linear Unit) transformation to each feature map after every convolution to introduce nonlinearity into the ML model. A convolutional layer is typically followed by a pooling layer. Together, the convolutional and pooling layers make up a convolutional block.
Additional convolution blocks follow the first block, creating a hierarchical structure with later layers learning from the earlier layers. For example, a CNN model might be trained to detect cars in images. Cars can be viewed as the sum of their parts, including the wheels, boot, and windscreen. Each feature of a car equates to a low-level pattern identified by the neural network, which then combines these parts to create a high-level pattern.
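The sliding dot product described above can be sketched directly in NumPy. The 6x6 toy image and the vertical-edge filter below are illustrative choices:

```python
import numpy as np

def convolve2d(image, kernel):
    """Slide a filter over a grayscale image (no padding, stride 1) and
    collect the dot product of each receptive field into a feature map."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            receptive_field = image[i:i + kh, j:j + kw]
            out[i, j] = np.sum(receptive_field * kernel)  # dot product
    return out

image = np.zeros((6, 6))
image[:, :3] = 1.0                     # bright left half: one vertical edge
kernel = np.array([[1., 0., -1.],
                   [1., 0., -1.],
                   [1., 0., -1.]])     # classic 3x3 vertical-edge detector
fmap = convolve2d(image, kernel)
print(np.maximum(fmap, 0))             # ReLU applied to the feature map
print(fmap.shape)                      # (4, 4) = (6 - 3 + 1, 6 - 3 + 1)
```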
The Pooling Layers
A pooling or downsampling layer reduces the dimensionality of the input. Like a convolutional operation, pooling operations use a filter to sweep the whole input image, but this filter doesn't use weights. Instead, the filter uses an aggregation function to populate the output array based on the receptive field's values.
There are two key types of pooling:
Average pooling: The filter calculates the receptive field's average value as it scans the input.
Max pooling: The filter sends the pixel with the maximum value to populate the output array. This approach is more common than average pooling.
Pooling layers are important despite causing some information to be lost, because they help reduce the complexity and increase the efficiency of the CNN. Pooling also reduces the risk of overfitting.
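Both pooling types can be sketched in a few lines of NumPy; note that, unlike convolution, no weights are involved (the 4x4 input and the 2x2 window are illustrative):

```python
import numpy as np

def pool2d(x, size=2, mode="max"):
    """Non-overlapping pooling: sweep a size x size window over the input
    and aggregate each receptive field (no weights involved)."""
    h, w = x.shape[0] // size, x.shape[1] // size
    # Reshape so each pooling window becomes its own block
    blocks = x[:h * size, :w * size].reshape(h, size, w, size)
    if mode == "max":
        return blocks.max(axis=(1, 3))    # max pooling
    return blocks.mean(axis=(1, 3))       # average pooling

x = np.array([[1, 3, 2, 4],
              [5, 6, 1, 2],
              [7, 2, 9, 0],
              [1, 4, 3, 8]], dtype=float)
print(pool2d(x, mode="max"))    # [[6. 4.] [7. 9.]]
print(pool2d(x, mode="avg"))    # [[3.75 2.25] [3.5  5.  ]]
```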
The Fully Connected (FC) Layer
The final layer of a CNN is a fully connected layer.
The FC layer performs classification tasks using the features that the previous layers and filters extracted. Instead of ReLU functions, the FC layer typically uses a softmax function that classifies inputs more appropriately, producing a probability score between 0 and 1.
Basic Architecture of CNN:
There are two main parts to a CNN architecture.
Convolution Layers
There are three types of layers that make up a CNN: the convolutional layers, pooling layers, and fully-connected (FC) layers.
1. ConvolutionalLayer
This layer is the first layer used to extract the various features from the input images. In this layer, the mathematical operation of convolution is performed between the input image and a filter of a particular size M×M. By sliding the filter over the input image, the dot product is taken between the filter and the parts of the input image with respect to the size of the filter (M×M).
The convolution layer in a CNN passes the result to the next layer once the convolution operation is applied to the input. Convolutional layers benefit the network because they keep the spatial relationship between the pixels intact.
2. Pooling Layer
In Max Pooling, the largest element is taken from the feature map. Average Pooling calculates the average of the elements in a predefined-size image section. In Sum Pooling, the total sum of the elements in the predefined section is computed.
3. Fully Connected Layer
In this layer, the input from the previous layers is flattened and fed to the FC layer. The flattened vector then undergoes a few more FC layers, where the mathematical operations usually take place. In this stage, the classification process begins. The reason two layers are connected is that two fully connected layers will perform better than a single connected layer. These layers in a CNN reduce the need for human supervision.
4. Dropout
Dropout randomly deactivates a fraction of the neurons during training, which reduces overfitting; it is discussed in detail later in this unit.
5. Activation Functions
Finally, one of the most important parameters of the CNN model is the activation function. Activation functions are used to learn and approximate any kind of continuous and complex relationship between variables of the network. In simple words, they decide which information of the model should fire in the forward direction and which should not at the end of the network.
An activation function adds non-linearity to the network. There are several commonly used activation functions, such as ReLU, softmax, tanh, and sigmoid. Each of these functions has a specific usage. For a binary classification CNN model, sigmoid and softmax functions are preferred, and for multi-class classification, softmax is generally used. In simple terms, activation functions in a CNN model determine whether a neuron should be activated or not; they decide whether the input to the network is important for prediction.
Activation Functions
The popular activation functions are:
a) Binary Step Function
The binary step function depends on a threshold value that decides whether a neuron should be activated or not. The input fed to the activation function is compared to a certain threshold; if the input is greater than it, then the neuron is activated, else it is deactivated, meaning that its output is not passed on to the next layer.
Mathematically, it can be represented as:
f(x) = 0 for x < 0, and f(x) = 1 for x ≥ 0
b) Linear Activation Function:
The linear activation function, also known as "no activation" or "identity function" (multiplied by 1.0), is where the activation is proportional to the input.
Mathematically, it can be represented as:
f(x) = x
However, a linear activation function has two major problems:
It's not possible to use backpropagation, as the derivative of the function is a constant and has no relation to the input x.
All layers of the neural network will collapse into one if a linear activation function is used. No matter the number of layers in the neural network, the last layer will still be a linear function of the first layer. So, essentially, a linear activation function turns the neural network into just one layer.
Non-Linear Activation Functions
The linear activation function shown above is simply a linear regression model. Because of its limited power, it does not allow the model to create complex mappings between the network's inputs and outputs.
Below are several non-linear neural network activation functions and their characteristics.
a) Sigmoid / Logistic Activation Function
This function takes any real value as input and outputs values in the range 0 to 1.
Mathematically, it can be represented as:
f(x) = 1 / (1 + e^(-x))
The limitations of the sigmoid function are discussed below:
The derivative of the function is f'(x) = sigmoid(x) * (1 - sigmoid(x)). The gradient values are only significant for the range -3 to 3, and the graph gets much flatter in other regions. This implies that for values greater than 3 or less than -3, the function will have very small gradients. As the gradient value approaches zero, the network ceases to learn and suffers from the vanishing gradient problem.
The output of the logistic function is not symmetric around zero, so the outputs of all the neurons will be of the same sign. This makes training the neural network more difficult and unstable.
b) Tanh Function (Hyperbolic Tangent)
The tanh function is very similar to the sigmoid/logistic activation function, and even has the same S-shape, with the difference of an output range of -1 to 1. In tanh, the larger the input (more positive), the closer the output value will be to 1.0, whereas the smaller the input (more negative), the closer the output will be to -1.0.
Mathematically, it can be represented as:
f(x) = (e^x - e^(-x)) / (e^x + e^(-x))
Advantages of using this activation function are:
The output of the tanh activation function is zero-centered; hence we can easily map the output values as strongly negative, neutral, or strongly positive.
It is usually used in hidden layers of a neural network, as its values lie between -1 and 1; therefore, the mean for the hidden layer comes out to be 0 or very close to it. This helps in centering the data and makes learning easier for the next layer. The gradient of the tanh function is also much steeper compared to the sigmoid function.
Note: Although both sigmoid and tanh face the vanishing gradient issue, tanh is zero-centered, and the gradients are not restricted to move in a certain direction. Therefore, in practice, tanh nonlinearity is generally preferred to sigmoid nonlinearity.
c) ReLU Function
ReLU stands for Rectified Linear Unit. Although it gives an impression of a linear function, ReLU has a derivative function and allows for backpropagation while simultaneously being computationally efficient.
The main catch here is that the ReLU function does not activate all the neurons at the same time. A neuron will only be deactivated if the output of the linear transformation is less than 0.
Mathematically, it can be represented as:
f(x) = max(0, x)
The limitations faced by this function are:
The Dying ReLU problem: since the gradient is zero for all negative inputs, neurons that settle in that region stop updating and effectively "die".
d) Leaky ReLU Function
Leaky ReLU is an improved version of the ReLU function designed to solve the Dying ReLU problem, as it has a small positive slope in the negative region.
Mathematically, it can be represented as:
f(x) = x for x ≥ 0, and f(x) = a·x for x < 0, where a is a small constant (commonly 0.01)
The advantages of Leaky ReLU are the same as those of ReLU, in addition to the fact that it enables backpropagation even for negative input values. By making this minor modification for negative input values, the gradient on the left side of the graph comes out to be a non-zero value. Therefore, we no longer encounter dead neurons in that region.
The derivative of the Leaky ReLU function is 1 for x ≥ 0 and a for x < 0.
The limitations that this function faces include:
The predictions may not be consistent for negative input values.
The gradient for negative values is a small value, which makes learning the model parameters time-consuming.
e) Parametric ReLU Function
Parametric ReLU is another variant of ReLU that aims to solve the problem of the gradient becoming zero for the left half of the axis. This function provides the slope of the negative part of the function as an argument a. By performing backpropagation, the most appropriate value of a is learnt.
Mathematically, it can be represented as:
f(x) = x for x ≥ 0, and f(x) = a·x for x < 0
where "a" is the slope parameter for negative values.
The parameterized ReLU function is used when the Leaky ReLU function still fails at solving the problem of dead neurons, and the relevant information is not successfully passed to the next layer.
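As a compact reference, the activation functions above can be sketched in NumPy as follows (an illustrative sketch of ours; the helper names and sample inputs are assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def tanh(x):
    return np.tanh(x)

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, alpha=0.01):
    # small fixed slope alpha on the negative side
    return np.where(x >= 0, x, alpha * x)

def parametric_relu(x, a):
    # slope 'a' on the negative side is a learned parameter
    return np.where(x >= 0, x, a * x)

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(sigmoid(x), relu(x), leaky_relu(x))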
Types of Pooling Layers:
A Convolutional Neural Network (CNN) is a special type of artificial neural network that is usually used for image recognition and processing due to its ability to recognize patterns in images. It eliminates the need to extract features from visual data manually. It learns images by sliding a filter of some size over them, learning not just the features from the data but also maintaining translation invariance.
Pooling layers are one of the building blocks of convolutional neural networks. Where convolutional layers extract features from images, pooling layers consolidate the features learned by the CNN. Their purpose is to gradually shrink the representation's spatial dimension to minimize the number of parameters and computations in the network.
Why are Pooling layers needed?
The feature map produced by the filters of convolutional layers is location-dependent. For example, if an object in an image has shifted a bit, it might not be recognizable by the convolutional layer. So the feature map records the precise positions of features in the input. What pooling layers provide is "translational invariance", which makes the CNN invariant to translations: even if the input of the CNN is translated, the CNN will still be able to recognize the features in the input.
How do Pooling layers achieve that?
A pooling layer is added after the convolutional layer(s). It downsamples the output of the convolutional layers by sliding a filter of some size with some stride over the input and calculating the maximum or average of each region.
1. Max pooling: This works by selecting the maximum value from every pool. Max pooling retains the most prominent features of the feature map, and the returned image is sharper than the original image.
2. Average pooling: This pooling layer works by getting the average of the pool. Average pooling retains the average values of features of the feature map. It smoothes the image while keeping the essence of the features in the image.
Max Pooling
Create a MaxPool2D layer with pool_size=2 and strides=2. Apply the MaxPool2D layer to the matrix, and you will get the max-pooled output in tensor form. By applying it to the matrix, the max pooling layer will go through the matrix by computing the max of each 2×2 pool with a jump of 2. Print the shape of the tensor, and use tf.squeeze to remove dimensions of size 1 from the shape of the tensor, as sketched below.
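A minimal TensorFlow sketch of these steps (the 4×4 example matrix is our own assumption):

import tensorflow as tf

matrix = tf.constant([[1., 2., 3., 4.],
                      [5., 6., 7., 8.],
                      [9., 10., 11., 12.],
                      [13., 14., 15., 16.]])
# Keras pooling layers expect a 4-D tensor: (batch, height, width, channels)
x = tf.reshape(matrix, [1, 4, 4, 1])

max_pool = tf.keras.layers.MaxPool2D(pool_size=2, strides=2)
pooled = max_pool(x)
print(pooled.shape)          # (1, 2, 2, 1)
print(tf.squeeze(pooled))    # [[ 6.  8.] [14. 16.]]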
Average Pooling
Create an AveragePooling2D layer with the same pool_size and strides of 2. Apply the AveragePooling2D layer to the matrix. By applying it to the matrix, the average pooling layer will go through the matrix by computing the average of each 2×2 pool with a jump of 2. Print the shape of the output and use tf.squeeze to convert it into a readable form by removing all size-1 dimensions, as sketched below.
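A matching sketch for average pooling (again with our own example matrix):

import tensorflow as tf

matrix = tf.constant([[1., 2., 3., 4.],
                      [5., 6., 7., 8.],
                      [9., 10., 11., 12.],
                      [13., 14., 15., 16.]])
x = tf.reshape(matrix, [1, 4, 4, 1])

avg_pool = tf.keras.layers.AveragePooling2D(pool_size=2, strides=2)
print(tf.squeeze(avg_pool(x)))   # [[ 3.5  5.5] [11.5 13.5]]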
These pooling layers sweep over the input matrix and compute the maximum or the average of each pool for max pooling and average pooling, respectively.
Global Pooling Layers
Global pooling layers often replace the classifier's fully connected or Flatten layer. The model instead ends with a convolutional layer that produces as many feature maps as there are target classes, and a global average pooling operation combines each feature map into a single value.
Given an input of the same shape as above, a global pooling layer reduces each feature map to one value.
Global Average Pooling
Considering a tensor of shape h*w*n, the output of the Global Average Pooling layer is a single value across h*w that summarizes the presence of the feature. Instead of downsizing patches of the input feature map, the Global Average Pooling layer downsizes the whole h*w into one value by taking the average.
Global Max Pooling
With a tensor of shape h*w*n, the output of the Global Max Pooling layer is a single value across h*w that summarizes the presence of a feature. Instead of downsizing patches of the input feature map, the Global Max Pooling layer downsizes the whole h*w into one value by taking the maximum.
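A short sketch of both global pooling layers on a toy batch of feature maps (the shapes are our own assumptions):

import numpy as np
import tensorflow as tf

# A toy batch of feature maps with shape (batch, h, w, n) = (1, 4, 4, 3)
feature_maps = tf.constant(np.random.rand(1, 4, 4, 3), dtype=tf.float32)

gap = tf.keras.layers.GlobalAveragePooling2D()(feature_maps)
gmp = tf.keras.layers.GlobalMaxPooling2D()(feature_maps)
print(gap.shape, gmp.shape)   # (1, 3) and (1, 3): one value per feature map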
Training of CNN in TensorFlow
These are the steps used to train the CNN.
Steps:
Step 1: Upload dataset
Step 2: Input layer
Step 3: Convolutional layer
Step 4: Pooling layer
Step 5: Convolutional layer and pooling layer
Step 6: Dense layer
Step 7: Logit layer
Step 1: Upload Dataset
The MNIST dataset is available through scikit-learn. We can download it and load it directly (the fetch_mldata helper used in older tutorials has been removed from scikit-learn; fetch_openml('mnist_784') is its replacement).
Create a test/train set
We need to split the dataset with train_test_split.
Scale the features
Finally, we scale the features with the help of MinMaxScaler.
import numpy as np
import tensorflow as tf
# fetch_mldata was removed from scikit-learn; fetch_openml is the current loader
from sklearn.datasets import fetch_openml

mnist = fetch_openml('mnist_784', version=1, as_frame=False)
print(mnist.data.shape)
print(mnist.target.shape)

from sklearn.model_selection import train_test_split
A_train, A_test, B_train, B_test = train_test_split(
    mnist.data, mnist.target, test_size=0.2, random_state=45)
B_train = B_train.astype(int)
B_test = B_test.astype(int)
batch_size = len(A_train)
print(A_train.shape, B_train.shape, B_test.shape)

## rescale
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
# Scale the training set (np.float65 does not exist; float32 is used here)
X_train_scaled = scaler.fit_transform(A_train.astype(np.float32))
# Scale the test set with the statistics learned on the training set
X_test_scaled = scaler.transform(A_test.astype(np.float32))
feature_columns = [tf.feature_column.numeric_column('x', shape=X_train_scaled.shape[1:])]
X_train_scaled.shape[1:]
Defining the CNN (Convolutional Neural Network)
A CNN uses filters on the pixels of an image to learn detailed patterns, compared to the global patterns learned by a traditional neural network. To create a CNN, we have to define:
CNN Architecture
o Convolutional Layer: applies 14 5×5 filters (extracting 5×5-pixel sub-regions), with ReLU activation function
o Pooling Layer: performs max pooling with a 2×2 filter and stride of 2 (which specifies that pooled regions do not overlap)
o Convolutional Layer: applies 36 5×5 filters, with ReLU activation function
o Pooling Layer: again performs max pooling with a 2×2 filter and stride of 2
o Dense Layer: 1,764 neurons, with a dropout regularization rate of 0.4 (a probability of 0.4 that any given element will be dropped during training)
o Dense Layer (Logits Layer): ten neurons, one for each digit target class (0-9)
Important modules to use in creating a CNN:
1. conv2d(): constructs a two-dimensional convolutional layer, with the number of filters, filter kernel size, padding, and activation function as arguments.
2. max_pooling2d(): constructs a two-dimensional pooling layer using the max-pooling algorithm.
3. dense(): constructs a dense layer with the hidden layers and units.
We can define a function to build the CNN.
Step 2: Input layer

# Input layer (the estimator model_fn signature is features, labels, mode)
def cnn_model_fn(features, labels, mode):
    input_layer = tf.reshape(tensor=features["x"], shape=[-1, 28, 28, 1])

We need to define a tensor with the shape of the data. For that, we can use the module tf.reshape. Since each MNIST image is 28×28 pixels with one channel, the shape is [-1, 28, 28, 1]. The first argument is the features of the data, which is defined in the arguments of the function.
Step 3: Convolutional Layer

# First convolutional layer
conv1 = tf.layers.conv2d(
    inputs=input_layer,
    filters=14,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)

The first convolutional layer has 14 filters with a kernel size of 5×5 and "same" padding. With same padding, the output tensor and the input tensor have the same width and height; TensorFlow adds zeros in the rows and columns to ensure the same size. We use the ReLU activation function. The output size will be [batch_size, 28, 28, 14].
Step 4: Pooling layer
The next step after the convolution is the pooling computation. The pooling computation will reduce the size of the data. We can use the module max_pooling2d with a pool size of 2×2 and a stride of 2, taking the previous layer as input. The output size will be [batch_size, 14, 14, 14].

## First pooling layer
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)
Step 5: Second Convolutional Layer and Pooling Layer
The second convolutional layer has 36 filters, with an output size of [batch_size, 14, 14, 36]. The pooling layer has the same configuration as before, and its output shape is [batch_size, 7, 7, 36].

conv2 = tf.layers.conv2d(
    inputs=pool1,
    filters=36,
    kernel_size=[5, 5],
    padding="same",
    activation=tf.nn.relu)
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)
Step 6: Dense layer
We flatten the pooled feature maps and feed them to a dense layer, followed by dropout (the rate of 0.4 matches the architecture defined above).

pool2_flat = tf.reshape(pool2, [-1, 7 * 7 * 36])
dense = tf.layers.dense(inputs=pool2_flat, units=7 * 7 * 36, activation=tf.nn.relu)
dropout = tf.layers.dropout(inputs=dense, rate=0.4,
                            training=(mode == tf.estimator.ModeKeys.TRAIN))
Step 7: Logits Layer
Finally, we define the last layer with the prediction of the model. There are ten units, one per digit class, so the output shape is [batch_size, 10].

# Logits layer
logits = tf.layers.dense(inputs=dropout, units=10)
Popular CNN architectures - VGG, GoogLeNet, ResNet:
Types of Convolutional Neural Network Algorithms
LeNet
LeNet is a pioneering CNN designed for recognizing handwritten characters. It was proposed by
Yann LeCun, Leon Bottou, Yoshua Bengio, and Patrick Haffner in the late 1990s. LeNet consists of a
series of convolutional and pooling layers, as well as a fully connected layer and softmax classifier. It was
among the first successful applications of deep learning for computer vision. It has been used by banks to
identify numbers written on cheques in grayscale input images.
VGG
VGG (Visual Geometry Group) is a research group within the Department of Engineering Science at the University of Oxford. The VGG group is well known for its work in computer vision, particularly in the area of convolutional neural networks (CNNs).
One of the most famous contributions from the VGG group is the VGG model, also known as VGGNet. The VGG model is a deep neural network that achieved state-of-the-art performance on the ImageNet Large Scale Visual Recognition Challenge in 2014, and it has been widely used as a benchmark for image classification and object detection tasks.
The VGG model is characterized by its use of small convolutional filters (3×3) and deep
architecture (up to 19 layers), which enables it to learn increasingly complex features from input images.
The VGG model also uses max pooling layers to reduce the spatial resolution of the feature maps and
increase the receptive field, which can improve its ability to recognize objects of varying scales and
orientations.
The VGG model has inspired many subsequent research efforts in deep learning, including the
development of even deeper neural networks and the use of residual connections to improve gradient flow
and training stability.
ResNet
ResNet (short for "Residual Neural Network") is a family of deep convolutional neural networks designed to overcome the problem of vanishing gradients that is common in very deep networks. The idea behind ResNet is to use "residual blocks" that allow for the direct propagation of gradients through the network, enabling the training of very deep networks.
A residual block consists of two or more convolutional layers, combined with a shortcut connection that bypasses the convolutional layers and adds the original input directly to their output before the final activation function.
This allows the network to learn residual functions that represent the difference between the block's input and output, rather than trying to learn the entire mapping directly. The use of residual blocks enables the training of very deep networks, with hundreds or even thousands of layers, significantly alleviating the issue of vanishing gradients.
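A hedged Keras sketch of a single residual block (a simplification of ours; real ResNet blocks also include batch normalization, which is omitted here):

import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters):
    # Two convolutions plus a shortcut that adds the block input
    # directly to the block output before the final activation.
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same")(y)
    y = layers.Add()([shortcut, y])      # the residual connection
    return layers.Activation("relu")(y)

inputs = tf.keras.Input(shape=(32, 32, 64))   # channel count must match `filters`
outputs = residual_block(inputs, 64)
model = tf.keras.Model(inputs, outputs)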
GoogLeNet
GoogLeNet is notable for its use of the Inception module, which consists of multiple parallel
convolutional layers with different filter sizes, followed by a pooling layer, and concatenation of the
outputs. This design allows the network to learn features at multiple scales and resolutions, while keeping
the computational cost manageable. The network also includes auxiliary classifiers at intermediate layers,
which encourage the network to learn more discriminative features and prevent overfitting.
GoogLeNet builds upon the ideas of previous convolutional neural networks, including LeNet,
which was one of the first successful applications of deep learning in computer vision. However,
GoogLeNet is much deeper and more complex than LeNet.
Dropout:
The term "dropout" refers to dropping out nodes (in the input and hidden layers) of a neural network. All the forward and backward connections of a dropped node are temporarily removed, creating a new network architecture out of the parent network. The nodes are dropped with a dropout probability of p.
Consider a given input x = {1, 2, 3, 4, 5} to a fully connected layer. If we apply dropout with p = 0.2, each of these inputs has a 20% chance of being zeroed out, so the effective input might become, e.g., {1, 0, 3, 4, 5}.
Generally, for the input layers, the keep probability, i.e., 1 - drop probability, is closer to 1, with 0.8 suggested as the best value by the authors. For the hidden layers, the greater the drop probability, the more sparse the model; 0.5 is the most optimized value, which corresponds to dropping 50% of the nodes.
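A minimal Keras sketch of dropout between layers (the layer sizes are our own assumptions; note that the rate argument in Keras is the drop probability):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(128, activation="relu", input_shape=(20,)),
    layers.Dropout(0.5),   # drop 50% of the hidden units during training only
    layers.Dense(10, activation="softmax"),
])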
How does Dropout solve the Overfitting problem?
In the overfitting problem, the model learns the statistical noise. To be precise, the main motive of training is to decrease the loss function, given all the units (neurons). So, in overfitting, a unit may change in a way that fixes up the mistakes of the other units. This leads to complex co-adaptations, which in turn lead to overfitting.
Figure 2: (a) Hidden layer features without dropout; (b) Hidden layer features with dropout
From Figure 2, we can easily make out that the hidden layer with dropout is learning more of the generalized features than the co-adaptations in the layer without dropout. It is quite apparent that dropout breaks such inter-unit relations and focuses more on generalization.
Dropout Implementation
Figure 3: (a) A unit (neuron) during training is present with a probability p and is connected to the next layer with weights 'w'
In the standard neural network, during the forward propagation we have the following equations (Figure 4):
z(l+1) = w(l+1) y(l) + b(l+1)
y(l+1) = f(z(l+1))
where:
z: the vector of outputs from layer (l+1) before activation
y: the vector of outputs from layer l
w: the weights of layer l
b: the bias of layer l
With dropout (Figure 5), a Bernoulli mask r(l), whose entries are 1 with probability p, is first applied to the outputs of layer l:
r(l) ~ Bernoulli(p)
ŷ(l) = r(l) * y(l)
z(l+1) = w(l+1) ŷ(l) + b(l+1)
y(l+1) = f(z(l+1))
Normalization:
Normalization is a pre-processing technique used to standardize data, i.e., to bring different sources of data into the same range. It improves the learning speed of neural networks and provides regularization, avoiding overfitting. Not normalizing the data before training can cause problems in the network, making it drastically harder to train and decreasing its learning speed.
There are two main methods to normalize our data. The most straightforward method is to scale it to a range from 0 to 1:
x' = (x - m) / (x_max - x_min)
where x is the data point to normalize, m the mean of the data set, x_max the highest value, and x_min the lowest value. This technique is generally used on the inputs of the data. Non-normalized data points with wide ranges can cause instability in neural networks: the relatively large inputs can cascade down through the layers, causing problems such as exploding gradients.
The other technique is standardization:
z = (x - m) / s
where x is the data point to normalize, m the mean of the data set, and s the standard deviation of the data set. Now, each data point mimics a standard normal distribution. Having all the features on this scale, none of them will have a bias, and therefore our models will learn better.
In Batch Norm, we use this last technique to normalize batches of data inside the network itself.
Batch Normalization
Batch Norm is a normalization technique applied between the layers of a neural network instead of to the raw data. It is done along mini-batches instead of the full data set. It serves to speed up training and allows the use of higher learning rates, making learning easier:
z_norm = (z - m_z) / s_z
where m_z is the mean of the neurons' output and s_z the standard deviation of the neurons' output.
How Is It Applied?
Consider a regular feed-forward neural network with inputs x_i, neuron outputs z, activation outputs a, and network output y.
Batch Norm is applied to the neurons' output just before applying the activation function. Usually, a neuron without Batch Norm is computed as follows:
z = w·x + b;  a = f(z)
where z is the linear transformation of the neuron, w the weights of the neuron, b the bias of the neuron, and f the activation function. The model learns the parameters w and b. Adding Batch Norm, it looks as:
z_norm = (z - m_z) / s_z;  ẑ = γ·z_norm + β;  a = f(ẑ)
where γ and β are learnable parameters that rescale and shift the normalized output.
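A minimal Keras sketch of this placement, with Batch Norm between the linear transformation and the activation (layer sizes are our own assumptions):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Dense(64, use_bias=False, input_shape=(20,)),  # the linear part z = w·x
    layers.BatchNormalization(),   # normalize z, then rescale/shift with gamma, beta
    layers.Activation("relu"),     # activation applied after Batch Norm
    layers.Dense(10, activation="softmax"),
])
model.summary()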
Data Augmentation:
Data augmentation artificially expands the training set by creating modified copies of the existing data. It helps to address issues like overfitting and data scarcity, and it makes the model robust with better performance. Data augmentation provides many possibilities to alter the original image and can be useful for adding enough data for larger models.
Data Augmentation in a CNN:
Convolutional Neural Networks (CNNs) can do amazing things if there is sufficient data. However, selecting the correct amount of training data for all of the features that need to be trained is a difficult question. If the user does not have enough, the network can overfit on the training data. Realistic images contain a variety of sizes, poses, zoom, lighting, noise, etc.
Data Augmentation Technique    Data Augmentation Factor
Flipping                       2-4x (in each direction)
Rotation                       Arbitrary
Translation                    Arbitrary
Scaling                        Arbitrary
A table outlining the factor by which different methods multiply the existing training data.
Data Augmentation Techniques:
Some libraries use data augmentation by actually copying the training images and saving these copies as part of the total. This produces new training examples to feed to the machine learning model. Other libraries simply define a set of transforms to perform on the input training data. These transforms are applied randomly. As a result, the space the optimizer is searching is increased. This has the advantage that it does not require extra disk space to augment the training data.
Image data augmentation involves techniques such as:
a) Flips:
By flipping images, the optimizer will not become biased toward particular features of an image appearing only on one side. To do this augmentation, the original training image is flipped vertically or horizontally over one axis of the image. As a result, the features continually change direction.
Stella the Puppy sitting on a car seat; Stella the Puppy flipped over the vertical axis.
b) Rotation:
Rotating images by some angle produces new training examples when rotated versions of the objects occur naturally in the data. For rotation, the background color is commonly fixed so that it can blend in when the image is rotated. Otherwise, the model can assume the background change is a distinct feature. This works best when the background is the same in all rotated images.
c) Translation:
Translation shifts the image along the x or y axis (possibly cropping it), so the model learns that objects can appear anywhere in the frame.
Stella the Puppy sitting on a car seat; Stella the Puppy translated and cropped so she's only partly visible.
d) Scaling:
Scaling zooms the image in or out, exposing the model to objects at different apparent sizes.
e) Salt and Pepper Noise Addition
Salt and pepper noise addition is the addition of black and white dots (looking like salt and pepper) to the image. This simulates dust and imperfections in real photos. Even if the camera of the photographer is blurry or has spots on it, the image would still be recognized by the model. The training data set is augmented to train the model with more realistic images.
Stella the Puppy sitting on a car seat; Stella the Puppy with salt and pepper noise added to the image.
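A hedged Keras sketch of several of these transforms using the built-in preprocessing layers (salt and pepper noise has no built-in Keras layer, so it is omitted; all shapes are our own assumptions):

import tensorflow as tf
from tensorflow.keras import layers

augment = tf.keras.Sequential([
    layers.RandomFlip("horizontal"),        # flips
    layers.RandomRotation(0.1),             # rotation (fraction of a full turn)
    layers.RandomTranslation(0.1, 0.1),     # translation
    layers.RandomZoom(0.1),                 # scaling
])

images = tf.random.uniform((8, 64, 64, 3))  # a toy batch
augmented = augment(images, training=True)  # transforms are applied randomly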
Benefits of Data Augmentation in a CNN
Data augmentation increases the effective size of the training set, reduces overfitting, and makes the model more robust to variations in size, pose, lighting, and noise.
Drawbacks of Data Augmentation:
Augmentation increases training time and compute cost, the augmented data inherits any biases of the original data, and poorly chosen transforms can distort images in ways that change their labels.
UNIT-III
RECURRENT NEURAL NETWORK (RNN): Introduction to
RNNs and their applications in sequential data analysis, Back
propagation through time (BPTT), Vanishing Gradient Problem,
gradient clipping Long Short-Term Memory (LSTM) Networks,
Gated Recurrent Units, Bidirectional LSTMs, Bidirectional RNNs.
Introduction to RNNs and their applications in sequential data analysis:
A deep learning approach for modelling sequential data is the RNN. RNNs were the standard suggestion for working with sequential data before the advent of attention models. A deep feedforward model may require specific parameters for each element of a sequence and may be unable to generalize to variable-length sequences.
What is a Recurrent Neural Network (RNN)?
A Recurrent Neural Network (RNN) is a neural network that processes sequences by maintaining an internal memory of previous inputs. Using this memory, such networks can anticipate sequential data in a way that other algorithms can't.
The Architecture of a Traditional RNN
Below are some examples of RNN architectures.
How do Recurrent Neural Networks work?
The input layer x receives and processes the neural network's input before passing it on to the middle layer. The middle (hidden) layer retains a state that is carried from one time step to the next.
Common Activation Functions:
The following are some of the most commonly utilized functions in RNNs: sigmoid, tanh, and ReLU.
Applications of RNN Networks:
1. Machine Translation:
RNNs can be used to build a deep learning model that can translate text from one language to another without the need for human intervention. You can, for example, translate a text from your native language to English.
2. Text Creation:
RNNs can also be used to build a deep learning model for text generation. Based on the previous sequence of words/characters used in the text, a trained model learns the likelihood of occurrence of a word/character. A model can be trained at the character, n-gram, sentence, or paragraph level.
3. Captioning of images:
Combined with a CNN that extracts image features, an RNN can generate a natural-language caption describing an image.
4. Recognition of Speech:
RNNs can map sequences of audio features to text, forming the basis of many speech-to-text systems.
5. Forecasting of Time Series:
RNNs can predict future values of a time series, such as demand or weather measurements, from past observations.
Recurrent Neural Network vs Feedforward Neural Network:
A feed-forward neural network has only one route of information flow: from the input layer to the output layer, passing through the hidden layers. The data flows across the network in a straight route, never going through the same node twice. A recurrent neural network, by contrast, feeds its hidden state back into itself, so the output at each step depends on previous inputs as well.
Backpropagation Through Time - RNN:
Backpropagation is a training algorithm that we use for training neural networks. When training a neural network, we are tuning the network's weights to minimize the error with respect to the available actual values, with the help of the backpropagation algorithm. Backpropagation is a supervised learning algorithm, as we compute errors with respect to given target values.
The backpropagation training algorithm aims to modify the weights of a neural network to minimize the error of the network's outputs compared to some expected output in response to corresponding inputs.
The general algorithm of backpropagation is as follows:
1. We first feed the input data forward through the network to get an output.
2. Compare the predicted outcome to the expected result and calculate the error.
3. Then, we calculate the derivatives of the error with respect to the network weights.
4. We use these calculated derivatives to adjust the weights to minimize the error.
5. Repeat the process until the error is minimized.
Unrolling the Recurrent Neural Network
A recurrent neural network deals with sequential data. An RNN predicts outputs using not only the current inputs but also by considering those that occurred before. In other words, the current output depends on the current input and a memory element (which accounts for the past inputs).
The figure below depicts the architecture of an RNN.
x1, x2, x3 are the inputs at time t1, t2, t3, respectively, and Wx is the associated weight matrix.
y1, y2, y3 are the outputs at time t1, t2, t3, respectively, and Wy is the associated weight matrix.
At time t0, we feed input x0 to the network and obtain output y0. At time t1, we provide input x1 to the network and receive an output y1. From the figure, we can see that to calculate an output, the network uses the input x and the cell state from the previous timestamp. The hidden state and output at each step are given by:
h_t = tanh(Wx·x_t + Ws·h_(t-1))
y_t = Wy·h_t
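A minimal NumPy sketch of this unrolled forward pass (our own illustration; the dimensions and weight initialization are assumptions):

import numpy as np

def rnn_forward(xs, Wx, Ws, Wy, h0):
    # Unrolled forward pass: the same weights are reused at every timestep.
    h, ys = h0, []
    for x in xs:
        h = np.tanh(Wx @ x + Ws @ h)   # hidden state update
        ys.append(Wy @ h)              # output at this timestep
    return ys, h

hidden, n_in, n_out = 4, 3, 2
Wx = np.random.randn(hidden, n_in) * 0.1
Ws = np.random.randn(hidden, hidden) * 0.1
Wy = np.random.randn(n_out, hidden) * 0.1
xs = [np.random.randn(n_in) for _ in range(3)]   # x1, x2, x3
ys, h = rnn_forward(xs, Wx, Ws, Wy, np.zeros(hidden))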
Backpropagation Through Time
Ws, Wx, and Wy do not change across the timestamps, which means that for all inputs in a sequence, the values of these weights are the same.
The error at each timestep can be defined as the difference between the prediction and the target, e.g. E_t = (d_t - y_t)². The points to consider are:
What is the total loss for this network?
How do we update the weights Ws, Wx, and Wy?
The total loss we have to calculate is the sum over all timestamps, i.e., E = E0 + E1 + E2 + E3 + ... Now we calculate the error gradient with respect to Ws, Wx, and Wy. It is relatively easy to calculate the loss derivative with respect to Wy, as the derivative only depends on the current timestamp values:
∂E_t/∂Wy = (∂E_t/∂y_t) · (∂y_t/∂Wy)
Calculating the derivative of the loss with respect to Ws and Wx is more complex, because h_t depends on h_(t-1), which itself depends on Ws and Wx. The general expression involves a sum over all earlier timesteps:
∂E_t/∂Ws = Σ_(k=0..t) (∂E_t/∂y_t) · (∂y_t/∂h_t) · (∂h_t/∂h_k) · (∂h_k/∂Ws)
Similarly, for Wx, it can be written as:
∂E_t/∂Wx = Σ_(k=0..t) (∂E_t/∂y_t) · (∂y_t/∂h_t) · (∂h_t/∂h_k) · (∂h_k/∂Wx)
We feed a sequence of timestamps of input and output pairs to the network. Then, we unroll the network and calculate and accumulate errors across each timestamp.
Finally, we roll up the network and update the weights.
Repeat the process.
Limitations of BPTT:
BPTT has difficulty with local optima. Local optima are a more significant issue with recurrent neural networks than with feed-forward neural networks. The recurrent feedback in such networks creates chaotic responses in the error surface, which causes local optima to occur frequently and in the wrong locations on the error surface.
When using BPTT in an RNN, we face problems such as exploding gradients and vanishing gradients. To avoid issues such as exploding gradients, we use a gradient clipping method: at each timestamp we check whether the gradient value is greater than a threshold, and if it is, we normalize it. This helps to tackle exploding gradients.
We can use BPTT up to a limited number of steps, like 8 or 10. If we backpropagate further, the gradient becomes too negligible: this is the vanishing gradient problem. To avoid the vanishing gradient problem, some of the possible solutions are:
Using the ReLU activation function in place of the tanh or sigmoid activation function.
Properly initializing the weight matrix can reduce the effect of vanishing gradients. For example, using an identity matrix helps tackle this problem.
Using gated cells such as LSTMs or GRUs.
Vanishing Gradient Problem:
Basically, during training, your cost function compares your outcomes to the desired output, so you have these error values throughout the time series, for every single timestep.
Focus on one error term e_t. We calculate the cost function e_t and then propagate it back through the network, because we need to update the weights.
The problem relates to updating w_rec (the recurrent weight), the weight that is used to connect the hidden layers to themselves in the unrolled temporal loop.
For instance, to get from x_(t-3) to x_(t-2) we multiply x_(t-3) by w_rec. Then, to get from x_(t-2) to x_(t-1) we again multiply x_(t-2) by w_rec. So, we multiply by the same exact weight multiple times, and this is where the problem arises: when we multiply something by a small number repeatedly, the value decreases very quickly.
As we know, weights are assigned at the start of the neural network with random values close to zero, and from there the network trains them up. When you start with w_rec close to zero and multiply x_t, x_(t-1), x_(t-2), x_(t-3), ... by this value, the gradient becomes smaller and smaller with each multiplication.
What does this mean for the network?
The lower the gradient is, the harder it is for the network to update the weights, and the longer it takes to get to the final result.
For instance, 1000 epochs might be enough to learn the weight for time point t, but insufficient for training the weights for time point t-3 due to a very low gradient at this point. However, the problem is not only that half of the network is not trained properly: the output of the earlier layers is used as the input for the later layers. Thus, the training for time point t happens all along based on inputs that come from untrained layers. So, because of the vanishing gradient, the whole network is not trained properly.
To sum up: if w_rec is small, you have the vanishing gradient problem, and if w_rec is large, you have the exploding gradient problem. For the vanishing gradient problem, the further you go through the network, the lower your gradient is and the harder it is to train the weights, which has a domino effect on all of the further weights throughout the network.
That was the main roadblock to using recurrent neural networks. The possible solutions to this problem are as follows:
Solutions to the vanishing gradient problem
In case of exploding gradients, you can:
Stop backpropagating after a certain point, which is usually not optimal because not all of the weights get updated.
Penalize or artificially reduce the gradient.
Put a maximum limit on the gradient.
In case of vanishing gradients, you can:
Initialize the weights so that the potential for vanishing gradients is minimized.
Use Echo State Networks, which are designed to work around the vanishing gradient problem.
Use Long Short-Term Memory Networks (LSTMs).
Gradient clipping and Long Short-Term Memory (LSTM) Networks:
Training a neural network can become unstable given the choice of error function, learning rate, or even the scale of the target variable. Large updates to weights during training can cause a numerical overflow or underflow, often referred to as "exploding gradients."
A common and relatively easy solution to the exploding gradients problem is to change the derivative of the error before propagating it backward through the network and using it to update the weights. Two approaches include rescaling the gradients given a chosen vector norm and clipping gradient values that exceed a preferred range. Together, these methods are referred to as "gradient clipping."
Exploding Gradients and Clipping
Neural networks are trained using the stochastic gradient descent optimization algorithm. This requires first the estimation of the loss on one or more training examples, then the calculation of the derivative of the loss, which is propagated backward through the network in order to update the weights. Weights are updated using a fraction of the backpropagated error controlled by the learning rate.
The difficulty that arises is that when the parameter gradient is very large, a gradient descent parameter update could throw the parameters very far, into a region where the objective function is larger, undoing much of the work that had been done to reach the current solution.
Exploding gradients can also result from a poor choice of learning rate that produces large weight updates.
One difficulty when training LSTMs with the full gradient is that the derivatives sometimes become excessively large, leading to numerical problems. To prevent this, one can clip the derivative of the loss with respect to the network inputs to the LSTM layers (before the sigmoid and tanh functions are applied) so that it lies within a predefined range.
There are two main methods for updating the error derivative:
Gradient Scaling.
Gradient Clipping.
The value for the gradient vector norm or the preferred value range can be configured by trial and error, by using common values from the literature, or by first observing common vector norms or ranges via experimentation and then choosing a sensible value.
Experimental analysis reveals that, for a given task and model size, training is not very sensitive to this [gradient norm] hyperparameter, and the algorithm behaves well even for rather small thresholds. A sketch of both approaches follows.
Gated Recurrent Unit (GRU):
A Gated Recurrent Unit (GRU) is a Recurrent Neural Network (RNN) architecture
type. Like other RNNs, a GRU can process sequential data such as time series, natural
language, and speech. The main difference between a GRU and other RNN architectures,
such as the Long Short-Term Memory (LSTM) network, is how the network handles
information flow through time.
Example: consider a sentence in which related words are far apart, such as one that mentions a "bicycle" early on and ends with "... go biking with my friends."
As can be observed from such a sentence, words that affect each other can be far apart. For example, "bicycle" and "go biking" are closely related but are placed far apart in the sentence. An RNN finds it difficult to track the state with such a long context; it needs to find out what information is important. A GRU cell, however, greatly alleviates this problem.
The GRU network was invented in 2014. It solves problems involving long sequences with contexts placed further apart, like the biking example above. This is possible because of how the GRU cell in the GRU architecture is built.
Understanding the GRU Cell:
The GRU cell is the basic building block of a GRU network. It comprises three main components: an update gate, a reset gate, and a candidate hidden state.
One of the key advantages of the GRU cell is its simplicity. Since it has fewer parameters than a long short-term memory (LSTM) cell, it is faster to train and run and less prone to overfitting.
Additionally, although the GRU cell's architecture is simple, the cell itself acts as a black box: the final decision on how much of the past state to keep and how much to forget is taken by the GRU cell.
GRU vs LSTM
Structure: GRU has a simpler structure with two gates (update and reset); LSTM has a more complex structure with three gates (input, forget, and output).
Parameters: GRU has fewer parameters (3 weight matrices); LSTM has more parameters (4 weight matrices).
Training: GRU is faster to train; LSTM is slower to train.
Space Complexity: In most cases, GRU tends to use fewer memory resources due to its simpler structure and fewer parameters, and is thus better suited for large datasets or sequences; LSTM has a more complex structure and a larger number of parameters, so it might require more memory resources and could be less effective for large datasets or sequences.
Performance: GRU generally performs similarly to LSTM on many tasks, but in some cases GRU has been shown to outperform LSTM and vice versa, so it is better to try both and see which works better for your dataset and task; LSTM generally performs well on many tasks but is more computationally expensive and requires more memory resources, and it has advantages over GRU in natural language understanding and machine translation tasks.
The Architecture of GRU
A GRU cell keeps track of the important information maintained throughout the network. A GRU network achieves this with the following two gates:
Reset Gate
Update Gate
Given below is the simplest architectural form of a GRU cell. A GRU cell takes two inputs:
1. The previous hidden state
2. The input at the current timestamp
The cell combines these and passes them through the update and reset gates. To get the output at the current timestep, the hidden state is passed through a dense layer with softmax activation to predict the output. A new hidden state is obtained and then passed on to the next time step.
Update gate
The update gate determines how much information the current GRU cell will pass to the next timestep. It helps in keeping track of the most important information.
Obtaining the output of the update gate in a GRU cell:
The input to the update gate is the hidden state at the previous timestep, h(t-1), and the current input x_t. Both have weights associated with them, which are learned during the training process. Let us say that the weight associated with h(t-1) is U(z), and that of x_t is W(z). The output of the update gate z_t is given by:
z_t = σ(W(z)·x_t + U(z)·h(t-1))
Reset gate
The input to the reset gate is the hidden state at the previous timestep, h(t-1), and the current input x_t. Both have weights associated with them, which are learned during the training process. Let us say that the weight associated with h(t-1) is U(r), and that of x_t is W(r). The output of the reset gate r_t is given by:
r_t = σ(W(r)·x_t + U(r)·h(t-1))
It is important to note that the weights associated with the hidden state at the previous timestep and with the current input are different for the two gates. The values of these weights are learned during the training process.
How Does GRU Work?
Gated Recurrent Unit (GRU) networks process sequential data, such as time series or natural language, by passing the hidden state from one time step to the next. The hidden state is a vector that captures the information from past time steps relevant to the current time step. The main idea behind a GRU is to allow the network to decide what information from the last time step is relevant to the current time step and what information can be discarded.
Candidate Hidden State
The candidate hidden state is calculated using the reset gate. It determines what information is stored from the past and is generally called the memory component of a GRU cell. It is calculated by:
h′_t = tanh(W·x_t + r_t ⊙ U·h(t-1))
Here, W is the weight associated with the current input,
r_t the output of the reset gate,
U the weight associated with the hidden state of the previous timestep,
h′_t the candidate hidden state.
Hidden state
The following formula gives the new hidden state, which depends on the update gate and the candidate hidden state:
h_t = z_t ⊙ h(t-1) + (1 - z_t) ⊙ h′_t
Here, z_t is the output of the update gate, h′_t the candidate hidden state, and h(t-1) the hidden state at the previous timestep.
It can be observed that whenever z_t is 0, the information at the previous hidden state is forgotten and replaced with the candidate hidden state (as 1 - z_t will be 1). If z_t is 1, the information from the previous hidden state is maintained. This is how the most relevant information is passed from one state to the next.
Forward Propagation in a GRU Cell
In a Gated Recurrent Unit (GRU) cell, the forward propagation process includes several steps (a sketch follows this list):
1. Calculate the output of the update gate z_t using the update gate formula.
2. Calculate the output of the reset gate r_t using the reset gate formula.
3. Calculate the candidate hidden state h′_t.
4. Calculate the new hidden state h_t.
This is how forward propagation happens in a GRU cell of a GRU network. Next, we need to understand how the weights are learned in a GRU network to make the right predictions.
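A minimal NumPy sketch of one forward step through a GRU cell, following the formulas above (dimensions and initialization are our own assumptions):

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    z_t = sigmoid(Wz @ x_t + Uz @ h_prev)            # update gate
    r_t = sigmoid(Wr @ x_t + Ur @ h_prev)            # reset gate
    h_cand = np.tanh(W @ x_t + r_t * (U @ h_prev))   # candidate hidden state
    h_t = z_t * h_prev + (1.0 - z_t) * h_cand        # new hidden state
    return h_t

n_in, n_h = 3, 4
rng = np.random.default_rng(0)
Wz, Wr, W = (rng.normal(size=(n_h, n_in)) * 0.1 for _ in range(3))
Uz, Ur, U = (rng.normal(size=(n_h, n_h)) * 0.1 for _ in range(3))
h = gru_cell(rng.normal(size=n_in), np.zeros(n_h), Wz, Uz, Wr, Ur, W, U)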
Backpropagation in a GRU Cell
Whenever the network predicts wrongly, the prediction is compared with the original label, and the loss is then propagated back through the network. This happens until the weight values are found for which the loss function is at a minimum. During this process, the weights and biases associated with the hidden layers and the input are fine-tuned.
Analogy between LSTM and GRU in terms of architecture and performance:
LSTM and GRU are two types of recurrent neural networks (RNNs) that can handle sequential data, such as text, speech, or video. They are designed to overcome the problem of vanishing or exploding gradients that affects the training of standard RNNs. However, they have different architectures and performance characteristics that make them suitable for different applications. Below are the differences and similarities between LSTM and GRU in terms of architecture and performance.
LSTM Architecture
LSTM stands for long short-term memory, and it consists of a series of memory cells
that can store and update information over long time steps. Each memory cell has three
gates: an input gate, an output gate, and a forget gate. The input gate decides what
information to add to the cell state, the output gate decides what information to output
from the cell state, and the forget gate decides what information to discard from the cell
state. The gates are learned by the network based on the input and the previous hidden
state.
GRU Architecture
GRU stands for gated recurrent unit, and it is a simplified version of LSTM. It has only two gates: a reset gate and an update gate. The reset gate decides how much of the previous hidden state to keep, and the update gate decides how much of the new input to incorporate into the hidden state. The hidden state also acts as the cell state and the output, so there is no separate output gate. The GRU is easier to implement and requires fewer parameters than the LSTM.
Performance Comparison
The performance of LSTM and GRU depends on the task, the data, and the hyperparameters. Generally, LSTM is more powerful and flexible than GRU, but it is also more complex and prone to overfitting. GRU is faster and more efficient than LSTM, but it may not capture long-term dependencies as well as LSTM. Some empirical studies have shown that LSTM and GRU perform similarly on many natural language processing tasks, such as sentiment analysis, machine translation, and text generation. However, some tasks may benefit from the specific features of LSTM or GRU, such as image captioning, speech recognition, or video analysis.
Similarities Between LSTM and GRU
Despite their differences, LSTM and GRU share some common characteristics that make them both effective RNN variants. They both use gates to control the information flow and to avoid the vanishing or exploding gradient problem. They both can learn long-term dependencies and capture sequential patterns in the data. They both can be stacked into multiple layers to increase the depth and complexity of the network.
They both can be combined with other neural network architectures, such as
convolutional neural networks (CNNs) or attention mechanisms, to enhance their
performance.
Differences Between LSTM and GRU
The main differences between LSTM and GRU lie in their architectures and their
trade-offs. LSTM has more gates and more parameters than GRU, which gives it more
flexibility and expressiveness, but also more computational cost and risk of overfitting. GRU
has fewer gates and fewer parameters than LSTM, which makes it simpler and faster, but
also less powerful and adaptable.
LSTM has a separate cell state and output, which allows it to store and output
different information, while GRU has a single hidden state that serves both purposes, which
may limit its capacity. LSTM and GRU may also have different sensitivities to the
hyperparameters, such as the learning rate, the dropout rate, or the sequence length.
Bidirectional LSTM
Introduction:
To understand the working of Bi-LSTM, the working of the LSTM unit cell and the LSTM network first has to be understood. LSTM stands for long short-term memory. In 1997, Hochreiter and Schmidhuber introduced LSTM networks. These are among the most commonly used recurrent neural networks.
Need for LSTM
Sequential data is better handled by recurrent neural networks, but sometimes it is also necessary to remember specific earlier inputs. For example, "I will play cricket" and "I can play cricket" are two different sentences with different meanings. The meaning of the sentence depends on a single word, so it is necessary to store the data of previous words. But no such memory is available in a simple RNN. To solve this problem, LSTM is adopted.
The Architecture of the LSTM Unit
The LSTM unit has three gates.
a) Input gate
First, the current state x(t) and the previous hidden state h(t-1) are passed into the input gate, i.e., the second sigmoid function. The x(t) and h(t-1) values are transformed to between 0 and 1, where 0 means unimportant and 1 means important. Furthermore, the current and hidden state information is passed through the tanh function. The output from the tanh function ranges from -1 to 1, and it helps to regulate the network. The output values generated from the two activation functions are then ready for point-by-point multiplication.
b) Forget gate
The forget gate decides which information needs to be kept for further processing and which can be ignored. The hidden state h(t-1) and current input x(t) information are passed through the sigmoid function. After passing the values through the sigmoid function, it generates values between 0 and 1 that conclude whether the part of the previous output is necessary (by giving an output closer to 1).
c) Output gate
The output gate helps in deciding the value of the next hidden state. This state contains information on previous inputs. First, the current and previous hidden state values are passed into the third sigmoid function. Then the new cell state is passed through the tanh function. Both of these outputs are multiplied point-by-point. Based upon the final value, the network decides which information the hidden state should carry. This hidden state is used for prediction.
Finally, the new cell state and the new hidden state are carried over to the next step. To conclude, the forget gate determines which relevant information from the prior steps is needed. The input gate decides what relevant information can be added from the current step, and the output gate finalizes the next hidden state.
How do LSTMs work?
The Long Short-Term Memory architecture was inspired by an analysis of error flow in existing RNNs, which revealed that long time lags were inaccessible to existing designs because the backpropagated error either blows up or decays exponentially.
An LSTM layer is made up of recurrently connected memory blocks. These blocks can be thought of as a differentiable version of a digital computer's memory chips. Each one has recurrently connected memory cells as well as three multiplicative units (the input, output, and forget gates) that offer continuous analogs of the cells' write, read, and reset operations.
What is Bi-LSTM?
Bidirectional LSTM networks function by presenting each training sequence forward and backward to two independent LSTM networks, both of which are coupled to the same output layer. This means that the Bi-LSTM contains comprehensive, sequential information about all points before and after each point in a particular sequence.
In other words, rather than encoding the sequence in the forward direction only, we encode it in the backward direction as well and concatenate the results from both directions.
Working of Bi-LSTM:
Consider the sentence "I will swim today". The image below represents the encoded representation of the sentence in the Bi-LSTM network.
In the forward LSTM, "I" is passed into the LSTM network at time t = 0, "will" at t = 1, "swim" at t = 2, and "today" at t = 3. In the backward LSTM, "today" is passed into the network at time t = 0, "swim" at t = 1, "will" at t = 2, and "I" at t = 3. In this way, the results of both the forward and backward LSTMs at each time step are calculated. A minimal sketch follows.
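A minimal Keras sketch of a Bi-LSTM classifier (vocabulary size and layer widths are our own assumptions):

import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    layers.Embedding(input_dim=10000, output_dim=64),
    # Two LSTMs, one reading forward and one backward; their
    # outputs are concatenated by default.
    layers.Bidirectional(layers.LSTM(32)),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()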
UNIT-IV
GENERATIVE ADVERSARIAL NETWORKS (GANS):
Generative models, Concept and principles of GANs, Architecture of GANs (generator and discriminator networks), Comparison between discriminative and generative models, Generative Adversarial Networks (GANs), Applications of GANs
Generative Adversarial Networks and its models
Introduction:
GAN stands for Generative Adversarial Network:
Generative - The model learns to generate new data similar to the training data.
Adversarial - The training of the model is done in an adversarial setting.
Networks - Deep neural networks are used for training purposes.
A GAN consists of two networks: a generator and a discriminator. The discriminator learns to distinguish between real and generated samples. It is trained with real samples from the training data and generated samples from the generator. The generator tries to produce samples that fool the discriminator, while the discriminator tries to improve its ability to tell real from fake. Over time, the generator learns to produce high-quality samples that are difficult for the discriminator to distinguish from real data.
GANs have been used for tasks like generating realistic images, creating art, image-to-image translation, and data augmentation.
Why were GANs developed?
Neural networks can easily be fooled into misclassifying things by adding a small amount of noise to the data, and on their own they cannot create new content. The idea arose of building something with which neural networks could start visualizing new patterns, like those in the sample training data. Thus, GANs were built to generate new, fake results that look similar to the original data.
Components of Generative Adversarial Networks (GANs):
A GAN has two components: the Generator, which learns to produce fake samples, and the Discriminator, which learns to distinguish real samples from fake ones.
What is the Geometric Intuition behind the working of GANs?
The role of the generator is like that of a thief: it generates fake samples and tries to fool the discriminator (the police). The discriminator is a model that estimates the probability that the sample it receives comes from the training data rather than from the generator, and it tries to classify samples accurately. As this competition proceeds, both networks improve.
How are the two neural networks built, and how are training and prediction done?
Training & Prediction of Generative Adversarial Networks (GANs):
Step-1) Define the Problem
The first step is to define the problem. GANs work with different kinds of problems, so you need to define what you are creating (for example, images, audio, or text).
Step-2) Select the Architecture of the GAN
There are many different types of GAN, and based on the scenario(s), a suitable architecture has to be selected.
Step-3) Train the Discriminator on the Real Dataset
The discriminator is first trained on real data in the forward path; there is no backpropagation through the generator while the discriminator is trained for n epochs. The provided data is without noise and contains only real images.
Discriminator Training:
It classifies both real and fake data.
The discriminator loss helps improve its performance and penalizes it when it misclassifies.
The weights of the discriminator are updated through the discriminator loss.
Step-4) Train the Generator
Provide some fake inputs (noise) to the generator; it will use this random noise to generate fake outputs. While the generator is being trained, the discriminator is kept idle. Training the generator takes time and runs over many epochs. The steps to train a generator are listed below (a code sketch follows the list):
Predict the generator output from the discriminator as original or fake.
Calculate the discriminator loss.
Perform backpropagation through both the discriminator and the generator to calculate gradients.
Use the gradients to update the generator weights.
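A matching sketch of these steps, using the same illustrative names as above: gradients flow back through the discriminator into the generator, but only the generator's weights are updated:

import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
g_optimizer = tf.keras.optimizers.Adam(1e-4)

def generator_step(generator, discriminator, batch_size=32, noise_dim=100):
    noise = tf.random.normal([batch_size, noise_dim])
    with tf.GradientTape() as tape:
        fake_images = generator(noise, training=True)
        fake_out = discriminator(fake_images, training=False)
        # The generator wants the discriminator to label its fakes as real.
        g_loss = bce(tf.ones_like(fake_out), fake_out)
    # Backpropagation runs through the discriminator and the generator
    # to compute gradients, but only the generator's weights are updated.
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_optimizer.apply_gradients(zip(grads, generator.trainable_variables))
    return g_loss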
Step-5) Train the Discriminator on Fake Data
The fake data generated by the generator is passed to the discriminator; it predicts whether the data passed to it is fake or real, and its weights are updated through the resulting discriminator loss.
Step-6) Train the Generator with the Output of the Discriminator
The generator is trained again with the output of the discriminator, and this loop continues until the generator succeeds in fooling the discriminator.
Generative Adversarial Networks (GANs) Loss Function:
The generator and the discriminator play a minimax game during the training process. The generator tries to minimize the following loss function while the discriminator tries to maximize it:

min_G max_D V(D, G) = E_x[ log D(x) ] + E_z[ log(1 - D(G(z))) ]

D(x) is the discriminator's estimate of the probability that real data instance x is real.
E_x is the expected value over all real data instances.
G(z) is the generator's output when given noise z.
D(G(z)) is the discriminator's estimate of the probability that a fake instance is real.
E_z is the expected value over all random inputs to the generator (in effect, the expected value over all generated fake instances G(z)).
Different Types of Generative Adversarial Networks (GANs):
1) DC GAN – It is a Deep Convolutional GAN, one of the most used, powerful, and successful types of GAN architecture.
2) Conditional GAN (CGAN) – It gives both networks extra conditioning information (such as class labels); this helps the discriminator to classify the input correctly and not be easily fooled by the generator.
3) Least Square GAN (LSGAN) – It is a type of GAN that adopts the least-square loss function for the discriminator. Minimizing the objective function of LSGAN is equivalent to minimizing the Pearson chi-squared divergence.
4) Auxiliary Classifier GAN (ACGAN) – It is an advanced version of the conditional GAN. It says that the discriminator should not only classify the image as real or fake but should also provide the source or class of the input image.
5) Dual Video Discriminator GAN (DVD-GAN) – It is a generative adversarial network for video generation built upon the BigGAN architecture. It uses two discriminators: a Spatial Discriminator and a Temporal Discriminator.
Top Generative Adversarial Networks Applications:
The rendered images can be used to augment existing image datasets or to create entirely new datasets.
8) Face Frontal View Generation: GANs can generate frontal views of faces from images that show the face at an angle. This can be used to improve the performance of face recognition algorithms or to synthesize pictures for use in other applications.
Differences Between Discriminative and Generative Models
1) Core Idea
2) Mathematical Intuition
3) Applications
4) Outliers
Generative models are affected more by outliers than discriminative models.
5) Computational Cost
Comparison Between Discriminative and Generative Models:
1) Based on Performance
2) Based on Missing Data
3) Based on the Accuracy Score
4) Based on Applications
Generative Models vs Discriminative Models:
Machine learning (ML) and deep learning (DL) are two of the most exciting and constantly changing fields of study of the 21st century. Using these technologies, machines are given the ability to learn from past data and predict or make decisions on future, unseen data.
The inspiration comes from the human mind, how we use past
experiences to help us make informed decisions in the present and the
future. And while there are already many applications of ML and DL, the
future possibilities are endless.
Quintillions of bytes of data are generated all over the world almost daily, so getting fresh data is easy. But in order to work with this gigantic amount of data, we need new algorithms, or we need to scale up existing ones. Broadly, the models that learn from such data fall into two categories:
1. Discriminative models
2. Generative models
Discriminative model
Discriminative models learn the decision boundary between classes by modelling the conditional probability P(Y|X) directly from the data. Some examples of widely used discriminative models are logistic regression, support vector machines (SVMs), and neural network classifiers.
Generative model
Generative models capture the joint distribution of the inputs X and labels Y, and they can be used both to classify and to generate new data instances.
To calculate P(Y|X), they first estimate the prior probability P(Y) and the likelihood probability P(X|Y) from the data provided, and then combine them using Bayes' theorem: P(Y|X) = P(X|Y) * P(Y) / P(X).
Some examples as well as a description of generative models are as follows.
Some widely used examples include Naive Bayes, Markov random fields, hidden Markov models (HMM), latent Dirichlet allocation (LDA), etc.
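As a concrete sketch of this recipe, here is a tiny Gaussian naive Bayes classifier built from scratch in NumPy. The toy data and variable names are illustrative; it estimates P(Y) and P(X|Y) and combines them with Bayes' theorem:

import numpy as np

# Toy data: two classes, two features (illustrative values only).
X = np.array([[1.0, 2.0], [1.2, 1.8], [4.0, 5.0], [4.2, 4.8]])
y = np.array([0, 0, 1, 1])

classes = np.unique(y)
priors = {c: np.mean(y == c) for c in classes}          # P(Y)
means  = {c: X[y == c].mean(axis=0) for c in classes}   # parameters of P(X|Y)
stds   = {c: X[y == c].std(axis=0) + 1e-6 for c in classes}

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def predict(x):
    # P(Y|X) is proportional to P(X|Y) * P(Y); the evidence P(X)
    # cancels out when comparing scores across classes.
    scores = {c: priors[c] * np.prod(gaussian(x, means[c], stds[c])) for c in classes}
    return max(scores, key=scores.get)

print(predict(np.array([1.1, 1.9])))  # -> 0
print(predict(np.array([4.1, 5.1])))  # -> 1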
Discriminative vs generative: which is the best fit for Deep Learning?
UNIT-V
Auto-encoders:
Autoencoders are a type of deep learning algorithm designed to receive an input and transform it into a different representation. They play an important part in image construction. Artificial intelligence encompasses a wide range of technologies and techniques that enable computer systems to solve problems like data compression, which is used in computer vision, computer networks, computer architecture, and many other fields.
Autoencoders: Their Emergence
Autoencoders are preferred over PCA because:
An autoencoder can learn non-linear transformations with a non-linear activation function and multiple layers.
It does not have to rely on dense layers only; it can use convolutional layers, which suit image data.
It is more efficient to learn several layers with an autoencoder than one huge transformation with PCA.
Applications of Autoencoders
1) Image Coloring
Autoencoders are used for converting any black-and-white picture into a colored image. Depending on what is in the picture, it is possible to tell what the color should be.
2) Feature Variation
It extracts only the required features of an image and generates the
output by removing any noise or unnecessary interruption.
3) Dimensionality Reduction
The reconstructed image is the same as our input but with reduced dimensions. It helps in providing a similar image with reduced pixel values.
4) Denoising Image
The input seen by the autoencoder is not the raw input but a stochastically corrupted version. A denoising autoencoder is thus trained to reconstruct the original input from the noisy version (a minimal training sketch follows this list).
5) Watermark Removal
It is also used to remove watermarks from images or to remove objects while filming a video or a movie.
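The sketch promised under item 4: a minimal denoising set-up in Keras (TensorFlow 2.x), where the network sees a stochastically corrupted input but is trained to reproduce the clean one. The shapes, layer sizes, and noise level are illustrative assumptions:

import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Illustrative data: 256 flattened 28x28 "images" with values in [0, 1].
x_clean = np.random.rand(256, 784).astype("float32")
x_noisy = np.clip(x_clean + 0.2 * np.random.randn(256, 784), 0.0, 1.0).astype("float32")

autoencoder = models.Sequential([
    layers.Dense(64, activation="relu"),    # encoder
    layers.Dense(32, activation="relu"),    # code (bottleneck)
    layers.Dense(64, activation="relu"),    # decoder
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Key idea of denoising: noisy input, clean target.
autoencoder.fit(x_noisy, x_clean, epochs=5, batch_size=32, verbose=0)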
Architecture of Autoencoders
An autoencoder consists of three layers:
1. Encoder
2. Code
3. Decoder
The layer between the encoder and decoder, i.e. the code, is also known as the bottleneck. It decides which aspects of the observed data are relevant information and which aspects can be discarded, balancing two criteria:
Compactness of representation, measured as the compressibility.
It retains some behaviourally relevant variables from the input.
Training an auto-encoder for data compression and reconstruction:
The encoder network takes the input data and maps it to a lower-
dimensional representation. This lower-dimensional representation is the
compressed data. The decoder network takes this compressed data and
maps it back to the original input data. The decoder network is essentially
the inverse of the encoder network.
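A minimal Keras sketch of this encoder-code-decoder pipeline (TensorFlow 2.x); the 784-dimensional input and 32-dimensional code are illustrative choices, not prescribed by the notes:

import tensorflow as tf
from tensorflow.keras import layers, Model

input_dim, code_dim = 784, 32  # illustrative sizes

inputs = tf.keras.Input(shape=(input_dim,))
# Encoder: map the input down to the compressed code.
h = layers.Dense(128, activation="relu")(inputs)
code = layers.Dense(code_dim, activation="relu")(h)
# Decoder: map the code back up to the original space.
h = layers.Dense(128, activation="relu")(code)
outputs = layers.Dense(input_dim, activation="sigmoid")(h)

autoencoder = Model(inputs, outputs)
# Reconstruction loss: the target of training is the input itself.
autoencoder.compile(optimizer="adam", loss="mse")
# Training would look like: autoencoder.fit(x, x, epochs=..., batch_size=...)

Note that the target passed to fit is the input itself; the bottleneck forces the network to learn a compressed representation from which the decoder can reconstruct the data.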
Image Compression with Autoencoders
The quality of compression is evaluated by comparing the reconstructed images to the original images. The most common evaluation metric is the peak signal-to-noise ratio (PSNR), which measures the amount of noise introduced by the compression algorithm. Higher PSNR values indicate better compression quality.
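PSNR can be computed directly from the mean squared error between the original and reconstructed images. A small NumPy sketch, assuming 8-bit pixel values in [0, 255]:

import numpy as np

def psnr(original, reconstructed, max_val=255.0):
    # PSNR = 10 * log10(MAX^2 / MSE); higher is better.
    mse = np.mean((original.astype("float64") - reconstructed.astype("float64")) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)

# Illustrative example: an image and a slightly perturbed copy.
a = np.random.randint(0, 256, (28, 28))
b = np.clip(a + np.random.randint(-5, 6, (28, 28)), 0, 255)
print(round(psnr(a, b), 2))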
Image Reconstruction with Autoencoders
Explanation of image reconstruction from compressed data:
How autoencoders can be used for image reconstruction:
Examples of image reconstruction using autoencoders:
Evaluation of the efficiency of autoencoder-based reconstruction techniques:
The efficiency of reconstruction is commonly evaluated with the peak signal-to-noise ratio (PSNR) and the Structural SIMilarity index (SSIM). PSNR measures the quality of the reconstructed image by comparing it to the original image, while SSIM measures the structural similarity between the reconstructed and original images.
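Both metrics are available in TensorFlow as tf.image.psnr and tf.image.ssim; a small sketch with illustrative random image tensors:

import tensorflow as tf

# Two illustrative 64x64 RGB image batches with values in [0, 1].
original = tf.random.uniform((1, 64, 64, 3))
reconstructed = tf.clip_by_value(
    original + tf.random.normal((1, 64, 64, 3), stddev=0.05), 0.0, 1.0)

# Both functions compare image batches directly and return one value per image.
print(float(tf.image.psnr(original, reconstructed, max_val=1.0)[0]))
print(float(tf.image.ssim(original, reconstructed, max_val=1.0)[0]))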
Variations of Autoencoders for Image Compression and Reconstruction
1) Denoising autoencoders: trained on stochastically corrupted inputs to reconstruct the clean originals.
2) Variational autoencoders: learn a probabilistic latent distribution instead of a fixed code, so new samples can be generated from the latent space.
3) Convolutional autoencoders: use convolutional layers in the encoder and decoder, which suits image data.
Real-Time Examples:
Applications of Autoencoders for Image Compression and Reconstruction
1) Medical Imaging:
2) Video Compression:
3) Autonomous Vehicles:
4) Social Media and Web Applications:
Relationship between Autoencoders and GANs:
Both are neural generative approaches: an autoencoder learns to reconstruct its input through a compressed code, while a GAN learns to produce new samples through an adversarial game between a generator and a discriminator. The decoder of an autoencoder plays a role similar to the generator of a GAN, mapping a low-dimensional representation to a data sample.
Hybrid Models: Encoder-Decoder GANs:
How can you combine GANs and autoencoders to create hybrid models for various tasks?
Generative adversarial networks (GANs) and autoencoders are two powerful types of artificial neural networks that can learn from data and generate new samples. But what if you could combine them to create hybrid models that can perform various tasks, such as image synthesis, anomaly detection, or domain adaptation?
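One common way to combine them, sketched below under strong simplifying assumptions, is to use an autoencoder as the generator of a GAN and train it both to reconstruct its input and to fool a discriminator. All model sizes, names, and the loss weighting are illustrative, not a specific published architecture:

import tensorflow as tf
from tensorflow.keras import layers, Model

dim, code_dim = 784, 32  # illustrative sizes

# Autoencoder used as the generator: encode, then decode.
ae_in = tf.keras.Input(shape=(dim,))
code = layers.Dense(code_dim, activation="relu")(ae_in)
recon = layers.Dense(dim, activation="sigmoid")(code)
generator = Model(ae_in, recon)

# Discriminator: tells real inputs from autoencoder reconstructions.
d_in = tf.keras.Input(shape=(dim,))
d_hidden = layers.Dense(64, activation="relu")(d_in)
d_out = layers.Dense(1, activation="sigmoid")(d_hidden)
discriminator = Model(d_in, d_out)

bce = tf.keras.losses.BinaryCrossentropy()
mse = tf.keras.losses.MeanSquaredError()
g_opt = tf.keras.optimizers.Adam(1e-4)

def hybrid_generator_step(x_real, adv_weight=0.1):
    with tf.GradientTape() as tape:
        x_rec = generator(x_real, training=True)
        d_pred = discriminator(x_rec, training=False)
        # Two objectives: reconstruct the input AND make the
        # reconstruction look "real" to the discriminator.
        g_loss = mse(x_real, x_rec) + adv_weight * bce(tf.ones_like(d_pred), d_pred)
    grads = tape.gradient(g_loss, generator.trainable_variables)
    g_opt.apply_gradients(zip(grads, generator.trainable_variables))
    return g_loss

The discriminator would be updated with a step like the one shown in Unit-IV. At test time, the reconstruction error itself can serve as an anomaly score, which is the usual route to anomaly detection with such hybrids.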
GANs and autoencoders
Hybrid models
Image synthesis
Anomaly detection
Domain adaptation