Lecture 2.3.1 - Autoencoders
Autoencoders:
A typical use of a neural network is supervised learning: the training data contains
an output label, and the network learns the mapping from the given input to that
label. But what if the output label is replaced by the input vector itself? The
network then tries to learn a mapping from the input to itself. Learned without
restriction, this would be the identity function, a trivial mapping. If the network
is not allowed simply to copy the input, however, it is forced to capture only the
salient features of the data. This constraint opens up a different field of
applications for neural networks, chiefly dimensionality reduction and data
compression. The network is first trained on the given input; it then reconstructs
that input from the features it picked up and outputs an approximation of the
input. The training step involves computing the reconstruction error and
backpropagating it. The typical architecture of an autoencoder resembles a
bottleneck: wide layers narrow down to a small code layer and then widen again.
The encoder half of the network performs the encoding and is sometimes even used
for data compression, although it is not very effective compared with dedicated
compression techniques such as JPEG. The encoder has a decreasing number of hidden
units in each layer, so it is forced to pick up only the most significant and
representative features of the data. The second half of the network performs the
decoding: it has an increasing number of hidden units in each layer and tries to
reconstruct the original input from the encoded data. Because no output labels are
required, autoencoders are an unsupervised learning technique.
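The encoder/decoder shape described above can be sketched in plain numpy (a minimal illustration; the layer sizes 8 → 4 → 2 → 4 → 8 and the tanh activation are assumptions, not fixed by the lecture):

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """Random weight matrix and zero bias for one fully connected layer."""
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

# Encoder: decreasing number of hidden units per layer.
W1, b1 = layer(8, 4)
W2, b2 = layer(4, 2)   # 2-unit bottleneck holds the compressed code
# Decoder: increasing number of hidden units per layer.
W3, b3 = layer(2, 4)
W4, b4 = layer(4, 8)

def encode(x):
    h = np.tanh(x @ W1 + b1)
    return np.tanh(h @ W2 + b2)

def decode(z):
    h = np.tanh(z @ W3 + b3)
    return h @ W4 + b4          # linear output layer

x = rng.standard_normal((5, 8))  # 5 samples, 8 features
code = encode(x)
x_hat = decode(code)
print(code.shape)   # (5, 2) -- compressed representation
print(x_hat.shape)  # (5, 8) -- reconstruction has the input's width
```

The bottleneck forces all information to pass through the 2-unit code, which is what prevents the network from learning the trivial identity mapping.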
Step 1: Encoding the input data. In the code below, the training data is fitted
to itself: instead of fitting X_train to Y_train, X_train is used in both places,
as both the input and the target.
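The code the lecture refers to is not reproduced in these notes, so here is a minimal framework-free sketch of the same idea (plain numpy; the linear layers, sizes, and learning rate are all illustrative assumptions). The key point is that the target of the training loop is X_train itself:

```python
import numpy as np

rng = np.random.default_rng(1)
X_train = rng.standard_normal((100, 6))

W_enc = rng.standard_normal((6, 3)) * 0.1   # 6 -> 3 bottleneck
W_dec = rng.standard_normal((3, 6)) * 0.1   # 3 -> 6 reconstruction

lr = 0.05
for _ in range(200):
    code = X_train @ W_enc
    X_hat = code @ W_dec
    err = X_hat - X_train                    # target is X_train, not Y_train
    # Gradients of the squared reconstruction error w.r.t. both weights.
    grad_dec = code.T @ err / len(X_train)
    grad_enc = X_train.T @ (err @ W_dec.T) / len(X_train)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

loss = np.mean((X_train - (X_train @ W_enc) @ W_dec) ** 2)
print(f"final reconstruction MSE: {loss:.4f}")
```

In a high-level framework the same pattern appears as fitting the model with the input array passed in place of the usual label array.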
Step 2: Decoding the input data. The autoencoder tries to reconstruct the
original input from the encoded data, which tests the reliability of the encoding.
Step 3: Backpropagating the error. After the reconstruction, the loss function is
computed to measure the reliability of the encoding, and the resulting error is
backpropagated to update the weights.
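Steps 2 and 3 can be sketched together with the backpropagation written out explicitly (plain numpy; the single tanh hidden layer and all sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal((32, 4))           # a batch of inputs
W1 = rng.standard_normal((4, 2)) * 0.5     # encoder weights
W2 = rng.standard_normal((2, 4)) * 0.5     # decoder weights

# Step 2: decode -- reconstruct the input from the encoded data.
z = np.tanh(x @ W1)                        # encoding (bottleneck)
x_hat = z @ W2                             # reconstruction

# Step 3: compute the loss and backpropagate the error.
loss = np.mean((x_hat - x) ** 2)
delta_out = 2 * (x_hat - x) / x.size       # dL/dx_hat
grad_W2 = z.T @ delta_out                  # error reaches the decoder weights
delta_hid = (delta_out @ W2.T) * (1 - z ** 2)  # tanh'(a) = 1 - tanh(a)^2
grad_W1 = x.T @ delta_hid                  # ...and flows back to the encoder

print(f"loss = {loss:.4f}")
print(grad_W1.shape, grad_W2.shape)        # (4, 2) (2, 4)
```

A gradient-descent step would then subtract a small multiple of each gradient from the corresponding weight matrix, exactly as in any supervised network; the only difference is that the "label" here is the input itself.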
References:
1. Book: https://www.amazon.in/Principles-Soft-Computing-2ed-WIND/dp/8126527412
2. Article: https://analyticsindiamag.com/6-types-of-artificial-neural-networks-currently-being-used-in-todays-technology/
3. Video: https://www.youtube.com/watch?v=K9gjuXjJeEM&list=PLJ5C_6qdAvBFqAYS0P9INAogIMklG8E-9