NPTEL Week 3 Deep Learning Assignment

This document is an assessment submission for a Deep Learning course on NPTEL. It consists of 10 multiple-choice questions on feedforward neural networks, activation functions, loss functions, and backpropagation. The assessment has been submitted and the submission link is provided.

Uploaded by Dhivya

Assessment submitted.

NPTEL » Deep Learning - IIT Ropar (course)

Thank you for taking the Week 3: Assignment 3.

Week 3: Assignment 3
Your last recorded submission was on 2022-08-16, 14:38. Due date: 2022-08-17, 23:59 IST.
1) Assume you are developing a model to predict a probability as the output. Pick out the appropriate activation function. (1 point)

linear
sigmoid
tanh
ReLU
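For intuition on why sigmoid fits a probability output, a minimal sketch (plain Python; the helper name `sigmoid` is mine):

```python
import math

def sigmoid(z):
    # Squashes any real number into the open interval (0, 1),
    # so the output can be read as a probability.
    return 1.0 / (1.0 + math.exp(-z))

print(sigmoid(-5))  # close to 0
print(sigmoid(0))   # exactly 0.5
print(sigmoid(5))   # close to 1
```

Linear, tanh, and ReLU outputs are not confined to (0, 1), which is why sigmoid is the natural choice here.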

2) The pre-activation at layer i can be best described as the (1 point)

weighted sum of all the inputs at layer i
sum of all the inputs at layer i
weighted sum of all the inputs at layer i + 1
sum of all the inputs at layer i + 1
weighted sum of all the inputs at layer i − 1
sum of all the inputs at layer i − 1
3) Consider a Machine Learning model that is applied to a specific set of inputs. The actual output is yi = [10, 5, 7, 8, 6] and the predicted output is ŷi = [9, 6, 5, 7, 5]. Compute the Mean Squared error loss. (1 point)

8
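As a check, the squared errors for these vectors can be computed directly (plain Python sketch; note that the *sum* of squared errors is 8, while dividing by the 5 samples gives a mean of 1.6):

```python
y_true = [10, 5, 7, 8, 6]
y_pred = [9, 6, 5, 7, 5]

# Sum of squared differences between actual and predicted values:
# errors are [1, -1, 2, 1, 1], squares are [1, 1, 4, 1, 1].
sse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
mse = sse / len(y_true)  # divide by the number of samples

print(sse)  # 8
print(mse)  # 1.6
```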
4) Consider a Classification problem with k classes. The output being a probability distribution, which of the following is the best output function? (1 point)

Linear
Sigmoid
tanh
softmax
5) Given the output yj = O(aL)j and aL = [2.5, 3.6, 4.2, 5]. If 'O' is the softmax function, compute the value of ŷ = [ŷ1, ŷ2, ŷ3, ŷ4]. (1 point)

[0.046, 0.139, 0.253, 0.562]
[0.046, 0.253, 0.562, 0.139]
[0.253, 0.046, 0.139, 0.562]
[0.562, 0.046, 0.139, 0.253]
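The softmax values in question 5 can be reproduced with a short sketch (plain Python; `softmax` is my helper name):

```python
import math

def softmax(a):
    # Exponentiate each pre-activation, then normalise so the
    # outputs form a probability distribution (they sum to 1).
    exps = [math.exp(x) for x in a]
    total = sum(exps)
    return [e / total for e in exps]

y = softmax([2.5, 3.6, 4.2, 5])
print([round(v, 3) for v in y])  # [0.046, 0.139, 0.253, 0.562]
```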
6) The information content is high for an event when the probability of the event is (1 point)

high
low
1
maximum
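Information content (self-information) is usually defined as I(x) = −log p(x), so rarer events carry more information. A quick sketch, assuming a base-2 logarithm (bits):

```python
import math

def information_content(p):
    # Self-information in bits: rare events (small p) carry
    # much more information than near-certain ones.
    return -math.log2(p)

print(information_content(0.99))  # ~0.014 bits: almost-certain event
print(information_content(0.01))  # ~6.64 bits: rare event
```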
7) Assume you have four inputs to a Feed Forward neural network, the first hidden layer also has four neurons, and there are three output classes. What is the dimension of the weight matrix W1 between the input layer and the first hidden layer, given that there is only one hidden layer? (1 point)

R^(3×3)
R^(4×3)
R^(4×4)
R^(3×4)
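The shape of W1 follows from the matrix-vector product a1 = b1 + W1·x: with 4 inputs and 4 first-layer neurons, W1 must map a 4-vector to a 4-vector. A NumPy sketch (layer sizes from the question; variable names are mine):

```python
import numpy as np

n_inputs, n_hidden, n_classes = 4, 4, 3

x = np.ones(n_inputs)                # a 4-dimensional input
W1 = np.zeros((n_hidden, n_inputs))  # 4x4: maps inputs to hidden pre-activations
b1 = np.zeros(n_hidden)

a1 = b1 + W1 @ x   # pre-activation of the first hidden layer
print(W1.shape)    # (4, 4)
print(a1.shape)    # (4,)
```

The three output classes only constrain the *next* weight matrix (which would be 3×4), not W1.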
8) In a Feed Forward Neural Network, if the outputs take real values, then which of the following output activation function and error function pairs do you prefer? (1 point)

Linear, cross entropy
Softmax, cross entropy
Linear, Squared error
Softmax, Squared error
9) The activation at any layer i is given by (1 point)

h_i(x) = b_i + W_i h_{i−1}(x)
h_i(x) = g(a_i(x))
h_i(x) = O(a_L)
h_i(x) = a_i + W_i h_{i−1}(x)
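The distinction in question 9 is between the pre-activation a_i(x) = b_i + W_i h_{i−1}(x) and the activation h_i(x) = g(a_i(x)). A minimal forward-pass sketch (NumPy; tanh chosen here as an example of g, and the toy weights are mine):

```python
import numpy as np

def layer_forward(h_prev, W, b, g=np.tanh):
    # Pre-activation: affine transform of the previous layer's activation.
    a = b + W @ h_prev
    # Activation: elementwise non-linearity applied to the pre-activation.
    h = g(a)
    return a, h

h0 = np.array([1.0, 2.0])          # input to the network
W1 = np.array([[0.5, -0.5],
               [1.0,  0.0]])
b1 = np.array([0.1, 0.2])

a1, h1 = layer_forward(h0, W1, b1)
print(a1)  # pre-activation at layer 1
print(h1)  # activation = tanh(a1)
```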
10) Identify the loss function for a classification problem to choose one out of K classes. (1 point)

Squared error
Absolute error
minimize_θ L(θ) = −log(ŷ_l)
maximize_θ L(θ) = −log(ŷ_l)
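The cross-entropy loss in question 10 looks up the probability the model assigns to the true class l and minimises −log ŷ_l. A sketch (plain Python; function and variable names are mine):

```python
import math

def cross_entropy(y_hat, true_class):
    # Negative log-probability assigned to the correct class:
    # 0 when y_hat[true_class] == 1, and grows as that probability shrinks.
    return -math.log(y_hat[true_class])

y_hat = [0.1, 0.7, 0.2]  # predicted distribution over 3 classes
print(round(cross_entropy(y_hat, 1), 4))  # -log(0.7) ≈ 0.3567
```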

You may submit any number of times before the due date. The final submission will be considered for grading.
Submit Answers