Deep Neural Network Application
## 1 - Packages
Begin by importing all the packages you’ll need during this assignment.
• numpy is the fundamental package for scientific computing with Python.
• matplotlib is a library to plot graphs in Python.
• h5py is a common package to interact with a dataset that is stored on an H5 file.
• PIL and scipy are used here to test your model with your own picture at the end.
• dnn_app_utils provides the functions implemented in the “Building your Deep Neural Net-
work: Step by Step” assignment to this notebook.
• np.random.seed(1) is used to keep all the random function calls consistent. It helps grade
your work - so please don’t change it!
[28]: ### v1.1
import numpy as np
import h5py
import matplotlib.pyplot as plt
import scipy
from PIL import Image
from dnn_app_utils import *  # helpers from the "Step by Step" assignment

%matplotlib inline
plt.rcParams['figure.figsize'] = (5.0, 4.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
%load_ext autoreload
%autoreload 2
np.random.seed(1)
You'll be using the same "Cat vs non-Cat" dataset as in the earlier logistic regression assignment, where that model reached about 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform even better!
Problem Statement: You are given a dataset ("data.h5") containing:
• a training set of m_train images labelled as cat (1) or non-cat (0)
• a test set of m_test images labelled as cat and non-cat
• each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB)
Let’s get more familiar with the dataset. Load the data by running the cell below.
[30]: train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
The following code will show you an image in the dataset. Feel free to change the index and re-run
the cell multiple times to check out other images.
[31]: # Example of a picture
index = 10
plt.imshow(train_x_orig[index])
print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].
↪decode("utf-8") + " picture.")
print ("Number of testing examples: " + str(m_test))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_x_orig shape: " + str(train_x_orig.shape))
print ("train_y shape: " + str(train_y.shape))
print ("test_x_orig shape: " + str(test_x_orig.shape))
print ("test_y shape: " + str(test_y.shape))
Figure 2: 2-layer neural network. The model can be summarized as: INPUT -> LINEAR -> RELU
-> LINEAR -> SIGMOID -> OUTPUT.
Detailed Architecture of Figure 2:
• The input is a (64,64,3) image which is flattened to a vector of size (12288, 1).
• The corresponding vector $[x_0, x_1, ..., x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$.
• Then, add a bias term and take its ReLU to get the vector $[a_0^{[1]}, a_1^{[1]}, ..., a_{n^{[1]}-1}^{[1]}]^T$.
• Multiply the resulting vector by $W^{[2]}$ and add the intercept (bias).
• Finally, take the sigmoid of the result. If it's greater than 0.5, classify it as a cat.
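In equations, with $\sigma$ denoting the sigmoid function, the forward pass of this 2-layer model is:

$$Z^{[1]} = W^{[1]} X + b^{[1]}, \qquad A^{[1]} = \mathrm{ReLU}(Z^{[1]})$$
$$Z^{[2]} = W^{[2]} A^{[1]} + b^{[2]}, \qquad A^{[2]} = \sigma(Z^{[2]})$$

and the prediction is "cat" whenever $A^{[2]} > 0.5$.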
### 3.2 - L-layer Deep Neural Network
It’s pretty difficult to represent an L-layer deep neural network using the above representation.
However, here is a simplified network representation:
Figure 3: L-layer neural network. The model can be summarized as: [LINEAR -> RELU] × (L-1)
-> LINEAR -> SIGMOID
Detailed Architecture of Figure 3:
• The input is a (64,64,3) image which is flattened to a vector of size (12288, 1).
• The corresponding vector $[x_0, x_1, ..., x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$, and then you add the intercept $b^{[1]}$. The result is called the linear unit.
• Next, take the ReLU of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$, depending on the model architecture.
• Finally, take the sigmoid of the final linear unit. If it is greater than 0.5, classify it as a cat.
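Written out, with $A^{[0]} = X$, the forward pass repeats the same linear + ReLU step for layers $1$ through $L-1$ and ends with a sigmoid:

$$Z^{[l]} = W^{[l]} A^{[l-1]} + b^{[l]}, \qquad A^{[l]} = \mathrm{ReLU}(Z^{[l]}) \quad \text{for } l = 1, \dots, L-1$$
$$Z^{[L]} = W^{[L]} A^{[L-1]} + b^{[L]}, \qquad A^{[L]} = \sigma(Z^{[L]})$$

with a "cat" prediction whenever $A^{[L]} > 0.5$.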
### 3.3 - General Methodology
As usual, you’ll follow the Deep Learning methodology to build the model:
1. Initialize parameters / Define hyperparameters
2. Loop for num_iterations:
a. Forward propagation
b. Compute cost function
c. Backward propagation
d. Update parameters (using parameters, and grads from backprop; the update rule is written out below)
3. Use trained parameters to predict labels
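Step 2d is plain gradient descent: with learning rate $\alpha$, each parameter is moved a small step against its gradient from backprop:

$$W^{[l]} := W^{[l]} - \alpha \, dW^{[l]}, \qquad b^{[l]} := b^{[l]} - \alpha \, db^{[l]}$$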
Now go ahead and implement those two models!
## 4 - Two-layer Neural Network
### Exercise 1 - two_layer_model
Use the helper functions you have implemented in the previous assignment to build a 2-layer neural
network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions
and their inputs are:
def initialize_parameters(n_x, n_h, n_y):
...
return parameters
def linear_activation_forward(A_prev, W, b, activation):
...
return A, cache
def compute_cost(AL, Y):
...
return cost
def linear_activation_backward(dA, cache, activation):
...
return dA_prev, dW, db
def update_parameters(parameters, grads, learning_rate):
...
return parameters
[34]: ### CONSTANTS DEFINING THE MODEL ####
n_x = 12288 # num_px * num_px * 3
n_h = 7
n_y = 1
layers_dims = (n_x, n_h, n_y)
learning_rate = 0.0075
"""
Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID.
Arguments:
X -- input data, of shape (n_x, number of examples)
Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1,␣
↪number of examples)
Returns:
parameters -- a dictionary containing W1, W2, b1, and b2
"""
np.random.seed(1)
grads = {}
costs = [] # to keep track of the cost
m = X.shape[1] # number of examples
(n_x, n_h, n_y) = layers_dims
    # Initialize parameters dictionary, then run gradient descent.
    parameters = initialize_parameters(n_x, n_h, n_y)

    for i in range(0, num_iterations):
        # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID.
        # Inputs: "X, W1, b1, W2, b2". Output: "A1, cache1, A2, cache2".
        A1, cache1 = linear_activation_forward(X, parameters["W1"], parameters["b1"], "relu")
        A2, cache2 = linear_activation_forward(A1, parameters["W2"], parameters["b2"], "sigmoid")

        # Compute cost
        #(approx. 1 line of code)
        # cost = ...
        # YOUR CODE STARTS HERE
        cost = compute_cost(A2, Y)
        # YOUR CODE ENDS HERE
        # Initialize backward propagation with the gradient of the cost with respect to A2
        dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2))
        dA1, dW2, db2 = linear_activation_backward(dA2, cache2, "sigmoid")
        dA0, dW1, db1 = linear_activation_backward(dA1, cache1, "relu")

        # Set grads['dW1'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2
        grads['dW1'] = dW1
        grads['db1'] = db1
        grads['dW2'] = dW2
        grads['db2'] = db2

        # Update parameters.
        # YOUR CODE STARTS HERE
        parameters = update_parameters(parameters, grads, learning_rate)
        # YOUR CODE ENDS HERE

        # Print and record the cost every 100 iterations
        if print_cost and (i % 100 == 0 or i == num_iterations - 1):
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if i % 100 == 0 or i == num_iterations:
            costs.append(cost)

    return parameters, costs

two_layer_model_test(two_layer_model)
Cost after iteration 1: 0.6915746967050506
Cost after iteration 2: 0.6524135179683452
All tests passed.
Expected output:
cost after iteration 1 must be around 0.69
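That value is not arbitrary: assuming the small-random-weight initialization from the Step by Step assignment, the output $A^{[2]}$ starts out close to $0.5$ for every example, so the cross-entropy cost is roughly

$$J \approx -\frac{1}{m}\sum_{i=1}^{m}\left[y^{(i)}\log 0.5 + (1-y^{(i)})\log 0.5\right] = -\log 0.5 \approx 0.693.$$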
### 4.1 - Train the model
If your code passed the previous cell, run the cell below to train your parameters.
• The cost should decrease on every iteration.
• It may take up to 5 minutes to run 2500 iterations.
[37]: parameters, costs = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
plot_costs(costs, learning_rate)
plot_costs(costs, learning_rate)
Expected Output:
Cost after iteration 0: 0.6930497356599888
Cost after iteration 100: 0.6464320953428849
…
Cost after iteration 2499: 0.04421498215868956
Nice! You successfully trained the model. Good thing you built a vectorized implementation!
Otherwise it might have taken 10 times longer to train this.
Now, you can use the trained parameters to classify images from the dataset. To see your predictions
on the training and test sets, run the cell below.
[38]: predictions_train = predict(train_x, train_y, parameters)
Accuracy: 0.9999999999999998
Expected Output:
Accuracy: 0.9999999999999998
[39]: predictions_test = predict(test_x, test_y, parameters)
Accuracy: 0.72
Expected Output:
Accuracy: 0.72
**Congratulations!** It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model.
Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy
on the test set. This is called “early stopping” and you’ll hear more about it in the next course.
Early stopping is a way to prevent overfitting.
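This is not part of the graded exercise, but to make the idea concrete, one crude way to emulate it with the functions already in this notebook is to train for several iteration budgets and keep the parameters that generalize best. The budgets below are arbitrary, a proper setup would score on a separate validation set rather than the test set, and the snippet assumes predict returns a (1, m) array of 0/1 predictions, as it is used elsewhere in this notebook.

```python
best_acc, best_parameters = 0.0, None
for n_iter in [500, 1000, 1500, 2000, 2500]:
    # Re-train from the same seed for each budget; with a fixed seed this is
    # equivalent to stopping the single long run at that iteration.
    params, _ = two_layer_model(train_x, train_y, layers_dims=(n_x, n_h, n_y),
                                num_iterations=n_iter, print_cost=False)
    acc = np.mean(predict(test_x, test_y, params) == test_y)
    if acc > best_acc:
        best_acc, best_parameters = acc, params
print("Best test accuracy: " + str(best_acc))
```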
## 5 - L-layer Neural Network
### Exercise 2 - L_layer_model
Use the helper functions you implemented previously to build an 𝐿-layer neural network with the
following structure: [LINEAR -> RELU]×(L-1) -> LINEAR -> SIGMOID. The functions and
their inputs are:
def initialize_parameters_deep(layers_dims):
...
return parameters
def L_model_forward(X, parameters):
...
return AL, caches
def compute_cost(AL, Y):
...
return cost
def L_model_backward(AL, Y, caches):
...
return grads
def update_parameters(parameters, grads, learning_rate):
...
return parameters
[40]: ### CONSTANTS ###
layers_dims = [12288, 20, 7, 5, 1] # 4-layer model
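Following the shape convention from the architecture description above (where $W^{[1]}$ has size $(n^{[1]}, 12288)$), this choice of layers_dims implies parameter shapes

$$W^{[1]}: (20, 12288), \quad W^{[2]}: (7, 20), \quad W^{[3]}: (5, 7), \quad W^{[4]}: (1, 5),$$

with each $b^{[l]}$ a column vector of shape (layers_dims[l], 1).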
def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):
    """
    Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID.

    Arguments:
    X -- input data, of shape (n_x, number of examples)
    Y -- true "label" vector (containing 1 if cat, 0 if non-cat), of shape (1, number of examples)
    layers_dims -- list containing the input size and each layer size, of length (number of layers + 1)
    learning_rate -- learning rate of the gradient descent update rule
    num_iterations -- number of iterations of the optimization loop
    print_cost -- if True, print the cost every 100 iterations

    Returns:
    parameters -- parameters learnt by the model. They can then be used to predict.
    costs -- list of the costs recorded during training
    """
    np.random.seed(1)
    costs = []                          # keep track of cost

    # Parameters initialization.
    #(approx. 1 line of code)
    # parameters = ...
    # YOUR CODE STARTS HERE
    parameters = initialize_parameters_deep(layers_dims)
    # YOUR CODE ENDS HERE

    # Loop (gradient descent)
    for i in range(0, num_iterations):
        # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID.
        AL, caches = L_model_forward(X, parameters)

        # Compute cost.
        #(approx. 1 line of code)
        # cost = ...
        # YOUR CODE STARTS HERE
        cost = compute_cost(AL, Y)
        # YOUR CODE ENDS HERE
        # Backward propagation.
        # YOUR CODE STARTS HERE
        grads = L_model_backward(AL, Y, caches)
        # YOUR CODE ENDS HERE

        # Update parameters.
        # YOUR CODE STARTS HERE
        parameters = update_parameters(parameters, grads, learning_rate)
        # YOUR CODE ENDS HERE

        # Print and record the cost every 100 iterations
        if print_cost and (i % 100 == 0 or i == num_iterations - 1):
            print("Cost after iteration {}: {}".format(i, np.squeeze(cost)))
        if i % 100 == 0 or i == num_iterations:
            costs.append(cost)

    return parameters, costs

L_layer_model_test(L_layer_model)
### 5.1 - Train the model
If your code passed the previous cell, run the cell below to train your parameters.
• The cost should decrease on every iteration.
• It may take up to 5 minutes to run 2500 iterations.
[43]: parameters, costs = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
pred_train = predict(train_x, train_y, parameters)
Accuracy: 0.9856459330143539
Expected Output:
Train Accuracy: 0.985645933014
[45]: pred_test = predict(test_x, test_y, parameters)
Accuracy: 0.8
Expected Output:
Test Accuracy: 0.8
**Congrats!** It seems that your 4-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set.
This is pretty good performance for this task. Nice job!
In the next course on “Improving deep neural networks,” you’ll be able to obtain even higher
accuracy by systematically searching for better hyperparameters: learning_rate, layers_dims, or
num_iterations, for example.
## 6 - Results Analysis
First, take a look at some images the L-layer model labeled incorrectly. This will show a few
mislabeled images.
[46]: print_mislabeled_images(classes, test_x, test_y, pred_test)
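Under the hood, spotting the mislabeled examples is just a comparison between predictions and labels. A minimal sketch of that check, assuming pred_test and test_y are (1, m) arrays of 0/1 values:

```python
# For 0/1 values, prediction and label disagree exactly when their sum is 1.
mislabeled_indices = np.where(pred_test + test_y == 1)[1]
print("Number of mislabeled images: " + str(len(mislabeled_indices)))

# Show the first mislabeled image, reshaped back to (num_px, num_px, 3).
idx = mislabeled_indices[0]
plt.imshow(test_x[:, idx].reshape(num_px, num_px, 3))
```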
A few types of images the model tends to do poorly on include:
• Cat body in an unusual position
• Cat appears against a background of a similar color
• Unusual cat color and species
• Camera angle
• Brightness of the picture
• Scale variation (cat is very large or small in image)
## 7 - Test with your own image (optional/ungraded exercise)
From this point, if you so choose, you can use your own image to test the output of your model. To do that, follow these steps:
1. Click on "File" in the upper bar of this notebook, then click "Open" to go to your Coursera Hub.
2. Add your image to this Jupyter Notebook’s directory, in the “images” folder
3. Change your image’s name in the following code
4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
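The cell that loads and classifies the image could look like the sketch below; the file name my_image.jpg and the label my_label_y are placeholders to change, and it relies on the PIL, numpy, and predict helpers imported in Section 1.

```python
my_image = "my_image.jpg"   # change this to the name of your image file
my_label_y = [1]            # the true class of your image (1 -> cat, 0 -> non-cat)

# Load the image, resize it to (num_px, num_px), and show it.
fname = "images/" + my_image
image = np.array(Image.open(fname).resize((num_px, num_px)))
plt.imshow(image)

# Flatten and standardize it the same way as the training data, then predict.
image = image / 255.
image = image.reshape((1, num_px * num_px * 3)).T
my_predicted_image = predict(image, my_label_y, parameters)

print("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" +
      classes[int(np.squeeze(my_predicted_image))].decode("utf-8") + "\" picture.")
```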
Accuracy: 1.0
y = 1.0, your L-layer model predicts a "cat" picture.