ML:Introduction: Week 1 Lecture Notes
ML:Introduction
What is Machine Learning?
Two definitions of Machine Learning are offered. Arthur Samuel described it as: "the field of study that gives computers the ability to learn without being explicitly programmed." This is an older, informal definition.
Tom Mitchell provides a more modern de nition: "A computer program is said to learn from experience E with respect to some class of tasks T and
performance measure P, if its performance at tasks in T, as measured by P, improves with experience E."
Example: playing checkers.
E = the experience of playing many games of checkers
T = the task of playing checkers
P = the probability that the program will win the next game.
In general, any machine learning problem can be assigned to one of two broad classifications:
supervised learning, OR
unsupervised learning.
Supervised Learning
In supervised learning, we are given a data set and already know what our correct output should look like, having the idea that there is a
relationship between the input and the output.
Supervised learning problems are categorized into "regression" and "classification" problems. In a regression problem, we are trying to predict results within a continuous output, meaning that we are trying to map input variables to some continuous function. In a classification problem, we are instead trying to predict results in a discrete output. In other words, we are trying to map input variables into discrete categories. There is a description of Continuous and Discrete Data on Math is Fun.
Example 1:
Given data about the size of houses on the real estate market, try to predict their price. Price as a function of size is a continuous output, so this is a
regression problem.
We could turn this example into a classification problem by instead making our output about whether the house "sells for more or less than the asking price." Here we are classifying the houses based on price into two discrete categories.
Example 2:
(a) Regression - Given a picture of a person, we have to predict their age on the basis of the given picture.
(b) Classification - Given a picture of a person, we have to predict whether they are of high school, college, or graduate age. Another example of classification: banks have to decide whether or not to give a loan to someone on the basis of their credit history.
Unsupervised Learning
Unsupervised learning, on the other hand, allows us to approach problems with little or no idea what our results should look like. We can derive structure from data where we don't necessarily know the effect of the variables.
We can derive this structure by clustering the data based on relationships among the variables in the data.
With unsupervised learning there is no feedback based on the prediction results, i.e., there is no teacher to correct you.
Example:
Clustering: Take a collection of 1000 essays written on the US Economy, and find a way to automatically group these essays into a small number of groups that are somehow similar or related by different variables, such as word frequency, sentence length, page count, and so on.
Non-clustering: The "Cocktail Party Algorithm", which can find structure in messy data (such as the identification of individual voices and music from a mesh of sounds at a cocktail party (https://en.wikipedia.org/wiki/Cocktail_party_effect)). Here is an answer on Quora to enhance your understanding: https://www.quora.com/What-is-the-difference-between-supervised-and-unsupervised-learning-algorithms
ML:Linear Regression with One Variable
Linear regression with one variable is also known as "univariate linear regression."
Univariate linear regression is used when you want to predict a single output value y from a single input value x. We're doing supervised learning here, so that means we already have an idea about what the input/output cause and effect should be.
$$\hat{y} = h_\theta(x) = \theta_0 + \theta_1 x$$
Note that this is like the equation of a straight line. We give $h_\theta(x)$ values for $\theta_0$ and $\theta_1$ to get our estimated output $\hat{y}$. In other words, we are trying to create a function called $h_\theta$ that is trying to map our input data (the x's) to our output data (the y's).
Example:
Suppose we have the following set of training data:
input x output y
0 4
1 7
2 7
3 8
Now we can make a random guess about our $h_\theta$ function: $\theta_0 = 2$ and $\theta_1 = 2$. The hypothesis function becomes $h_\theta(x) = 2 + 2x$.
So for an input of 1 to our hypothesis, y will be 4. This is off by 3. Note that we will be trying out various values of $\theta_0$ and $\theta_1$ to try to find values which provide the best possible "fit" or the most representative "straight line" through the data points mapped on the x-y plane.
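To make this concrete, here is a minimal Octave sketch (Octave being the environment these notes reference later; the variable names are illustrative) that evaluates this guessed hypothesis on the training data:

```
% Toy training set from the table above
X = [0; 1; 2; 3];          % input x
y = [4; 7; 7; 8];          % output y

% A random guess at the parameters: theta0 = 2, theta1 = 2
theta0 = 2;
theta1 = 2;

% Hypothesis h_theta(x) = theta0 + theta1 * x, applied element-wise
h = theta0 + theta1 * X;

disp(h')                   % predictions: 2 4 6 8 (e.g. off by 3 at x = 1)
```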
Cost Function
We can measure the accuracy of our hypothesis function by using a cost function. This takes an average (actually a fancier version of an average)
of all the results of the hypothesis with inputs from x's compared to the actual output y's.
$$J(\theta_0, \theta_1) = \frac{1}{2m} \sum_{i=1}^{m} (\hat{y}_i - y_i)^2 = \frac{1}{2m} \sum_{i=1}^{m} (h_\theta(x_i) - y_i)^2$$
This function is otherwise called the "Squared error function", or "Mean squared error". The mean is halved ($\frac{1}{2m}$) as a convenience for the computation of the gradient descent, as the derivative term of the square function will cancel out the $\frac{1}{2}$ term.
Now we are able to concretely measure the accuracy of our predictor function against the correct results we have so that we can predict new
results we don't have.
If we try to think of it in visual terms, our training data set is scattered on the x-y plane. We are trying to make a straight line (defined by $h_\theta(x)$) which passes through this scattered set of data. Our objective is to get the best possible line. The best possible line will be such that the average squared vertical distances of the scattered points from the line will be the least. In the best case, the line should pass through all the points of our training data set. In such a case the value of $J(\theta_0, \theta_1)$ will be 0.
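As a minimal sketch of this in Octave (reusing the toy data above; the anonymous-function style is just one way to write it):

```
% Squared error cost J(theta0, theta1) on the toy training set
X = [0; 1; 2; 3];
y = [4; 7; 7; 8];
m = length(y);

J = @(theta0, theta1) (1 / (2 * m)) * sum((theta0 + theta1 * X - y) .^ 2);

disp(J(2, 2))      % cost of the earlier guess: 1.75
disp(J(4.5, 1.4))  % a better-fitting line gives a smaller cost: 0.255
```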
ML:Gradient Descent
So we have our hypothesis function and we have a way of measuring how well it fits the data. Now we need to estimate the parameters in the hypothesis function. That's where gradient descent comes in.
Imagine that we graph our hypothesis function based on its fields $\theta_0$ and $\theta_1$ (actually we are graphing the cost function as a function of the parameter estimates). This can be kind of confusing; we are moving up to a higher level of abstraction. We are not graphing x and y itself, but the parameter range of our hypothesis function and the cost resulting from selecting a particular set of parameters.
We put $\theta_0$ on the x axis and $\theta_1$ on the y axis, with the cost function on the vertical z axis. The points on our graph will be the result of the cost function using our hypothesis with those specific theta parameters.
We will know that we have succeeded when our cost function is at the very bottom of the pits in our graph, i.e. when its value is the minimum.
The way we do this is by taking the derivative (the tangential line to a function) of our cost function. The slope of the tangent is the derivative at that
point and it will give us a direction to move towards. We make steps down the cost function in the direction with the steepest descent, and the size
of each step is determined by the parameter α, which is called the learning rate.
$$\theta_j := \theta_j - \alpha \frac{\partial}{\partial \theta_j} J(\theta_0, \theta_1)$$
For linear regression, this update rule works out to:

$$\theta_0 := \theta_0 - \alpha \frac{1}{m} \sum_{i=1}^{m} (h_\theta(x_i) - y_i)$$

$$\theta_1 := \theta_1 - \alpha \frac{1}{m} \sum_{i=1}^{m} ((h_\theta(x_i) - y_i) x_i)$$
where m is the size of the training set, $\theta_0$ is a constant that will be changing simultaneously with $\theta_1$, and $x_i$, $y_i$ are values of the given training set (data).
Note that we have separated out the two cases for $\theta_j$ into separate equations for $\theta_0$ and $\theta_1$; and that for $\theta_1$ we are multiplying by $x_i$ at the end due to the derivative.
The point of all this is that if we start with a guess for our hypothesis and then repeatedly apply these gradient descent equations, our hypothesis
will become more and more accurate.
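A minimal Octave sketch of that loop on the toy data (the learning rate alpha = 0.1 and the iteration count are illustrative choices, not values from the notes):

```
% Gradient descent for univariate linear regression
X = [0; 1; 2; 3];
y = [4; 7; 7; 8];
m = length(y);

theta0 = 2;  theta1 = 2;   % initial guess
alpha = 0.1;               % learning rate (illustrative)

for iter = 1:1500
  h = theta0 + theta1 * X;             % current predictions
  % Simultaneous update: compute both gradients before changing either theta
  grad0 = (1 / m) * sum(h - y);
  grad1 = (1 / m) * sum((h - y) .* X);
  theta0 = theta0 - alpha * grad0;
  theta1 = theta1 - alpha * grad1;
end

fprintf("theta0 = %.2f, theta1 = %.2f\n", theta0, theta1)  % converges toward 4.70, 1.20
```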
ML:Linear Algebra Review
Matrices and Vectors
Matrices are 2-dimensional arrays:

$$\begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \\ j & k & l \end{bmatrix}$$
The above matrix has four rows and three columns, so it is a 4 x 3 matrix.
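In Octave, for instance, dimensions can be checked with the built-in size function (a quick illustrative check, not part of the original notes):

```
A = [1 2 3; 4 5 6; 7 8 9; 10 11 12];  % a 4 x 3 matrix

size(A)      % ans = 4  3
size(A, 1)   % number of rows: 4
size(A, 2)   % number of columns: 3
```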
A vector is a matrix with one column and many rows:

$$\begin{bmatrix} w \\ x \\ y \\ z \end{bmatrix}$$

So vectors are a subset of matrices. The above is a 4 x 1 vector.
$A_{ij}$ refers to the element in the $i$th row and $j$th column of matrix A.
A vector with 'n' rows is referred to as an 'n'-dimensional vector.
Matrices are usually denoted by uppercase names while vectors are lowercase.
Addition and Scalar Multiplication
Addition and subtraction are element-wise, so the dimensions of the two matrices must match:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} + \begin{bmatrix} w & x \\ y & z \end{bmatrix} = \begin{bmatrix} a+w & b+x \\ c+y & d+z \end{bmatrix}$$

In scalar multiplication, we simply multiply every element by the scalar value:

$$\begin{bmatrix} a & b \\ c & d \end{bmatrix} * x = \begin{bmatrix} a*x & b*x \\ c*x & d*x \end{bmatrix}$$
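Both operations are one-liners in Octave (illustrative values):

```
A = [1 2; 3 4];
B = [5 6; 7 8];

A + B    % [ 6  8; 10 12] -- element-wise; dimensions must match
3 * A    % [ 3  6;  9 12] -- every element scaled by the scalar
```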
Matrix-Vector Multiplication
We map the column of the vector onto each row of the matrix, multiplying each element and summing the result.
$$\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} * \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} a*x + b*y \\ c*x + d*y \\ e*x + f*y \end{bmatrix}$$
The result is a vector. The vector must be the second term of the multiplication. The number of columns of the matrix must equal the number of
rows of the vector.
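A small Octave sketch of the same operation (illustrative values):

```
% (3 x 2) matrix times (2 x 1) vector gives a (3 x 1) vector
A = [1 2; 3 4; 5 6];
v = [7; 8];

A * v    % [1*7+2*8; 3*7+4*8; 5*7+6*8] = [23; 53; 83]
```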
Matrix-Matrix Multiplication
We multiply two matrices by breaking it into several vector multiplications and concatenating the results:
$$\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix} * \begin{bmatrix} w & x \\ y & z \end{bmatrix} = \begin{bmatrix} a*w + b*y & a*x + b*z \\ c*w + d*y & c*x + d*z \\ e*w + f*y & e*x + f*z \end{bmatrix}$$
An m x n matrix multiplied by an n x o matrix results in an m x o matrix. In the above example, a 3 x 2 matrix times a 2 x 2 matrix resulted in a 3 x
2 matrix.
To multiply two matrices, the number of columns of the first matrix must equal the number of rows of the second matrix.
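The same example in Octave (illustrative values):

```
% (3 x 2) matrix times (2 x 2) matrix gives a (3 x 2) matrix
A = [1 2; 3 4; 5 6];
B = [7 8; 9 10];

A * B    % [25 28; 57 64; 89 100]
```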
Matrix Multiplication Properties
Matrices are not commutative: A∗B ≠ B∗A
Matrices are associative: (A∗B)∗C = A∗(B∗C)
The identity matrix, when multiplied by any matrix of the same dimensions, results in the original matrix. It's just like multiplying numbers by 1.
The identity matrix simply has 1's on the diagonal (upper left to lower right diagonal) and 0's elsewhere.
$$\begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}$$
When multiplying the identity matrix after some matrix (A∗I), the square identity matrix should match the other matrix's columns. When
multiplying the identity matrix before some other matrix (I∗A), the square identity matrix should match the other matrix's rows.
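In Octave the identity matrix is built with eye(n), so this dimension rule can be checked directly (a quick sketch):

```
A = [1 2; 3 4; 5 6];   % 3 x 2

eye(3) * A   % I before A matches A's rows (3), result is A again
A * eye(2)   % I after A matches A's columns (2), result is A again
```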
Inverse and Transpose
A non-square matrix does not have an inverse matrix. We can compute the inverse of a matrix in Octave with the pinv(A) function and in MATLAB with the inv(A) function. Matrices that don't have an inverse are singular or degenerate.
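For example (a sketch using the pinv function mentioned above):

```
A = [1 2; 3 4];   % square and non-singular

Ainv = pinv(A);   % [-2.0  1.0; 1.5 -0.5]
A * Ainv          % approximately the 2 x 2 identity matrix
```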
The transposition of a matrix is like rotating the matrix 90° in the clockwise direction and then reversing it. We can compute the transposition of matrices in MATLAB with the transpose(A) function or A':
$$A = \begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix}$$

$$A^T = \begin{bmatrix} a & c & e \\ b & d & f \end{bmatrix}$$
In other words:
$$A_{ij} = A^T_{ji}$$
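And in Octave (illustrative values):

```
A = [1 2; 3 4; 5 6];

A'              % [1 3 5; 2 4 6] -- rows become columns
transpose(A)    % same result
```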