
Machine Learning (15CS73)

Text Book :
Tom M. Mitchell, Machine Learning, India Edition 2013, McGraw Hill
Lecture Notes: https://thyagumachinelearning.blogspot.com/

Dr. Thyagaraju G S


Head Dept of CSE
SDMIT Ujire
Modules
• Module1 : Well posed learning problems, Designing a Learning system, Perspective and
Issues in Machine Learning. Concept Learning: Concept learning task, Concept learning
as search, Find-S algorithm, Version space, Candidate Elimination algorithm, Inductive
Bias.
• Module2: Decision tree representation, Appropriate problems for decision tree learning,
Basic decision tree learning algorithm, hypothesis space search in decision tree learning,
Inductive bias in decision tree learning, Issues in decision tree learning
• Module3: Artificial Neural Networks: Introduction, Neural Network representation,
Appropriate problems, Perceptrons, Backpropagation algorithm.
• Module4: Introduction, Bayes theorem, Bayes theorem and concept learning, ML and LS
error hypothesis, ML for predicting probabilities, MDL principle, Naive Bayes classifier,
Bayesian belief networks, EM algorithm
• Module5: Motivation, Estimating hypothesis accuracy, Basics of sampling theory,
General approach for deriving confidence intervals, Difference in error of two
hypotheses, Comparing learning algorithms. Instance Based Learning: Introduction,
k-nearest neighbor learning, locally weighted regression, radial basis functions,
case-based reasoning. Reinforcement Learning: Introduction, Learning Task, Q Learning.



Assignment
• Watch YouTube videos and understand the following:
• What is the Checkers game?
• History of the Checkers game.
• Rules of the Checkers game.
• How to play the Checkers game.



Module 1 Chapter1
Text Book : Machine Learning by Tom M Mitchell (Chapter 1)

Dr Thyagaraju G S
Head Dept of CSE
SDMIT Ujire
Syllabus
1. Introduction
2. Well Posed Problems
3. Designing a Learning System
4. Perspective and Issues in Machine Learning
5. Summary



1.Introduction
• How to program computers to learn?
Learning: Improving automatically with experience
• Example: Computers learning from medical records which treatments
are most effective for new diseases.



State of the Art / Some successful applications of machine learning
• Learning to recognize spoken words (Lee, 1989; Waibel, 1989).
• Learning to drive an autonomous vehicle (Pomerleau, 1989).
• Learning to classify new astronomical structures (Fayyad et al., 1995).
• Learning to play world-class backgammon (Tesauro 1992, 1995).
• Predicting Recovery rates of pneumonia patients
• Detecting Fraudulent use of credit cards
• Playing games at the level of humans



Some disciplines and examples of their influence on machine learning
• Artificial intelligence
• Bayesian methods
• Computational complexity theory
• Control theory
• Information theory
• Philosophy
• Psychology
• Neurobiology
• Statistics



What is Machine Learning ?

• Machine learning is an application of artificial intelligence (AI) that provides systems
the ability to automatically learn and improve from experience without being explicitly
programmed.
• Machine learning focuses on the development of computer programs that can access
data and use it to learn for themselves.



2.Well-Posed Learning Problems
• The study of Machine learning is about writing software that
improves its own performance with experience
• Definition [Mitchell]: A computer program is said to learn from
experience E with respect to some class of tasks T and performance
measure P, if its performance at tasks in T, as measured by P,
improves with experience E.

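To make the definition concrete in code, here is a minimal sketch (not from the textbook) that records the three components T, P and E of the checkers problem in a small Python data structure; the class name LearningProblem and its fields are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class LearningProblem:
    """Illustrative container for Mitchell's (T, P, E) description of a learning problem."""
    task: str                 # T: the class of tasks
    performance_measure: str  # P: how performance at tasks in T is measured
    training_experience: str  # E: the experience the program learns from

# The checkers example (Example1 below) expressed in this form
checkers = LearningProblem(
    task="playing checkers",
    performance_measure="percentage of games won against opponents",
    training_experience="practice games played against itself",
)
print(checkers)
```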


Example1 : A Checkers Learning Problem
• Task T = Playing checkers
• Performance Measure P = Percentage of games won against opponents
• Training Experience E = Playing practice games against itself
Example2: A Handwriting Recognition Learning Problem
• Task T = Recognizing and classifying handwritten words
• Performance Measure P = Percentage of words correctly classified
• Training Experience E = A database of handwritten words with given classifications



Example3: A Robot Driving Learning Problem
• Task T = Driving on public four-lane highways using vision sensors
• Performance Measure P = Average distance travelled before an error (as judged by a
human overseer)
• Training Experience E = A sequence of images and steering commands recorded while
observing a human driver.



3. DESIGNING A LEARNING SYSTEM
Steps to design a learning system
1. Problem Description
2. Choosing the Training Experience
3. Choosing the Target Function
4. Choosing a Representation for the Target Function
5. Choosing a Function Approximation Algorithm
1. Estimating Training Values
2. Adjusting the Weights
6. The Final Design
3.1 Problem Description: A Checkers Learning Problem

• Task T: Playing Checkers


• Performance Measure P: Percent of games won against opponents
• Training Experience E: To be selected ==> Games Played against itself



3.1 Choosing the Training Experience (1)
• The training experience impacts the success or failure of the learner. Key attributes of
the training experience:
• First Attribute: Will the training experience provide direct or indirect feedback?
• Direct Feedback: system learns from examples of individual checkers board states and the
correct move for each
• Indirect Feedback: Move sequences and final outcomes of various games played
• Credit assignment problem: Value of early states must be inferred from the outcome

• Second Attribute : Degree to which the learner controls the sequence of training
examples
• Teacher selects informative boards and gives correct move
• Learner proposes board states that it finds particularly confusing. Teacher provides correct
moves
• Learner controls board states and (indirect) training classifications



3.1 Choosing the Training Experience (2)
• Third Attribute : How well the training experience represents the
distribution of examples over which the final system performance P
will be measured
• If the training experience consists only of games played against itself, the program may
never encounter crucial board states that are likely to be played by the human checkers
champion
• Most of the theory of machine learning rests on the assumption that the distribution of
training examples is identical to the distribution of test examples



Partial Design of Checkers Learning Program
• A checkers learning problem:
• Task T: playing checkers
• Performance measure P: percent of games won in the world tournament
• Training experience E: games played against itself
• Remaining choices
• The exact type of knowledge to be learned
• A representation for this target knowledge
• A learning mechanism



3.2 Choosing the Target Function (1)
• Target function: ChooseMove : B → M
(where B is the set of legal board states and M is the set of legal moves)
• Alternative target function:
• An evaluation function that assigns a numerical score to any given board state
• V : B → ℝ (where ℝ is the set of real numbers)
• V(b) for an arbitrary board state b in B:
• if b is a final board state that is won, then V(b) = 100
• if b is a final board state that is lost, then V(b) = -100
• if b is a final board state that is drawn, then V(b) = 0
• if b is not a final state, then V(b) = V(b′), where b′ is the best final board state
that can be achieved starting from b and playing optimally until the end of the game
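A minimal sketch (assumed code, not from the textbook) of the value assignments above for final board states; the board encoding as a dict of boolean flags is purely an illustrative assumption, and the non-final case is deliberately left unimplemented because evaluating it would require playing the game out to the end.

```python
def ideal_target_value(board: dict) -> float:
    """Ideal target function V(b), evaluable here only for final board states.

    `board` is a hypothetical encoding: a dict with boolean keys 'final', 'won'
    and 'lost' (a final state that is neither won nor lost is a draw).
    """
    if board["final"]:
        if board["won"]:
            return 100.0
        if board["lost"]:
            return -100.0
        return 0.0  # drawn
    # Non-final states: V(b) = V(b'), the best final state reachable under optimal
    # play. Computing this means searching to the end of the game, which is why
    # the definition is nonoperational.
    raise NotImplementedError("nonoperational for non-final board states")

print(ideal_target_value({"final": True, "won": True, "lost": False}))  # 100.0
```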
3.2 Choosing the Target Function (2)
• V(b) gives a recursive definition for board state b
• Not usable because it is not efficient to compute, except in the first three trivial cases
• i.e., a nonoperational definition
• The goal of learning is to discover an operational description of V
• Learning the target function is often called function approximation
• The learned approximation of V is referred to as V̂



3.3 Choosing a Representation for the Target Function - 1

• The choice of representation involves trade-offs
• Pick a very expressive representation to allow a close approximation to the ideal
target function V
• The more expressive the representation, the more training data is required to
choose among the alternative hypotheses



3.3 Choosing a Representation for the Target Function - 2
• Use a linear combination of the following board features:
• x1: the number of black pieces on the board
• x2: the number of red pieces on the board
• x3: the number of black kings on the board
• x4: the number of red kings on the board
• x5: the number of black pieces threatened by red (i.e.
which can be captured on red's next turn)
• x6: the number of red pieces threatened by black

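As an aside, a feature vector like this could be computed from a board encoding roughly as sketched below; the piece codes ('b', 'B', 'r', 'R'), the flat list representation, and the decision to pass the two threat counts in rather than derive them from the capture rules are all assumptions made here for illustration.

```python
def board_features(board, threatened_by_red=0, threatened_by_black=0):
    """Compute the feature vector [x1, ..., x6] for a board.

    board : hypothetical encoding - a list of piece codes for the occupied squares,
            'b' = black piece, 'B' = black king, 'r' = red piece, 'R' = red king.
    x5 and x6 (threatened pieces) require the game's capture rules, so they are
    supplied by the caller here instead of being computed.
    Whether kings are also counted in x1/x2 is an interpretation assumed here.
    """
    x1 = board.count('b') + board.count('B')   # black pieces on the board
    x2 = board.count('r') + board.count('R')   # red pieces on the board
    x3 = board.count('B')                      # black kings
    x4 = board.count('R')                      # red kings
    return [x1, x2, x3, x4, threatened_by_red, threatened_by_black]

print(board_features(['b', 'b', 'b', 'B']))    # [4, 0, 1, 0, 0, 0]
```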


3.3 Choosing a Representation for the Target Function - 3

V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6

where w0 through w6 are numerical coefficients (weights) to be chosen by the learning algorithm.

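A minimal sketch of this linear evaluation function, assuming the six features are passed as a plain list and the seven weights (including w0) as another; the function name and the example numbers are illustrative, not taken from the textbook.

```python
def v_hat(weights, features):
    """Linear evaluation function V̂(b) = w0 + w1*x1 + ... + w6*x6.

    weights  : [w0, w1, ..., w6]  (7 numbers; w0 is the constant term)
    features : [x1, x2, ..., x6]  (the 6 board features listed above)
    """
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

# Illustrative (made-up) weights and the feature vector used later in the slides
weights = [0.0, 1.0, -1.0, 3.0, -3.0, -0.5, 0.5]
features = [3, 0, 1, 0, 0, 0]
print(v_hat(weights, features))  # 6.0
```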


Partial Design of Checkers Learning Program
• A checkers learning problem:
• Task T: playing checkers
• Performance measure P: percent of games won in the world tournament
• Training experience E: games played against itself
• Target Function: V : Board → ℝ
• Target function representation:
V̂(b) = w0 + w1x1 + w2x2 + w3x3 + w4x4 + w5x5 + w6x6



3.4 Choosing a Function Approximation Algorithm
• To learn V̂ we require a set of training examples, each describing a specific board state b
and its training value Vtrain(b)
• i.e., each training example is an ordered pair ⟨b, Vtrain(b)⟩
• For example:

⟨⟨x1 = 3, x2 = 0, x3 = 1, x4 = 0, x5 = 0, x6 = 0⟩, +100⟩
(a final board state where black has won: x2 = 0 means no red pieces remain)



3.4.1 Estimating Training Values
• Need to assign specific scores to intermediate board states
• Assign the training value Vtrain(b) for an intermediate board state b using the learner's
current approximation V̂ applied to Successor(b), the next state in which it is again the
program's turn to move:

Vtrain(b) ← V̂(Successor(b))
• Simple and successful approach
• More accurate for states closer to end states

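A short sketch of this estimate; obtaining Successor(b) is a game-engine detail assumed away here, so the function simply takes the successor's feature vector, and v_hat is the same illustrative linear evaluation as in the earlier sketch.

```python
def v_hat(weights, features):
    """Linear evaluation V̂(b) = w0 + w1*x1 + ... + w6*x6 (as in the earlier sketch)."""
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

def estimate_training_value(successor_features, weights):
    """Vtrain(b) <- V̂(Successor(b)): score an intermediate board b with the learner's
    *current* approximation applied to the features of Successor(b), the next state
    in which it is again the program's turn to move."""
    return v_hat(weights, successor_features)

# Illustrative call with made-up numbers
print(estimate_training_value([3, 0, 1, 0, 0, 0], [0.0, 1.0, -1.0, 3.0, -3.0, -0.5, 0.5]))  # 6.0
```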


3.4.2 Adjusting the Weights
• Choose the weights wi to best fit the set of training examples
• Minimize the squared error E between the training values and the values predicted by
the hypothesis V̂:

E ≡ Σ (Vtrain(b) − V̂(b))² , summed over all training examples ⟨b, Vtrain(b)⟩

• We require an algorithm that:
• will incrementally refine the weights as new training examples become available
• will be robust to errors in these estimated training values
• Least Mean Squares (LMS) is one such algorithm



LMS Weight Update Rule
• For each training example ⟨b, Vtrain(b)⟩:
• Use the current weights to calculate V̂(b)
• For each weight wi, update it as

wi ← wi + η (Vtrain(b) − V̂(b)) xi

• where
• η is a small constant (e.g. 0.1) that moderates the size of the weight update

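A minimal sketch of one LMS pass over a batch of training examples, consistent with the update rule above; the helper v_hat, the learning rate eta and the single made-up training example are illustrative assumptions, not code from the textbook.

```python
def v_hat(weights, features):
    """Linear evaluation V̂(b) = w0 + w1*x1 + ... + w6*x6."""
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

def lms_update(weights, training_examples, eta=0.1):
    """One pass of the LMS rule: wi <- wi + eta * (Vtrain(b) - V̂(b)) * xi.

    training_examples : list of (features, v_train) pairs, features = [x1..x6].
    The constant weight w0 is updated as if its feature were xi = 1.
    Returns a new weight list.
    """
    w = list(weights)
    for features, v_train in training_examples:
        error = v_train - v_hat(w, features)
        w[0] += eta * error                      # bias term, xi = 1
        for i, x in enumerate(features, start=1):
            w[i] += eta * error * x
    return w

# Illustrative example: a single final-state training example worth +100
examples = [([3, 0, 1, 0, 0, 0], 100.0)]
w = [0.0] * 7
for _ in range(20):                              # repeated passes shrink the error
    w = lms_update(w, examples)
print(round(v_hat(w, [3, 0, 1, 0, 0, 0]), 2))    # ~100.0: V̂ now matches Vtrain
```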


3.5 Final Design
The final design consists of four modules connected in a loop:
• Experiment Generator: takes the current hypothesis (V̂) and outputs a new problem
(an initial game board) for the Performance System to explore
• Performance System: plays the game and outputs the solution trace (game history)
• Critic: takes the game history and outputs a set of training examples
⟨b1, Vtrain(b1)⟩, ⟨b2, Vtrain(b2)⟩, ...
• Generalizer: takes the training examples and outputs the updated hypothesis V̂,
which is fed back to the Experiment Generator

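A schematic sketch of how these four modules could be wired together; every function below is a hypothetical stand-in meant only to show the data flow of the figure (hypothesis → new problem → game trace → training examples → updated hypothesis), not a real checkers engine.

```python
import random

def v_hat(weights, features):
    """Linear evaluation V̂(b) = w0 + w1*x1 + ... + w6*x6 (the chosen representation)."""
    return weights[0] + sum(w * x for w, x in zip(weights[1:], features))

def experiment_generator(hypothesis):
    """Outputs a new problem: here always the standard opening position
    (12 black pieces, 12 red pieces, no kings, no threatened pieces)."""
    return [12, 12, 0, 0, 0, 0]

def performance_system(initial_board, hypothesis):
    """Plays one game guided by the hypothesis and returns the solution trace
    (game history). A real system would pick moves by maximizing V̂ over legal
    successors; here the trace is fabricated from random feature vectors."""
    return [initial_board] + [[random.randint(0, 4) for _ in range(6)] for _ in range(3)]

def critic(game_history, weights):
    """Outputs training examples <b, Vtrain(b)>: Vtrain(b) <- V̂(Successor(b)) for
    intermediate states, and the game outcome (+100, a win is assumed) at the end."""
    examples = [(b, v_hat(weights, succ))
                for b, succ in zip(game_history, game_history[1:])]
    examples.append((game_history[-1], 100.0))
    return examples

def generalizer(training_examples, weights, eta=0.01):
    """One LMS pass over the training examples; outputs the updated hypothesis."""
    w = list(weights)
    for features, v_train in training_examples:
        error = v_train - v_hat(w, features)
        w[0] += eta * error
        for i, x in enumerate(features, start=1):
            w[i] += eta * error * x
    return w

# The loop from the final-design figure
hypothesis = [0.0] * 7
for _ in range(5):
    new_problem = experiment_generator(hypothesis)
    trace = performance_system(new_problem, hypothesis)
    training_examples = critic(trace, hypothesis)
    hypothesis = generalizer(training_examples, hypothesis)
print(hypothesis)
```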


Summary of choices in designing the checkers learning program



4.0 Perspectives and Issues in Machine Learning



4.1 Perspectives in Machine Learning

• One useful perspective on machine learning is that it involves searching a very large
space of possible hypotheses to determine one that best fits the observed data and any
prior knowledge held by the learner.



4.2 Issues in Machine Learning (i.e., Generalization)
1. What algorithms exist for learning general
target functions from specific training
examples?
2. In what settings will particular algorithms
converge to the desired function, given
sufficient training data?
3. Which algorithms perform best for which
types of problems and representations?
4.2 Issues in Machine Learning (i.e., Generalization)
4. How much training data is sufficient?
5. What general bounds can be found to relate the confidence in learned hypotheses to
the amount of training experience and the character of the learner's hypothesis space?
6. When and how can prior knowledge held by
the learner guide the process of generalizing
from examples?
7. Can prior knowledge be helpful even when it is
only approximately correct?
4.2 Issues in Machine Learning (i.e., Generalization)
8. What is the best strategy for choosing a useful next
training experience, and how does the choice of this
strategy alter the complexity of the learning
problem?
9. What is the best way to reduce the learning task to
one or more function approximation problems?
10. How can the learner automatically alter its
representation to improve its ability to represent
and learn the target function?
How to play the Checkers game?



Summary (Continued)
1. Introduction
2. Well Posed Problems
3. Designing a Learning System
4. Perspective and Issues in Machine Learning



Question Bank M1.1
1. Define Machine Learning. Discuss with examples why machine learning is important.
2. Discuss with examples some useful applications of machine learning.
3. Explain how some areas/disciplines have influenced machine learning.
4. Define a learning program for a given problem. Describe the following
problems with respect to Task, Performance and Experience:
1. Checkers Learning Problem
2. Handwriting Recognition Problem
3. Robot Driving Learning Problem
5. Describe in detail all the steps involved in designing a learning system.
6. Discuss the perspectives and issues in Machine Learning.

