Classification: Basic Concepts, Decision Trees, and Model Evaluation
Data Warehousing and Mining
Lecture 5
by
Hossen Asiful Mustafa
Estimating Generalization Errors
Re-substitution errors: error on training data ( e(t) )
Generalization errors: error on test data ( e'(t) )
Methods for estimating generalization errors:
Optimistic approach: e'(t) = e(t)
Pessimistic approach (see the sketch below):
For each leaf node: e'(t) = e(t) + 0.5
Total errors: e'(T) = e(T) + N × 0.5 (N: number of leaf nodes)
For a tree with 30 leaf nodes and 10 errors on training
(out of 1000 instances):
Training error = 10/1000 = 1%
Generalization error = (10 + 30 × 0.5)/1000 = 2.5%
Reduced error pruning (REP):
uses validation data set to estimate generalization
error
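To make the pessimistic estimate concrete, here is a minimal Python sketch that reproduces the example above (30 leaves, 10 training errors, 1000 instances); the 0.5 penalty per leaf is the value assumed on the slide.

```python
def pessimistic_error(train_errors, n_leaves, n_instances, penalty=0.5):
    """Pessimistic generalization-error estimate: e'(T) = (e(T) + N * penalty) / n."""
    return (train_errors + n_leaves * penalty) / n_instances

# Example from the slide: 30 leaf nodes, 10 training errors, 1000 instances
print(10 / 1000)                         # training error = 0.01 (1%)
print(pessimistic_error(10, 30, 1000))   # generalization estimate = 0.025 (2.5%)
```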
Occam's Razor
Given two models of similar generalization
errors, one should prefer the simpler model
over the more complex model
For complex models, there is a greater chance
that they were fitted accidentally by errors in the data
Therefore, one should include model
complexity when evaluating a model
Minimum Description Length (MDL)
[Figure: a sender holds records X1, ..., Xn with known class labels y (1, 0, 0, 1, ...); a receiver holds the same records with unknown labels (?). The sender transmits a decision tree (with splits such as A?, B?, C?) plus a list of exceptions so that the receiver can reconstruct the labels.]
Cost(Model,Data) = Cost(Data|Model) + Cost(Model)
Cost is the number of bits needed for encoding.
Search for the least costly model.
Cost(Data|Model) encodes the misclassification errors.
Cost(Model) uses node encoding (number of children) plus
splitting condition encoding.
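A hedged sketch of how the two MDL terms could be tallied in practice; the specific bit costs chosen here (log2 of the number of attributes per internal node, log2 of the number of classes per leaf, log2 of the number of records per misclassified record) are common textbook choices, not something fixed by the slide.

```python
import math

def mdl_cost(n_internal, n_leaves, n_errors, n_attributes, n_records, n_classes=2):
    """Cost(Model, Data) = Cost(Data|Model) + Cost(Model), measured in bits.

    Assumed encoding: each internal node records which attribute it splits on,
    each leaf records a class label, and each misclassified record is listed
    explicitly as an exception.
    """
    cost_model = (n_internal * math.log2(n_attributes)
                  + n_leaves * math.log2(n_classes))
    cost_data_given_model = n_errors * math.log2(n_records)
    return cost_data_given_model + cost_model

# Compare a small tree with more errors against a large tree with fewer errors
print(mdl_cost(n_internal=3, n_leaves=4, n_errors=10, n_attributes=16, n_records=1000))
print(mdl_cost(n_internal=15, n_leaves=16, n_errors=2, n_attributes=16, n_records=1000))
```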
How to Address Overfitting
Pre-Pruning (Early Stopping Rule)
Stop the algorithm before it becomes a fully-grown tree
Typical stopping conditions for a node:
Stop if all instances belong to the same class
Stop if all the attribute values are the same
More restrictive conditions:
Stop if number of instances is less than some user-specified threshold
Stop if the class distribution of the instances is independent of the available
features (e.g., using the χ² test)
Stop if expanding the current node does not improve impurity
measures (e.g., Gini or information gain).
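For reference, scikit-learn's DecisionTreeClassifier exposes early-stopping parameters that roughly correspond to the restrictive conditions above; this is an illustrative mapping, not the lecture's algorithm.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# Pre-pruning: growth stops when a node has too few instances, or when the best
# split does not reduce impurity by at least the given threshold.
tree = DecisionTreeClassifier(
    criterion="gini",
    min_samples_split=20,        # stop if number of instances < user-specified threshold
    min_impurity_decrease=0.01,  # stop if expanding does not improve impurity enough
    random_state=0,
)
tree.fit(X, y)
print("number of leaves:", tree.get_n_leaves())
```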
How to Address Overfitting
Post-pruning
Grow decision tree to its entirety
Trim the nodes of the decision tree in a bottom-up
fashion
If generalization error improves after trimming,
replace sub-tree by a leaf node.
Class label of leaf node is determined from
majority class of instances in the sub-tree
Can use MDL for post-pruning
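scikit-learn post-prunes with a different criterion (minimal cost-complexity pruning controlled by ccp_alpha) rather than the pessimistic-error rule, but the workflow is the same: grow the tree to its entirety, then keep a pruned subtree only if validation performance does not suffer. A rough sketch:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Grow the tree fully, then evaluate progressively pruned versions bottom-up.
full_tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
path = full_tree.cost_complexity_pruning_path(X_train, y_train)

best_alpha, best_score = 0.0, 0.0
for alpha in path.ccp_alphas:
    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_train, y_train)
    score = pruned.score(X_val, y_val)   # accept pruning only if validation accuracy holds up
    if score >= best_score:
        best_alpha, best_score = alpha, score

print("chosen ccp_alpha:", best_alpha, "validation accuracy:", best_score)
```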
Example of Post-Pruning
Node before splitting: Class = Yes: 20, Class = No: 10, Error = 10/30
Training error (before splitting) = 10/30
Pessimistic error (before splitting) = (10 + 0.5)/30 = 10.5/30
The node is split on attribute A into four children A1, A2, A3, A4.
Training error (after splitting) = 9/30
Pessimistic error (after splitting) = (9 + 4 × 0.5)/30 = 11/30
Since 11/30 > 10.5/30, PRUNE the subtree!
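The prune/keep decision in this example reduces to comparing the two pessimistic estimates; a minimal check of the arithmetic:

```python
def pessimistic(errors, n_leaves, n, penalty=0.5):
    """Pessimistic error estimate for a (sub)tree with the given leaf count."""
    return (errors + n_leaves * penalty) / n

before = pessimistic(10, 1, 30)   # single leaf: (10 + 0.5)/30
after = pessimistic(9, 4, 30)     # four leaves after splitting on A: (9 + 4*0.5)/30
print(before, after, "PRUNE" if after >= before else "KEEP SPLIT")
```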
Handling Missing Attribute Values
Missing values affect decision tree
construction in three different ways:
Affects how impurity measures are computed
Affects how to distribute instance with missing
value to child nodes
Affects how a test instance with missing value is
classified
Computing Impurity Measure
Before splitting: Entropy(Parent) = -0.3 log(0.3) - 0.7 log(0.7) = 0.8813

Training data (Refund is missing for Tid 10):

Tid  Refund  Marital Status  Taxable Income  Class
1    Yes     Single          125K            No
2    No      Married         100K            No
3    No      Single          70K             No
4    Yes     Married         120K            No
5    No      Divorced        95K             Yes
6    No      Married         60K             No
7    Yes     Divorced        220K            No
8    No      Single          85K             Yes
9    No      Married         75K             No
10   ?       Single          90K             Yes

Split on Refund:

              Class=Yes  Class=No
Refund=Yes    0          3
Refund=No     2          4
Refund=?      1          0

Entropy(Refund=Yes) = 0
Entropy(Refund=No) = -(2/6) log(2/6) - (4/6) log(4/6) = 0.9183
Entropy(Children) = 0.3 × 0 + 0.6 × 0.9183 = 0.551
Gain = 0.9 × (0.8813 - 0.551) = 0.9 × 0.3303 ≈ 0.297
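A small sketch that reproduces the numbers above: the child entropies use only the records whose Refund value is known, and the gain is scaled by the fraction of records with a known Refund (9/10).

```python
import math

def entropy(counts):
    """Entropy of a class-count vector, in bits."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c > 0)

parent = entropy([3, 7])              # 3 Yes, 7 No overall -> 0.8813
e_yes = entropy([0, 3])               # Refund=Yes: 0 Yes, 3 No -> 0
e_no = entropy([2, 4])                # Refund=No: 2 Yes, 4 No -> 0.9183
children = 0.3 * e_yes + 0.6 * e_no   # weights are fractions of all 10 records
gain = 0.9 * (parent - children)      # scaled by fraction with known Refund
print(round(parent, 4), round(children, 4), round(gain, 4))
```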
Distribute Instances
The nine records with a known Refund value (Tid 1-9: three with Refund=Yes, six with Refund=No) go directly to the corresponding child; record 10 (Refund = ?, Class = Yes) must be distributed.

Split on Refund:
Refund=Yes child: Class=Yes: 0 + 3/9, Class=No: 3
Refund=No child:  Class=Yes: 2 + 6/9, Class=No: 4

Probability that Refund=Yes is 3/9
Probability that Refund=No is 6/9
Assign record 10 to the left (Refund=Yes) child with weight = 3/9 and to the right (Refund=No) child with weight = 6/9.
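A sketch of the bookkeeping: a training record whose split attribute is missing is sent to every child with a fractional weight proportional to the number of records with the corresponding known value.

```python
# Counts of records with known Refund values at this node
child_counts = {"Refund=Yes": 3, "Refund=No": 6}
total_known = sum(child_counts.values())

# Record 10 (Refund missing, Class=Yes) is split across both children
record_weight = 1.0
for child, count in child_counts.items():
    w = record_weight * count / total_known
    print(f"send record 10 to {child} with weight {w:.3f}")   # 3/9 and 6/9
```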
Classify Instances
New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Decision tree:
Refund = Yes → NO
Refund = No → MarSt:
  MarSt = Married → NO
  MarSt = Single, Divorced → TaxInc:
    TaxInc < 80K → NO
    TaxInc > 80K → YES

Class counts at the MarSt node (including the fractional 6/9 weight carried over from training) give a total weight of 3.67 for Married and 3 for {Single, Divorced}, out of 6.67 overall (Class=Yes total: 2.67).

Probability that Marital Status = Married is 3.67/6.67
Probability that Marital Status = {Single, Divorced} is 3/6.67
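The same idea applies at classification time: the test record is sent down every branch of the node whose attribute is missing, and the outcomes are combined with the branch weights quoted above. A minimal sketch, assuming the leaf outcomes follow the tree as drawn (Married leads to NO; Taxable Income = 85K > 80K leads to YES):

```python
# Branch weights for the test record whose Marital Status is missing (from the slide)
branch_weights = {"Married": 3.67 / 6.67, "Single,Divorced": 3.0 / 6.67}

# Class reached by following each branch with the rest of the record (TaxInc = 85K)
branch_outcomes = {"Married": "NO",            # Married branch is a NO leaf
                   "Single,Divorced": "YES"}   # TaxInc = 85K > 80K leads to YES

combined = {"YES": 0.0, "NO": 0.0}
for branch, weight in branch_weights.items():
    combined[branch_outcomes[branch]] += weight

print(combined, "->", max(combined, key=combined.get))
```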
Other Issues
Data Fragmentation
Search Strategy
Expressiveness
Tree Replication
Data Fragmentation
Number of instances gets smaller as you
traverse down the tree
Number of instances at the leaf nodes could
be too small to make any statistically
significant decision
Search Strategy
Finding an optimal decision tree is NP-hard
The algorithm presented so far uses a greedy,
top-down, recursive partitioning strategy to
induce a reasonable solution
Other strategies?
Bottom-up
Bi-directional
Expressiveness
Decision trees provide an expressive representation for learning
discrete-valued functions
But they do not generalize well to certain types of Boolean
functions
Example: parity function:
Class = 1 if there is an even number of Boolean attributes with truth
value = True
Class = 0 if there is an odd number of Boolean attributes with truth
value = True
For accurate modeling, must have a complete tree
Not expressive enough for modeling continuous variables
Particularly when test condition involves only a single
attribute at-a-time
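To see why parity is hard for single-attribute tests, note that no individual Boolean attribute carries any information about the class on its own; a quick empirical check (the enumeration below is illustrative):

```python
import itertools

n_bits = 4
# Class = 1 iff the number of True attributes is even (the parity function above)
data = [(bits, int(sum(bits) % 2 == 0)) for bits in itertools.product([0, 1], repeat=n_bits)]

# Any single attribute splits the data into halves with a perfect 50/50 class mix,
# so impurity-based splitting criteria see zero gain at the root; an accurate
# tree must therefore test every attribute (a complete tree with 2^n_bits leaves).
for i in range(n_bits):
    left = [label for bits, label in data if bits[i] == 0]
    print(f"attribute {i}: P(class=1 | bit=0) = {sum(left) / len(left):.2f}")
```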
Decision Boundary
[Figure: a 2-D data set with attributes x and y in [0, 1], partitioned by the tree x < 0.43?, followed by the tests y < 0.33? and y < 0.47? on its two branches; each leaf region contains instances of a single class.]
The border between two neighboring regions of different classes is
known as the decision boundary
The decision boundary is parallel to the axes because each test condition
involves a single attribute at a time
Oblique Decision Trees
[Figure: an oblique split x + y < 1 separating Class = + from Class = -]
Test condition may involve multiple attributes
More expressive representation
Finding optimal test condition is computationally expensive
Tree Replication
Same subtree appears in multiple branches
Model Evaluation
Metrics for Performance Evaluation
How to evaluate the performance of a model?
Methods for Performance Evaluation
How to obtain reliable estimates?
Methods for Model Comparison
How to compare the relative performance among
competing models?
Metrics for Performance Evaluation
Focus on the predictive capability of a model
Rather than how fast it takes to classify or build
models, scalability, etc.
Confusion Matrix:

                        PREDICTED CLASS
                        Class=Yes    Class=No
ACTUAL     Class=Yes    a (TP)       b (FN)
CLASS      Class=No     c (FP)       d (TN)

a: TP (true positive)
b: FN (false negative)
c: FP (false positive)
d: TN (true negative)
Metrics for Performance Evaluation
                        PREDICTED CLASS
                        Class=Yes    Class=No
ACTUAL     Class=Yes    a (TP)       b (FN)
CLASS      Class=No     c (FP)       d (TN)

Most widely-used metric:
Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
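A minimal helper using the (a, b, c, d) cell names from the matrix above; the counts in the example call are made up.

```python
def accuracy(a, b, c, d):
    """Accuracy from confusion-matrix cells: a=TP, b=FN, c=FP, d=TN."""
    return (a + d) / (a + b + c + d)

print(accuracy(a=50, b=10, c=5, d=35))   # hypothetical counts -> 0.85
```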
Limitation of Accuracy
Consider a 2-class problem
Number of Class 0 examples = 9990
Number of Class 1 examples = 10
If model predicts everything to be class 0,
accuracy is 9990/10000 = 99.9 %
Accuracy is misleading because model does not
detect any class 1 example
Cost Matrix
                        PREDICTED CLASS
C(i|j)                  Class=Yes    Class=No
ACTUAL     Class=Yes    C(Yes|Yes)   C(No|Yes)
CLASS      Class=No     C(Yes|No)    C(No|No)

C(i|j): cost of misclassifying a class j example as class i
Computing Cost of Classification
Cost Matrix:

                        PREDICTED CLASS
C(i|j)                  +       -
ACTUAL     +            -1      100
CLASS      -            1       0

Model M1:

                        PREDICTED CLASS
                        +       -
ACTUAL     +            150     40
CLASS      -            60      250

Accuracy = 80%
Cost = 3910

Model M2:

                        PREDICTED CLASS
                        +       -
ACTUAL     +            250     45
CLASS      -            5       200

Accuracy = 90%
Cost = 4255
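The costs quoted for M1 and M2 follow from an element-wise product of each confusion matrix with the cost matrix; a small check using the values as reconstructed above:

```python
import numpy as np

cost_matrix = np.array([[-1, 100],    # actual +: C(+|+), C(-|+)
                        [  1,   0]])  # actual -: C(+|-), C(-|-)

m1 = np.array([[150, 40],
               [ 60, 250]])
m2 = np.array([[250, 45],
               [  5, 200]])

for name, cm in [("M1", m1), ("M2", m2)]:
    acc = np.trace(cm) / cm.sum()          # (TP + TN) / total
    cost = (cm * cost_matrix).sum()        # weighted sum of all four cells
    print(name, "accuracy =", acc, "cost =", cost)   # 0.80 / 3910 and 0.90 / 4255
```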
Cost vs Accuracy
Count:

                        PREDICTED CLASS
                        Class=Yes    Class=No
ACTUAL     Class=Yes    a            b
CLASS      Class=No     c            d

N = a + b + c + d
Accuracy = (a + d)/N

Cost:

                        PREDICTED CLASS
                        Class=Yes    Class=No
ACTUAL     Class=Yes    p            q
CLASS      Class=No     q            p

Accuracy is proportional to cost if
1. C(Yes|No) = C(No|Yes) = q
2. C(Yes|Yes) = C(No|No) = p

Cost = p(a + d) + q(b + c)
     = p(a + d) + q(N - a - d)
     = qN - (q - p)(a + d)
     = N[q - (q - p) × Accuracy]
Cost-Sensitive Measures
Precision (p) = a / (a + c)
Recall (r) = a / (a + b)
F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c)

Precision is biased towards C(Yes|Yes) & C(Yes|No)
Recall is biased towards C(Yes|Yes) & C(No|Yes)
F-measure is biased towards all except C(No|No)

Weighted Accuracy = (w1·a + w4·d) / (w1·a + w2·b + w3·c + w4·d)
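The same (a, b, c, d) cells give these measures directly; a small sketch with made-up counts:

```python
def precision(a, c):
    return a / (a + c)

def recall(a, b):
    return a / (a + b)

def f_measure(a, b, c):
    return 2 * a / (2 * a + b + c)

def weighted_accuracy(a, b, c, d, w1, w2, w3, w4):
    return (w1 * a + w4 * d) / (w1 * a + w2 * b + w3 * c + w4 * d)

a, b, c, d = 50, 10, 5, 35   # hypothetical TP, FN, FP, TN
print(precision(a, c), recall(a, b), f_measure(a, b, c))
print(weighted_accuracy(a, b, c, d, w1=1, w2=1, w3=1, w4=1))  # reduces to plain accuracy
```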
Model Evaluation
Metrics for Performance Evaluation
How to evaluate the performance of a model?
Methods for Performance Evaluation
How to obtain reliable estimates?
Methods for Model Comparison
How to compare the relative performance among
competing models?
Methods for Performance Evaluation
How to obtain a reliable estimate of
performance?
Performance of a model may depend on other
factors besides the learning algorithm:
Class distribution
Cost of misclassification
Size of training and test sets
Learning Curve
Learning curve shows
how accuracy changes
with varying sample size
Requires a sampling
schedule for creating
learning curve:
Arithmetic sampling
(Langley, et al)
Geometric sampling
(Provost et al)
Effect of small sample size:
- Bias in the estimate
- Variance of estimate
Methods of Estimation
Holdout
Reserve 2/3 for training and 1/3 for testing
Random subsampling
Repeated holdout
Cross validation
Partition data into k disjoint subsets
k-fold: train on k-1 partitions, test on the remaining one
Leave-one-out: k=n
Stratified sampling
oversampling vs undersampling
Bootstrap
Sampling with replacement
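A hedged scikit-learn sketch of two of the estimation methods listed above (holdout and stratified k-fold cross-validation), using a bundled dataset for illustration:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedKFold, cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
model = DecisionTreeClassifier(random_state=0)

# Holdout: reserve 2/3 for training and 1/3 for testing
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=1/3, stratify=y, random_state=0)
print("holdout accuracy:", model.fit(X_tr, y_tr).score(X_te, y_te))

# Stratified k-fold cross-validation: k disjoint partitions, each used once as the test set
scores = cross_val_score(model, X, y,
                         cv=StratifiedKFold(n_splits=10, shuffle=True, random_state=0))
print("10-fold CV accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```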
Model Evaluation
Metrics for Performance Evaluation
How to evaluate the performance of a model?
Methods for Performance Evaluation
How to obtain reliable estimates?
Methods for Model Comparison
How to compare the relative performance among
competing models?
ROC (Receiver Operating Characteristic)
Developed in 1950s for signal detection
theory to analyze noisy signals
Characterize the trade-off between positive hits
and false alarms
ROC curve plots TP rate (on the y-axis) against FP rate
(on the x-axis)
Performance of each classifier represented as
a point on the ROC curve
changing the threshold of algorithm, sample
distribution or cost matrix changes the location of
the point
ROC Curve
- 1-dimensional data set containing 2 classes (positive and negative)
- any point located at x > t is classified as positive
At threshold t:
TPR = 0.5, FNR = 0.5, FPR = 0.12, TNR = 0.88
ROC Curve
(TPR, FPR):
(0,0): declare everything
to be negative class
(1,1): declare everything
to be positive class
(1,0): ideal
Diagonal line:
Random guessing
Below diagonal line:
prediction is opposite of the
true class
Using ROC for Model Comparison
No model consistently
outperforms the other
M1 is better for
small FPR
M2 is better for
large FPR
Area Under the ROC Curve (AUC):
Ideal: Area = 1
Random guess: Area = 0.5
How to Construct an ROC curve
[Table: ten test instances (1-10) with posterior probabilities P(+|A) = 0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25 and their true class labels.]
Use classifier that produces
posterior probability for each
test instance P(+|A)
Sort the instances according
to P(+|A) in decreasing order
Apply threshold at each
unique value of P(+|A)
Count the number of TP, FP,
TN, FN at each threshold
TP rate, TPR = TP/(TP+FN)
FP rate, FPR = FP/(FP + TN)
How to construct an ROC curve
[Table and plot: for each threshold on P(+|A), from 1.00 down through 0.95, 0.93, 0.87, 0.85, 0.76, 0.53, 0.43, 0.25, the counts of TP, FP, TN, FN and the resulting TPR and FPR; plotting the (FPR, TPR) pairs traces out the ROC curve.]
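The construction above (sort by P(+|A), apply a threshold at each unique score, count TP/FP/TN/FN) can be sketched as follows; the class labels are hypothetical because the true-class column of the table did not survive extraction.

```python
def roc_points(scores, labels):
    """Return (FPR, TPR) pairs from thresholding at each unique value of P(+|A)."""
    P = sum(labels)
    N = len(labels) - P
    points = []
    # Start above the highest score (everything negative), then sweep downwards
    for t in [float("inf")] + sorted(set(scores), reverse=True):
        tp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= t and y == 0)
        points.append((fp / N, tp / P))   # FPR on the x-axis, TPR on the y-axis
    return points

scores = [0.95, 0.93, 0.87, 0.85, 0.85, 0.85, 0.76, 0.53, 0.43, 0.25]
labels = [1, 1, 0, 0, 1, 0, 1, 0, 1, 0]   # hypothetical true classes
print(roc_points(scores, labels))
```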
Test of Significance
Given two models:
Model M1: accuracy = 85%, tested on 30 instances
Model M2: accuracy = 75%, tested on 5000 instances
Can we say M1 is better than M2?
How much confidence can we place on accuracy of M1 and
M2?
Can the difference in performance measure be explained as
a result of random fluctuations in the test set?
Confidence Interval for Accuracy
Prediction can be regarded as a Bernoulli trial
A Bernoulli trial has 2 possible outcomes
Possible outcomes for prediction: correct or wrong
Collection of Bernoulli trials has a Binomial distribution:
x ~ Bin(N, p), where x is the number of correct predictions
e.g.: toss a fair coin 50 times, how many heads would turn up?
Expected number of heads = N × p = 50 × 0.5 = 25
Given x (# of correct predictions) or equivalently,
acc=x/N, and N (# of test instances),
Can we predict p (true accuracy of model)?
Confidence Interval for Accuracy
For large test sets (N > 30),
acc has a normal distribution
with mean p and variance p(1 - p)/N:

P( Z_{α/2} < (acc - p) / sqrt( p(1 - p)/N ) < Z_{1-α/2} ) = 1 - α

(the area under the normal curve between Z_{α/2} and Z_{1-α/2} is 1 - α)

Confidence interval for p:

p = ( 2·N·acc + Z²_{α/2} ± Z_{α/2} · sqrt( Z²_{α/2} + 4·N·acc - 4·N·acc² ) ) / ( 2·(N + Z²_{α/2}) )
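A sketch that evaluates the interval above; scipy's norm.ppf supplies Z_{α/2}, and the example call reproduces the N = 100 row of the table that follows.

```python
from math import sqrt
from scipy.stats import norm

def accuracy_confidence_interval(acc, n, confidence=0.95):
    """Confidence interval for the true accuracy p, using the formula above."""
    z = norm.ppf(1 - (1 - confidence) / 2)          # e.g. 1.96 for 95% confidence
    center = 2 * n * acc + z**2
    spread = z * sqrt(z**2 + 4 * n * acc - 4 * n * acc**2)
    denom = 2 * (n + z**2)
    return (center - spread) / denom, (center + spread) / denom

print(accuracy_confidence_interval(0.8, 100))   # roughly (0.711, 0.866)
```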
Confidence Interval for Accuracy
Consider a model that produces an accuracy
of 80% when evaluated on 100 test instances:
N = 100, acc = 0.8
Let 1 - α = 0.95 (95% confidence)
From probability table, Z_{α/2} = 1.96

1-α    Z
0.99   2.58
0.98   2.33
0.95   1.96
0.90   1.65

N      p(lower)   p(upper)
50     0.670      0.888
100    0.711      0.866
500    0.763      0.833
1000   0.774      0.824
5000   0.789      0.811
Comparing Performance of 2 Models
Given two models, say M1 and M2, which is
better?
M1 is tested on D1 (size=n1), found error rate = e1
M2 is tested on D2 (size=n2), found error rate = e2
Assume D1 and D2 are independent
If n1 and n2 are sufficiently large, then
e1 ~ N(μ1, σ1)
e2 ~ N(μ2, σ2)
Approximate: σ_i² ≈ e_i(1 - e_i) / n_i
Comparing Performance of 2 Models
To test if the performance difference is statistically
significant: d = e1 - e2
d ~ N(d_t, σ_t) where d_t is the true difference
Since D1 and D2 are independent, their variances add up:
σ_t² ≈ σ1² + σ2² ≈ e1(1 - e1)/n1 + e2(1 - e2)/n2
At (1 - α) confidence level,
d_t = d ± Z_{α/2} · σ_t
An Illustrative Example
Given: M1: n1 = 30, e1 = 0.15
       M2: n2 = 5000, e2 = 0.25
d = |e2 - e1| = 0.1 (2-sided test)
σ_d² = 0.15(1 - 0.15)/30 + 0.25(1 - 0.25)/5000 = 0.0043
At 95% confidence level, Z_{α/2} = 1.96
d_t = 0.100 ± 1.96 × sqrt(0.0043) = 0.100 ± 0.128
=> Interval contains 0 => difference may not be
statistically significant
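A sketch of this test that reproduces the interval computed above:

```python
from math import sqrt

def compare_error_rates(e1, n1, e2, n2, z=1.96):
    """Approximate confidence interval for the true difference d_t = |e1 - e2|."""
    d = abs(e1 - e2)
    var = e1 * (1 - e1) / n1 + e2 * (1 - e2) / n2
    margin = z * sqrt(var)
    return d - margin, d + margin

lo, hi = compare_error_rates(e1=0.15, n1=30, e2=0.25, n2=5000)
print((lo, hi), "significant" if lo > 0 else "may not be significant")
```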
Comparing Performance of 2 Algorithms
Each learning algorithm may produce k
models:
L1 may produce M11, M12, ..., M1k
L2 may produce M21, M22, ..., M2k
If models are generated on the same test sets
D1, D2, ..., Dk (e.g., via cross-validation):
For each set: compute d_j = e1j - e2j
d_j has mean d_t and variance σ_t²
Estimate:
σ_t² = Σ_{j=1}^{k} (d_j - d̄)² / (k(k - 1))
d_t = d̄ ± t_{1-α, k-1} · σ_t
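A sketch of the paired estimate above; the per-fold error rates are made-up numbers, and a two-sided critical value from Student's t distribution with k - 1 degrees of freedom is used.

```python
from math import sqrt
from scipy.stats import t

def paired_difference_interval(e1_folds, e2_folds, confidence=0.95):
    """Confidence interval for d_t from k paired error rates (same folds for both algorithms)."""
    k = len(e1_folds)
    diffs = [a - b for a, b in zip(e1_folds, e2_folds)]
    d_bar = sum(diffs) / k
    var_hat = sum((d - d_bar) ** 2 for d in diffs) / (k * (k - 1))
    t_crit = t.ppf(1 - (1 - confidence) / 2, df=k - 1)
    margin = t_crit * sqrt(var_hat)
    return d_bar - margin, d_bar + margin

# Hypothetical 10-fold error rates for algorithms L1 and L2
e1 = [0.20, 0.22, 0.18, 0.25, 0.21, 0.19, 0.23, 0.20, 0.22, 0.24]
e2 = [0.24, 0.25, 0.22, 0.26, 0.24, 0.23, 0.27, 0.25, 0.24, 0.28]
print(paired_difference_interval(e1, e2))   # interval excluding 0 suggests a real difference
```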