Classification
Lecture 1: Basics, Methods
Jing Gao
SUNY Buffalo
1
2
Outline
• Basics
  – Problem, goal, evaluation
• Methods
  – Nearest Neighbor
  – Decision Tree
  – Naïve Bayes
  – Rule-based Classification
  – Logistic Regression
  – Support Vector Machines
  – Ensemble methods
  – …
• Advanced topics
  – Semi-supervised Learning
  – Multi-view Learning
3
Readings
• Tan, Steinbach, Kumar, Chapters 4 and 5.
• Han, Kamber, Pei. Data Mining: Concepts and Techniques. Chapters 8 and 9.
• Additional readings posted on website
Classification: Definition
• Given a collection of records (training set)
  – Each record contains a set of attributes; one of the attributes is the class.
• Find a model for the class attribute as a function of the values of the other attributes.
• Goal: previously unseen records should be assigned a class as accurately as possible.
  – A test set is used to determine the accuracy of the model. Usually, the given data set is divided into training and test sets, with the training set used to build the model and the test set used to validate it.
Illustrating Classification Task
Apply
Model
Deduction
Learn
Model
Model
Tid Attrib1 Attrib2 Attrib3 Class
1 Yes Large 125K No
2 No Medium 100K No
3 No Small 70K No
4 Yes Medium 120K No
5 No Large 95K Yes
6 No Medium 60K No
7 Yes Large 220K No
8 No Small 85K Yes
9 No Medium 75K No
10 No Small 90K Yes
10
Tid Attrib1 Attrib2 Attrib3 Class
11 No Small 55K ?
12 Yes Medium 80K ?
13 Yes Large 110K ?
14 No Small 95K ?
15 No Large 67K ?
10
Test Set
Learning
algorithm
Induction
Training Set
5
6
Examples of Classification Task
• Predicting tumor cells as benign or malignant
• Classifying credit card transactions as legitimate or fraudulent
• Classifying emails as spam or normal emails
• Categorizing news stories by topic, e.g., finance
7
Metrics for Performance Evaluation
• Focus on the predictive capability of a model
  – Rather than the speed of classifying or building models, scalability, etc.
• Confusion Matrix:

                     PREDICTED CLASS
                     Class=Yes   Class=No
ACTUAL   Class=Yes   a           b
CLASS    Class=No    c           d

a: TP (true positive), b: FN (false negative), c: FP (false positive), d: TN (true negative)
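As an illustration, a minimal sketch of how the four counts could be tallied from predicted and actual labels; the label vectors here are hypothetical, not from the slides.

```python
# Minimal sketch: tally confusion-matrix counts a (TP), b (FN), c (FP), d (TN)
# for a binary problem. The example label vectors are made up for illustration.
actual    = ["Yes", "No", "Yes", "No", "No",  "Yes", "No", "No"]
predicted = ["Yes", "No", "No",  "No", "Yes", "Yes", "No", "No"]

a = sum(1 for y, p in zip(actual, predicted) if y == "Yes" and p == "Yes")  # TP
b = sum(1 for y, p in zip(actual, predicted) if y == "Yes" and p == "No")   # FN
c = sum(1 for y, p in zip(actual, predicted) if y == "No"  and p == "Yes")  # FP
d = sum(1 for y, p in zip(actual, predicted) if y == "No"  and p == "No")   # TN
print(a, b, c, d)  # 2 1 1 4
```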
8
Metrics for Performance Evaluation

                     PREDICTED CLASS
                     Class=Yes   Class=No
ACTUAL   Class=Yes   a (TP)      b (FN)
CLASS    Class=No    c (FP)      d (TN)

• Most widely-used metric: Accuracy

  Accuracy = (a + d) / (a + b + c + d) = (TP + TN) / (TP + TN + FP + FN)
9
Limitation of Accuracy
• Consider a 2-class problem
– Number of Class 0 examples = 9990
– Number of Class 1 examples = 10
• If the model predicts everything to be class 0, accuracy is 9990/10000 = 99.9%
  – Accuracy is misleading because the model does not detect any class 1 example
Cost-Sensitive Measures

  Precision (p) = a / (a + c)
  Recall (r) = a / (a + b)
  F-measure (F) = 2rp / (r + p) = 2a / (2a + b + c)
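A small follow-up sketch computing accuracy, precision, recall, and F-measure from confusion-matrix counts; the values reuse the hypothetical counts from the earlier sketch.

```python
# Sketch: accuracy, precision, recall and F-measure from confusion-matrix counts.
# a, b, c, d follow the slide's notation (TP, FN, FP, TN); the values are hypothetical.
a, b, c, d = 2, 1, 1, 4

accuracy  = (a + d) / (a + b + c + d)
precision = a / (a + c)
recall    = a / (a + b)
f_measure = 2 * precision * recall / (precision + recall)   # = 2a / (2a + b + c)

print(round(accuracy, 3), round(precision, 3), round(recall, 3), round(f_measure, 3))
# 0.75 0.667 0.667 0.667
```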
11
Methods of Estimation
• Holdout
– Reserve 2/3 for training and 1/3 for testing
• Random subsampling
– Repeated holdout
• Cross validation
– Partition data into k disjoint subsets
– k-fold: train on k-1 partitions, test on the remaining
one
– Leave-one-out: k=n
• Stratified sampling
– oversampling vs undersampling
• Bootstrap
– Sampling with replacement
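To make the k-fold idea concrete, a minimal sketch of partitioning a data set into k disjoint folds and rotating the test fold; this is an index-only illustration, with the learner and metric left out.

```python
# Sketch: k-fold cross-validation indices. Each record is used for testing exactly once.
import random

def k_fold_splits(n_records, k, seed=0):
    idx = list(range(n_records))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]            # k disjoint subsets
    for i in range(k):
        test = folds[i]
        train = [j for other in range(k) if other != i for j in folds[other]]
        yield train, test

for train_idx, test_idx in k_fold_splits(n_records=10, k=5):
    print(len(train_idx), len(test_idx))             # 8 2 on each of the 5 folds
```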
12
Classification Techniques
• Nearest Neighbor
• Decision Tree
• Naïve Bayes
• Rule-based Classification
• Logistic Regression
• Support Vector Machines
• Ensemble methods
• …
Nearest Neighbor Classifiers
• Store the training records (a set of stored cases with attributes Atr1, …, AtrN and a class label)
• Use the training records to predict the class label of unseen cases
Nearest-Neighbor Classifiers
 Requires three things
  – The set of stored records
  – A distance metric to compute the distance between records
  – The value of k, the number of nearest neighbors to retrieve
 To classify an unknown record:
  – Compute its distance to the training records
  – Identify the k nearest neighbors
  – Use the class labels of the nearest neighbors to determine the class label of the unknown record
Definition of Nearest Neighbor
(a) 1-nearest neighbor  (b) 2-nearest neighbor  (c) 3-nearest neighbor
The k-nearest neighbors of a record x are the data points that have the k smallest distances to x
1-nearest neighbor: Voronoi Diagram (each training point's cell is the region of space closer to it than to any other training point)
Nearest Neighbor Classification
• Compute the distance between two points:
  – Euclidean distance: d(p, q) = sqrt( Σ_i (p_i − q_i)² )
• Determine the class from the nearest neighbor list
  – Take the majority vote of class labels among the k nearest neighbors
  – Or weigh each vote according to distance
    • weight factor: w_i = 1 / d_i²
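A minimal sketch of these two steps (Euclidean distance plus majority vote over the k nearest records); the tiny training set and query point are made up for illustration.

```python
# Sketch of k-NN classification: Euclidean distance + majority vote over the k
# nearest training records. The training data and query point are hypothetical.
import math
from collections import Counter

def euclidean(p, q):
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))

def knn_classify(train, query, k=3):
    # train: list of (attribute_vector, class_label) pairs
    neighbors = sorted(train, key=lambda rec: euclidean(rec[0], query))[:k]
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

train = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((5.0, 5.0), "B"), ((5.5, 4.5), "B")]
print(knn_classify(train, query=(1.1, 1.0), k=3))   # "A"
```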
Nearest Neighbor Classification
• Choosing the value of k:
  – If k is too small, the classifier is sensitive to noise points
  – If k is too large, the neighborhood may include points from other classes
19
Nearest Neighbor Classification
• Scaling issues
– Attributes may have to be scaled to prevent
distance measures from being dominated by
one of the attributes
– Example:
• height of a person may vary from 1.5m to 1.8m
• weight of a person may vary from 90lb to 300lb
• income of a person may vary from $10K to $1M
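A small sketch of min-max scaling, one common way to put attributes on comparable ranges before computing distances; the records below are made-up values in the spirit of the example.

```python
# Sketch: min-max scaling of each attribute to [0, 1] so that no single attribute
# (e.g., income in dollars) dominates the Euclidean distance. Records are hypothetical.
records = [
    (1.6, 120, 30000),     # (height in m, weight in lb, income in $)
    (1.8, 250, 900000),
    (1.7, 180, 45000),
]

def min_max_scale(data):
    cols = list(zip(*data))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]
    return [tuple((v - l) / (h - l) if h > l else 0.0
                  for v, l, h in zip(row, lo, hi)) for row in data]

for row in min_max_scale(records):
    print([round(v, 2) for v in row])
```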
20
Nearest Neighbor Classification
• k-NN classifiers are lazy learners
  – They do not build models explicitly
  – Different from eager learners such as decision tree induction
  – Classifying unknown records is relatively expensive
Example of a Decision Tree

Training Data:
Tid Refund Marital Status Taxable Income Cheat
1 Yes Single 125K No
2 No Married 100K No
3 No Single 70K No
4 Yes Married 120K No
5 No Divorced 95K Yes
6 No Married 60K No
7 Yes Divorced 220K No
8 No Single 85K Yes
9 No Married 75K No
10 No Single 90K Yes

Model: Decision Tree (splitting attributes: Refund, MarSt, TaxInc)
Refund?
  Yes → NO
  No  → MarSt?
          Married → NO
          Single, Divorced → TaxInc?
                               < 80K → NO
                               > 80K → YES
Another Example of Decision Tree
(same training data as above)

MarSt?
  Married → NO
  Single, Divorced → Refund?
                       Yes → NO
                       No  → TaxInc?
                               < 80K → NO
                               > 80K → YES

There could be more than one tree that fits the same data!
Decision Tree Classification Task
• Same framework as before: a tree induction algorithm learns a decision tree from the training set (induction); the tree is then applied to the test set to predict the unknown class labels (deduction).
Apply Model to Test Data

Test Data: Refund = No, Marital Status = Married, Taxable Income = 80K, Cheat = ?

Refund?
  Yes → NO
  No  → MarSt?
          Married → NO
          Single, Divorced → TaxInc?
                               < 80K → NO
                               > 80K → YES

Start from the root of the tree.
Following Refund = No and then Marital Status = Married leads to a leaf labeled NO: assign Cheat to "No".
31
Decision Tree Induction
• Many Algorithms:
– Hunt's Algorithm (one of the earliest)
– CART
– ID3, C4.5
– SLIQ, SPRINT
– …
General Structure of Hunt's Algorithm
• Let Dt be the set of training records that reach a node t
• General Procedure:
  – If Dt contains records that belong to the same class yt, then t is a leaf node labeled as yt
  – If Dt contains records that belong to more than one class, use an attribute to split the data into smaller subsets. Recursively apply the procedure to each subset

Tid Refund Marital Status Taxable Income Cheat
1 Yes Single 125K No
2 No Married 100K No
3 No Single 70K No
4 Yes Married 120K No
5 No Divorced 95K Yes
6 No Married 60K No
7 Yes Divorced 220K No
8 No Single 85K Yes
9 No Married 75K No
10 No Single 90K Yes
Hunt's Algorithm (on the training data above)
• Step 1: start with a single leaf node predicting the majority class, Don't Cheat
• Step 2: split on Refund — Yes → Don't Cheat; No → Don't Cheat
• Step 3: under Refund = No, split on Marital Status — Single, Divorced → Cheat; Married → Don't Cheat
• Step 4: under Single, Divorced, split on Taxable Income — < 80K → Don't Cheat; >= 80K → Cheat
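A compact sketch of this recursive procedure, with the split-selection step left as a pluggable function (the impurity-based criteria defined later in the lecture can be dropped in); attribute handling is simplified to categorical equality tests, and the function names are illustrative, not from the slides.

```python
# Sketch of Hunt's algorithm: grow a tree recursively, stopping when a node is pure
# or no attributes remain. choose_attribute is a placeholder for any split criterion
# (e.g., Gini or information gain, defined later); records are (dict, label) pairs.
from collections import Counter

def majority(records):
    return Counter(label for _, label in records).most_common(1)[0][0]

def hunts(records, attributes, choose_attribute):
    labels = {label for _, label in records}
    if len(labels) == 1:                       # pure node -> leaf
        return labels.pop()
    if not attributes:                         # nothing left to split on -> majority leaf
        return majority(records)
    attr = choose_attribute(records, attributes)
    tree = {"split_on": attr, "children": {}}
    for value in {rec[attr] for rec, _ in records}:
        subset = [(rec, label) for rec, label in records if rec[attr] == value]
        remaining = [a for a in attributes if a != attr]
        tree["children"][value] = hunts(subset, remaining, choose_attribute)
    return tree
```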
34
Tree Induction
• Greedy strategy
– Split the records based on an attribute test
that optimizes certain criterion
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
35
How to Specify Test Condition?
• Depends on attribute types
– Nominal
– Ordinal
– Continuous
• Depends on number of ways to
split
– 2-way split
– Multi-way split
Splitting Based on Nominal Attributes
• Multi-way split: use as many partitions as there are distinct values
  CarType: Family | Sports | Luxury
• Binary split: divides the values into two subsets; need to find the optimal partitioning
  CarType: {Sports, Luxury} vs {Family}   OR   {Family, Luxury} vs {Sports}
Splitting Based on Ordinal Attributes
• Multi-way split: use as many partitions as there are distinct values
  Size: Small | Medium | Large
• Binary split: divides the values into two subsets; need to find the optimal partitioning
  Size: {Small, Medium} vs {Large}   OR   {Small} vs {Medium, Large}
• What about this split?  Size: {Small, Large} vs {Medium}
38
Splitting Based on Continuous Attributes
• Different ways of handling
  – Discretization to form an ordinal categorical attribute
  – Binary decision: (A < v) or (A >= v)
    • consider all possible splits and find the best cut
    • can be more computationally intensive
Splitting Based on Continuous Attributes
(i) Binary split: Taxable Income > 80K?  Yes / No
(ii) Multi-way split: Taxable Income?  < 10K | [10K, 25K) | [25K, 50K) | [50K, 80K) | > 80K
40
Tree Induction
• Greedy strategy
– Split the records based on an attribute test
that optimizes certain criterion.
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
How to determine the Best Split
Before splitting: 10 records of class 0, 10 records of class 1

• On-Campus?   Yes: C0: 6, C1: 4    No: C0: 4, C1: 6
• Car Type?    Family: C0: 1, C1: 3    Sports: C0: 8, C1: 0    Luxury: C0: 1, C1: 7
• Student ID?  c1: C0: 1, C1: 0 ... c10: C0: 1, C1: 0    c11: C0: 0, C1: 1 ... c20: C0: 0, C1: 1

Which test condition is the best?
42
How to determine the Best Split
• Greedy approach:
  – Nodes with homogeneous class distribution are preferred
• Need a measure of node impurity:
  C0: 5, C1: 5 — non-homogeneous, high degree of impurity
  C0: 9, C1: 1 — homogeneous, low degree of impurity
How to Find the Best Split
• Before splitting, compute the impurity M0 of the parent node (class counts C0: N00, C1: N01)
• Candidate split A? produces nodes N1 and N2 with impurities M1 and M2; their weighted combination is M12
• Candidate split B? produces nodes N3 and N4 with impurities M3 and M4; their weighted combination is M34
• Gain = M0 – M12 vs. M0 – M34: choose the split with the larger gain
44
Measures of Node Impurity
• Gini Index
• Entropy
• Misclassification error
Measure of Impurity: GINI
• Gini index for a given node t:

  GINI(t) = 1 − Σ_j [p(j | t)]²

  (NOTE: p(j | t) is the relative frequency of class j at node t)
  – Maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information
  – Minimum (0) when all records belong to one class, implying the most useful information

  C1: 0, C2: 6 → Gini = 0.000
  C1: 1, C2: 5 → Gini = 0.278
  C1: 2, C2: 4 → Gini = 0.444
  C1: 3, C2: 3 → Gini = 0.500
Examples for computing GINI

  GINI(t) = 1 − Σ_j [p(j | t)]²

C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                Gini = 1 − P(C1)² − P(C2)² = 1 − 0 − 1 = 0
C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                Gini = 1 − (1/6)² − (5/6)² = 0.278
C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                Gini = 1 − (2/6)² − (4/6)² = 0.444
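A one-function sketch that reproduces these numbers from raw class counts:

```python
# Sketch: Gini index of a node from its class counts. Reproduces the slide's examples.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

print(round(gini([0, 6]), 3))   # 0.0
print(round(gini([1, 5]), 3))   # 0.278
print(round(gini([2, 4]), 3))   # 0.444
print(round(gini([3, 3]), 3))   # 0.5
```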
Splitting Based on GINI
• Used in CART, SLIQ, SPRINT.
• When a node p is split into k partitions (children), the quality of the split is computed as

  GINI_split = Σ_{i=1..k} (n_i / n) GINI(i)

  where n_i = number of records at child i, and n = number of records at node p.
Binary Attributes: Computing GINI Index
 Splits into two partitions
 Effect of weighing partitions:
  – Larger and purer partitions are sought

Parent: C1 = 6, C2 = 6, Gini = 0.500
Split on B?  →  Node N1 (Yes): C1 = 5, C2 = 2    Node N2 (No): C1 = 1, C2 = 4

Gini(N1) = 1 − (5/7)² − (2/7)² = 0.408
Gini(N2) = 1 − (1/5)² − (4/5)² = 0.32
Gini(Children) = 7/12 × 0.408 + 5/12 × 0.32 = 0.371
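A short sketch of the weighted split computation, reusing the Gini helper above and reproducing the 0.371 figure:

```python
# Sketch: GINI_split = sum over children of (n_i / n) * GINI(child).
# Child class counts follow the slide's binary split on attribute B.
def gini(counts):
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

def gini_split(children_counts):
    n = sum(sum(child) for child in children_counts)
    return sum(sum(child) / n * gini(child) for child in children_counts)

print(round(gini_split([[5, 2], [1, 4]]), 3))   # 0.371
```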
Entropy
• Entropy at a given node t:

  Entropy(t) = − Σ_j p(j | t) log p(j | t)

  (NOTE: p(j | t) is the relative frequency of class j at node t)
  – Measures the purity of a node
    • Maximum (log nc) when records are equally distributed among all classes, implying the least information
    • Minimum (0.0) when all records belong to one class, implying the most information
Examples for computing Entropy

  Entropy(t) = − Σ_j p(j | t) log2 p(j | t)

C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                Entropy = − 0 log 0 − 1 log 1 = − 0 − 0 = 0
C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                Entropy = − (1/6) log2 (1/6) − (5/6) log2 (5/6) = 0.65
C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                Entropy = − (2/6) log2 (2/6) − (4/6) log2 (4/6) = 0.92
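A sketch of the entropy computation from class counts, reproducing these values (base-2 logarithm):

```python
# Sketch: entropy of a node from its class counts, using log base 2.
import math

def entropy(counts):
    n = sum(counts)
    probs = [c / n for c in counts if c > 0]        # 0 log 0 is taken as 0
    return -sum(p * math.log2(p) for p in probs)

print(round(entropy([0, 6]), 2))   # 0.0
print(round(entropy([1, 5]), 2))   # 0.65
print(round(entropy([2, 4]), 2))   # 0.92
```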
Splitting Based on Information Gain
• Information Gain:

  GAIN_split = Entropy(p) − Σ_{i=1..k} (n_i / n) Entropy(i)

  Parent node p is split into k partitions; n_i is the number of records in partition i
  – Measures the reduction in entropy achieved because of the split. Choose the split that achieves the most reduction (maximizes GAIN)
  – Used in ID3 and C4.5
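A sketch of the gain computation, building on the entropy helper above; the split below is described by hypothetical class counts of its children.

```python
# Sketch: information gain of a split = parent entropy - weighted child entropy.
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

def information_gain(parent_counts, children_counts):
    n = sum(parent_counts)
    weighted = sum(sum(child) / n * entropy(child) for child in children_counts)
    return entropy(parent_counts) - weighted

# Example: a parent with class counts 6/6 split into children with counts [5,2] and [1,4].
print(round(information_gain([6, 6], [[5, 2], [1, 4]]), 3))   # ~0.196
```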
Splitting Criteria based on Classification Error
• Classification error at a node t:

  Error(t) = 1 − max_i P(i | t)

• Measures the misclassification error made by a node
  – Maximum (1 − 1/nc) when records are equally distributed among all classes, implying the least interesting information
  – Minimum (0.0) when all records belong to one class, implying the most interesting information
Examples for Computing Error

  Error(t) = 1 − max_i P(i | t)

C1: 0, C2: 6    P(C1) = 0/6 = 0, P(C2) = 6/6 = 1
                Error = 1 − max(0, 1) = 1 − 1 = 0
C1: 1, C2: 5    P(C1) = 1/6, P(C2) = 5/6
                Error = 1 − max(1/6, 5/6) = 1 − 5/6 = 1/6
C1: 2, C2: 4    P(C1) = 2/6, P(C2) = 4/6
                Error = 1 − max(2/6, 4/6) = 1 − 4/6 = 1/3
Comparison among Splitting Criteria
For a 2-class problem:
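To get a feel for how the three measures compare on a 2-class problem, a small sketch evaluating each as a function of the fraction p of records in one class (all three peak at p = 0.5 and vanish at p = 0 or 1):

```python
# Sketch: Gini, entropy and classification error for a 2-class node as the
# fraction p of records in one class varies.
import math

def gini(p):
    return 1 - p**2 - (1 - p)**2

def entropy(p):
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def error(p):
    return 1 - max(p, 1 - p)

for p in [0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0]:
    print(f"p={p:.1f}  gini={gini(p):.3f}  entropy={entropy(p):.3f}  error={error(p):.3f}")
```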
54
55
Tree Induction
• Greedy strategy
– Split the records based on an attribute test
that optimizes certain criterion.
• Issues
– Determine how to split the records
• How to specify the attribute test condition?
• How to determine the best split?
– Determine when to stop splitting
56
Stopping Criteria for Tree Induction
• Stop expanding a node when all the
records belong to the same class
• Stop expanding a node when all the
records have similar attribute values
• Early termination (to be discussed later)
57
Decision Tree Based Classification
• Advantages:
– Inexpensive to construct
– Extremely fast at classifying unknown
records
– Easy to interpret for small-sized trees
– Accuracy is comparable to other
classification techniques for many simple
data sets
Underfitting and Overfitting (Example)
• 500 circular and 500 triangular data points.
• Circular points: 0.5 <= sqrt(x1² + x2²) <= 1
• Triangular points: sqrt(x1² + x2²) > 1 or sqrt(x1² + x2²) < 0.5
58
Underfitting and Overfitting
(figure omitted; it illustrates overfitting)
60
Occam's Razor
• Given two models with similar errors, one should prefer the simpler model over the more complex model
• For a complex model, there is a greater chance that it was fitted accidentally to errors in the data
• Therefore, one should include model complexity when evaluating a model
61
How to Address Overfitting
• Pre-Pruning (Early Stopping Rule)
– Stop the algorithm before it becomes a fully-grown tree
– Typical stopping conditions for a node:
• Stop if all instances belong to the same class
• Stop if all the attribute values are the same
– More restrictive conditions:
• Stop if number of instances is less than some user-specified
threshold
• Stop if the class distribution of instances is independent of the available features
• Stop if expanding the current node does not improve
impurity measures (e.g., Gini or information gain).
62
How to Address Overfitting
• Post-pruning
– Grow the decision tree in its entirety
– Trim the nodes of the decision tree in a
bottom-up fashion
– If generalization error improves after
trimming, replace sub-tree by a leaf node.
– Class label of leaf node is determined
from majority class of instances in the
sub-tree
63
Handling Missing Attribute Values
• Missing values affect decision
tree construction in three
different ways:
– Affects how impurity measures are computed
– Affects how an instance with a missing value is distributed to child nodes
– Affects how a test instance with a missing value is classified
Computing Impurity Measure

Tid Refund Marital Status Taxable Income Class
1 Yes Single 125K No
2 No Married 100K No
3 No Single 70K No
4 Yes Married 120K No
5 No Divorced 95K Yes
6 No Married 60K No
7 Yes Divorced 220K No
8 No Single 85K Yes
9 No Married 75K No
10 ? Single 90K Yes      (missing value)

              Class=Yes   Class=No
Refund=Yes    0           3
Refund=No     2           4
Refund=?      1           0

Before splitting:
  Entropy(Parent) = −0.3 log(0.3) − 0.7 log(0.7) = 0.8813

Split on Refund:
  Entropy(Refund=Yes) = 0
  Entropy(Refund=No) = −(2/6) log(2/6) − (4/6) log(4/6) = 0.9183
  Entropy(Children) = 0.3 (0) + 0.6 (0.9183) = 0.551
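A small sketch reproducing these numbers: the record with the missing Refund value is excluded from the children, and the child weights are taken over all 10 records.

```python
# Sketch: entropy before and after splitting on Refund when one record has a
# missing Refund value (it is left out of the children but counted in the parent).
import math

def entropy(counts):
    n = sum(counts)
    return -sum(c / n * math.log2(c / n) for c in counts if c > 0)

parent = [3, 7]                              # 3 Yes, 7 No over all 10 records
children = {"Yes": [0, 3], "No": [2, 4]}     # counts among the 9 records with known Refund

n_total = sum(parent)
weighted = sum(sum(c) / n_total * entropy(c) for c in children.values())

print(round(entropy(parent), 4))   # 0.8813
print(round(weighted, 3))          # 0.551
```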
Distribute Instances

Tid Refund Marital Status Taxable Income Class
1 Yes Single 125K No
2 No Married 100K No
3 No Single 70K No
4 Yes Married 120K No
5 No Divorced 95K Yes
6 No Married 60K No
7 Yes Divorced 220K No
8 No Single 85K Yes
9 No Married 75K No

Split on Refund with the 9 complete records:
  Refund=Yes: Class=Yes 0, Class=No 3
  Refund=No:  Class=Yes 2, Class=No 4

Record with missing value: Tid 10, Refund = ?, Single, 90K, Class = Yes
  Probability that Refund=Yes is 3/9; probability that Refund=No is 6/9
  Assign the record to the left (Yes) child with weight = 3/9 and to the right (No) child with weight = 6/9
  Updated counts — Refund=Yes: Class=Yes 0 + 3/9, Class=No 3;  Refund=No: Class=Yes 2 + 6/9, Class=No 4
Classify Instances

New record: Tid 11, Refund = No, Marital Status = ?, Taxable Income = 85K, Class = ?

Using the tree (Refund → MarSt → TaxInc), the record follows the Refund = No branch; since the Marital Status value is missing, it is weighted by the training distribution:

            Married   Single   Divorced   Total
Class=No    3         1        0          4
Class=Yes   6/9       1        1          2.67
Total       3.67      2        1          6.67

Probability that Marital Status = Married is 3.67/6.67
Probability that Marital Status = {Single, Divorced} is 3/6.67
67
Other Issues
• Data Fragmentation
• Search Strategy
• Expressiveness
• Tree Replication
68
Data Fragmentation
• Number of instances gets smaller as
you traverse down the tree
• Number of instances at the leaf nodes
could be too small to make any
statistically significant decision
69
Search Strategy
• Finding an optimal decision tree is NP-hard
• The algorithm presented so far uses a
greedy, top-down, recursive partitioning
strategy to induce a reasonable solution
• Other strategies?
– Bottom-up
– Bi-directional
70
Expressiveness
• Decision trees provide an expressive representation for learning discrete-valued functions
– But they do not generalize well to certain types of
Boolean functions
• Example: parity function:
– Class = 1 if there is an even number of Boolean attributes with
truth value = True
– Class = 0 if there is an odd number of Boolean attributes with
truth value = True
• For accurate modeling, must have a complete tree
• Not expressive enough for modeling continuous
variables
– Particularly when test condition involves only a
single attribute at-a-time
Decision Boundary

Example tree on two continuous attributes x and y (each leaf is pure, e.g., 4 : 0 or 0 : 3):
x < 0.43?
  Yes → y < 0.47?
  No  → y < 0.33?

• The border line between two neighboring regions of different classes is known as the decision boundary
• The decision boundary is parallel to the axes because each test condition involves a single attribute at a time
Oblique Decision Trees
Example test condition: x + y < 1, which separates Class = + from the other class
• Test conditions may involve multiple attributes
• More expressive representation
• Finding the optimal test condition is computationally expensive
73
Take-away Message
• What’s classification?
• How to evaluate classification model?
• How to use a decision tree to make predictions?
• How to construct a decision tree from training data?
• How to compute the Gini index, entropy, and misclassification error?
• How to avoid overfitting by pre-pruning or post-pruning a decision tree?