Decision Trees
October 17, 2019
Amit Praseed
Introduction
A decision tree is a flowchart-like tree structure in which:
each internal node (non-leaf node) denotes a test on an attribute,
each branch represents an outcome of the test, and
each leaf node (or terminal node) holds a class label.
AllElectronics Customer Database
RID Age Student Class:Buy?
1 youth no no
2 youth no no
3 senior no no
4 senior yes yes
5 senior yes yes
6 youth no no
7 youth yes yes
8 senior yes yes
9 youth yes yes
10 senior no no
Two Decision Trees for the Same Data
Attribute Selection
Selecting the best attribute to split on at each stage of decision tree
induction is crucial.
The attribute should be chosen so that it separates the data points into
partitions that are as pure as possible.
Purity of a Partition - Entropy
The purity of a data partition can be approximated by using Shannon’s Entropy
(commonly called Information Content in the context of data mining).
It is a measure of homogeneity (or heterogeneity) of a data partition.
A partition D which contains all items from the same class will have Info(D) = 0.
A partition D with all items belonging to different classes will have the maximum
information content.
Info(D) = -\sum_{i=1}^{m} p_i \log_2(p_i)
If the dataset D is partitioned on the attribute A into v subsets, the information of the
new partitioning is given by
Info_A(D) = \sum_{j=1}^{v} \frac{|D_j|}{|D|} \times Info(D_j)
and the gain in information is
Gain(A) = Info(D) - Info_A(D)
The attribute selected at each stage is the one with the greatest information gain.
The classic decision tree algorithm that uses Information Gain as its attribute
selection criterion is ID3.
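As a concrete reference, the two quantities above can be written in a few lines of Python. This is a minimal sketch; the function and variable names are my own and not part of the slides.

```python
import math
from collections import Counter

def entropy(labels):
    """Info(D): Shannon entropy of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(rows, labels, attribute_index):
    """Gain(A) = Info(D) - Info_A(D) for a categorical attribute."""
    n = len(labels)
    partitions = {}
    for row, label in zip(rows, labels):
        partitions.setdefault(row[attribute_index], []).append(label)
    info_a = sum(len(part) / n * entropy(part) for part in partitions.values())
    return entropy(labels) - info_a
```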
AllElectronics Customer Database
RID Age Income Student Credit Rating Class:Buy?
1 youth high no fair no
2 youth high no excellent no
3 middle aged high no fair yes
4 senior medium no fair yes
5 senior low yes fair yes
6 senior low yes excellent no
7 middle aged low yes excellent yes
8 youth medium no fair no
9 youth low yes fair yes
10 senior medium yes fair yes
11 youth medium yes excellent yes
12 middle aged medium no excellent yes
13 middle aged high yes fair yes
14 senior medium no excellent no
Info(D) = -\frac{5}{14}\log_2\frac{5}{14} - \frac{9}{14}\log_2\frac{9}{14} = 0.94
AllElectronics Customer Database
Feature under Consideration: Age
RID Age Income Student Credit Rating Class:Buy?
1 youth high no fair no
2 youth high no excellent no
3 middle aged high no fair yes
4 senior medium no fair yes
5 senior low yes fair yes
6 senior low yes excellent no
7 middle aged low yes excellent yes
8 youth medium no fair no
9 youth low yes fair yes
10 senior medium yes fair yes
11 youth medium yes excellent yes
12 middle aged medium no excellent yes
13 middle aged high yes fair yes
14 senior medium no excellent no
Info_{age}(D) = \frac{5}{14}\left(-\frac{2}{5}\log_2\frac{2}{5} - \frac{3}{5}\log_2\frac{3}{5}\right) + \frac{4}{14}\left(-\frac{4}{4}\log_2\frac{4}{4}\right) + \frac{5}{14}\left(-\frac{3}{5}\log_2\frac{3}{5} - \frac{2}{5}\log_2\frac{2}{5}\right) = 0.6935
AllElectronics Customer Database
Feature under Consideration: Income
RID Age Income Student Credit Rating Class:Buy?
1 youth high no fair no
2 youth high no excellent no
3 middle aged high no fair yes
4 senior medium no fair yes
5 senior low yes fair yes
6 senior low yes excellent no
7 middle aged low yes excellent yes
8 youth medium no fair no
9 youth low yes fair yes
10 senior medium yes fair yes
11 youth medium yes excellent yes
12 middle aged medium no excellent yes
13 middle aged high yes fair yes
14 senior medium no excellent no
Info_{income}(D) = \frac{4}{14}\left(-\frac{1}{2}\log_2\frac{1}{2} - \frac{1}{2}\log_2\frac{1}{2}\right) + \frac{6}{14}\left(-\frac{2}{6}\log_2\frac{2}{6} - \frac{4}{6}\log_2\frac{4}{6}\right) + \frac{4}{14}\left(-\frac{1}{4}\log_2\frac{1}{4} - \frac{3}{4}\log_2\frac{3}{4}\right) = 0.911
AllElectronics Customer Database
Feature under Consideration: Student
RID Age Income Student Credit Rating Class:Buy?
1 youth high no fair no
2 youth high no excellent no
3 middle aged high no fair yes
4 senior medium no fair yes
5 senior low yes fair yes
6 senior low yes excellent no
7 middle aged low yes excellent yes
8 youth medium no fair no
9 youth low yes fair yes
10 senior medium yes fair yes
11 youth medium yes excellent yes
12 middle aged medium no excellent yes
13 middle aged high yes fair yes
14 senior medium no excellent no
Info_{student}(D) = \frac{7}{14}\left(-\frac{1}{7}\log_2\frac{1}{7} - \frac{6}{7}\log_2\frac{6}{7}\right) + \frac{7}{14}\left(-\frac{4}{7}\log_2\frac{4}{7} - \frac{3}{7}\log_2\frac{3}{7}\right) = 0.788
AllElectronics Customer Database
Feature under Consideration: Credit Rating
RID Age Income Student Credit Rating Class:Buy?
1 youth high no fair no
2 youth high no excellent no
3 middle aged high no fair yes
4 senior medium no fair yes
5 senior low yes fair yes
6 senior low yes excellent no
7 middle aged low yes excellent yes
8 youth medium no fair no
9 youth low yes fair yes
10 senior medium yes fair yes
11 youth medium yes excellent yes
12 middle aged medium no excellent yes
13 middle aged high yes fair yes
14 senior medium no excellent no
Info_{credit}(D) = \frac{8}{14}\left(-\frac{2}{8}\log_2\frac{2}{8} - \frac{6}{8}\log_2\frac{6}{8}\right) + \frac{6}{14}\left(-\frac{3}{6}\log_2\frac{3}{6} - \frac{3}{6}\log_2\frac{3}{6}\right) = 0.892
Computing the Information Gain
Info(D) = 0.94
Gain(age) = 0.94 − 0.6935 = 0.2465
Gain(income) = 0.029
Gain(student) = 0.152
Gain(credit) = 0.048
Hence, we choose Age as the splitting criterion
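The gains quoted above can be reproduced with a short, self-contained script over the 14-row table (a sketch; the attribute encodings and names are mine):

```python
import math
from collections import Counter

data = [  # (age, income, student, credit, buys)
    ("youth", "high", "no", "fair", "no"), ("youth", "high", "no", "excellent", "no"),
    ("middle", "high", "no", "fair", "yes"), ("senior", "medium", "no", "fair", "yes"),
    ("senior", "low", "yes", "fair", "yes"), ("senior", "low", "yes", "excellent", "no"),
    ("middle", "low", "yes", "excellent", "yes"), ("youth", "medium", "no", "fair", "no"),
    ("youth", "low", "yes", "fair", "yes"), ("senior", "medium", "yes", "fair", "yes"),
    ("youth", "medium", "yes", "excellent", "yes"), ("middle", "medium", "no", "excellent", "yes"),
    ("middle", "high", "yes", "fair", "yes"), ("senior", "medium", "no", "excellent", "no"),
]

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

labels = [row[-1] for row in data]
for i, name in enumerate(["age", "income", "student", "credit"]):
    parts = {}
    for row in data:
        parts.setdefault(row[i], []).append(row[-1])
    info_a = sum(len(p) / len(data) * entropy(p) for p in parts.values())
    print(name, round(entropy(labels) - info_a, 3))
# prints approximately: age 0.247, income 0.029, student 0.152, credit 0.048
```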
Splitting Criterion: Age
Final Decision Tree for AllElectronics
Question
Consider the following table for AllElectronics Customer Database which
includes State information. Which attribute will be chosen for splitting at
the root node in this case?
RID Age Income Student Credit Rating State Class:Buy?
1 youth high no fair Kerala no
2 youth high no excellent Karnataka no
3 middle aged high no fair Maharashtra yes
4 senior medium no fair Andhra Pradesh yes
5 senior low yes fair UP yes
6 senior low yes excellent MP no
7 middle aged low yes excellent Sikkim yes
8 youth medium no fair Rajasthan no
9 youth low yes fair Delhi yes
10 senior medium yes fair TN yes
11 youth medium yes excellent Telangana yes
12 middle aged medium no excellent Assam yes
13 middle aged high yes fair West Bengal yes
14 senior medium no excellent Bihar no
Answer
The information associated with a split on State is:
Info_{state}(D) = \sum_{j=1}^{14} \frac{1}{14}\left(-\frac{1}{1}\log_2\frac{1}{1}\right) = 0
Gain(state) = 0.94 - 0 = 0.94
A split on State gives the largest Information Gain, and hence State is chosen
as the splitting attribute at the root node.
The resulting tree will have a root node splitting on "State" and 14
pure leaf nodes.
Of course, this tree is worthless from a classification point of
view, and it hints at a major drawback of using Information Gain.
What went wrong, and how do we correct it?
Answer
Information Gain is biased towards attributes that have a large number
of values.
An extension of the ID3 algorithm, called C4.5, uses a normalized form
of information gain, called Gain Ratio, to overcome this bias.
SplitInfo_A(D) = -\sum_{j=1}^{v} \frac{|D_j|}{|D|} \times \log_2\frac{|D_j|}{|D|}
and
GainRatio(A) = \frac{Gain(A)}{SplitInfo_A(D)}
Calculating SplitInfo for the "State" attribute gives
SplitInfo_{state}(D) = -\sum_{j=1}^{14} \frac{1}{14} \times \log_2\frac{1}{14} = 3.81
GainRatio(state) = \frac{0.94}{3.81} = 0.2467
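For completeness, a minimal sketch of the SplitInfo/GainRatio computation for the State attribute (names are mine; the 0.94 gain is taken from the slide above):

```python
import math
from collections import Counter

def split_info(values):
    """SplitInfo_A(D) computed from one attribute column."""
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

states = [f"state_{i}" for i in range(14)]   # every record has its own State value
print(round(split_info(states), 2))          # 3.81
print(round(0.94 / split_info(states), 3))   # GainRatio(state) ≈ 0.247
```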
GainRatio is not Perfect
Using GainRatio mitigates the bias towards multi-valued attributes but
does not eliminate it.
GainRatio(age) = 0.157
GainRatio(income) = 0.02
GainRatio(student) = 0.152
GainRatio(credit) = 0.049
GainRatio(state) = 0.2467
Despite the normalization, "State" will still be chosen as the splitting
criterion. Caution must therefore be exercised when choosing attributes for
inclusion in a decision tree.
The CART Algorithm
Classification And Regression Trees (CART) is a widely used decision
tree algorithm.
It exclusively produces binary trees.
It uses the Gini Index as its measure of purity.
Gini(D) = 1 - \sum_{i=1}^{|C|} p_i^2
Gini_{split}(D) = \frac{n_1}{n}\,Gini(D_1) + \frac{n_2}{n}\,Gini(D_2)
where D_1 and D_2 are the two partitions produced by the split, containing n_1 and n_2 records respectively.
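A minimal sketch of both quantities in Python (function names are mine):

```python
from collections import Counter

def gini(labels):
    """Gini(D) = 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def gini_split(left_labels, right_labels):
    """Weighted Gini index of a binary split into two partitions."""
    n = len(left_labels) + len(right_labels)
    return (len(left_labels) / n) * gini(left_labels) + \
           (len(right_labels) / n) * gini(right_labels)
```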
A Simple Example of CART Decision Tree Induction
Age Salary Class
30 65 G
23 15 B
40 75 G
55 40 B
55 100 G
45 60 G
Gini(D) = 1 - \left(\frac{1}{3}\right)^2 - \left(\frac{2}{3}\right)^2 = 0.44
Gini_{age<23} = 1 - \left(\frac{1}{3}\right)^2 - \left(\frac{2}{3}\right)^2 = 0.44 (the left partition is empty)
Gini_{age<26.5} = \frac{1}{6}\left[1 - \left(\frac{1}{1}\right)^2\right] + \frac{5}{6}\left[1 - \left(\frac{1}{5}\right)^2 - \left(\frac{4}{5}\right)^2\right] = 0.267
Gini_{age<35} = \frac{2}{6}\left[1 - \left(\frac{1}{2}\right)^2 - \left(\frac{1}{2}\right)^2\right] + \frac{4}{6}\left[1 - \left(\frac{1}{4}\right)^2 - \left(\frac{3}{4}\right)^2\right] = 0.4167
Gini_{age<42.5} = 0.44, Gini_{salary<50} = 0
Gini_{age<50} = 0.4167, Gini_{salary<62.5} = 0.22
Gini_{age<55} = 0.44, Gini_{salary<70} = 0.33
Gini_{salary<15} = 0.44, Gini_{salary<87.5} = 0.4
Gini_{salary<27.5} = 0.267, Gini_{salary<100} = 0.44
Salary < 50 yields the lowest (zero) Gini index and is therefore chosen as the splitting criterion.
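The table above can be reproduced by evaluating the weighted Gini index at candidate thresholds (midpoints between consecutive sorted values). This is only a sketch of the split search, not the full CART algorithm:

```python
from collections import Counter

rows = [(30, 65, "G"), (23, 15, "B"), (40, 75, "G"),
        (55, 40, "B"), (55, 100, "G"), (45, 60, "G")]

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

best = None
for idx, name in [(0, "Age"), (1, "Salary")]:
    values = sorted({r[idx] for r in rows})
    # candidate thresholds are midpoints between consecutive attribute values
    thresholds = [(a + b) / 2 for a, b in zip(values, values[1:])]
    for t in thresholds:
        left = [r[2] for r in rows if r[idx] < t]
        right = [r[2] for r in rows if r[idx] >= t]
        score = len(left) / len(rows) * gini(left) + len(right) / len(rows) * gini(right)
        if best is None or score < best[0]:
            best = (score, name, t)

print(best)  # (0.0, 'Salary', 50.0) -> Salary < 50 is the purest split
```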
A Simple Example of CART Decision Tree Induction
Issues with Decision Trees
The entire dataset needs to be present in main memory for decision
tree construction, which causes problems on large datasets.
Solution: Sample the dataset to construct a smaller approximation of
the data for building the decision tree.
E.g.: CLOUDS, BOAT
Finding the ideal splitting point requires sorting the data at every split,
which increases the complexity on large datasets.
Solution: Presort the dataset.
E.g.: SLIQ, SPRINT, RAINFOREST
Overfitting
SLIQ - Decision Trees with Presorting
SLIQ (Supervised Learning In Quest) imposes no restrictions on the
size of the dataset used for learning decision trees.
It constructs presorted lists of attribute values to avoid sorting overheads
while constructing the decision tree, and maintains a record of where each
record resides during the construction.
RID Age Salary Class
1 30 65 G
2 23 15 B
3 40 75 G
4 55 40 B
5 55 100 G
6 45 60 G
Attribute and Record Lists in SLIQ
Age Class RID
23 B 2
30 G 1
40 G 3
45 G 6
55 G 5
55 B 4
Salary Class RID
15 B 2
40 B 4
60 G 6
65 G 1
75 G 3
100 G 5
RID Leaf
1 N1
2 N1
3 N1
4 N1
5 N1
6 N1
SLIQ - Decision Trees with Presorting
The use of presorted attribute lists means that a single scan of the
attribute list is sufficient to determine the best splitting point, compared
to the multiple scans required in the CART algorithm.
At each stage, the record list is updated to track which node each record
currently resides in.
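A toy sketch of the SLIQ data structures for the six-record example above: one presorted attribute list per attribute plus a record list mapping each RID to its current leaf. The variable names are mine, and the breadth-first split evaluation itself is omitted.

```python
records = {1: (30, 65, "G"), 2: (23, 15, "B"), 3: (40, 75, "G"),
           4: (55, 40, "B"), 5: (55, 100, "G"), 6: (45, 60, "G")}

# One presorted attribute list per numeric attribute: (value, class, rid)
age_list = sorted((age, cls, rid) for rid, (age, _, cls) in records.items())
salary_list = sorted((sal, cls, rid) for rid, (_, sal, cls) in records.items())

# Record list: which leaf each record currently belongs to (all start at the root N1)
record_to_leaf = {rid: "N1" for rid in records}

# A single left-to-right scan of an attribute list visits candidate split
# points in sorted order, so no re-sorting is needed at lower levels;
# only record_to_leaf is updated after each split.
for value, cls, rid in age_list:
    print(value, cls, rid, record_to_leaf[rid])
```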
CLOUDS - Decision Trees with Sampling
While SLIQ works reasonably well, it still encounters issues while
dealing with extremely large datasets, when the presorted tables
cannot reside in memory.
A solution is to sample the dataset, and hence reduce the number of
data items.
Two properties of the Gini Index aid in this sampling:
The Gini Index changes slowly across neighbouring split points, so there
is little chance of missing the best splitting point.
The minimum value of the Gini Index, attained at the best splitting point,
is lower than at most other points.
Classification for Large OUt-of-core DataSets (CLOUDS) is a decision
tree algorithm based on such sampling.
CLOUDS - Decision Trees with Sampling
The CLOUDS algorithm seeks to eliminate the overhead of calculating
the Gini Index at every splitting point, by narrowing down the number
of splitting points.
This is done by dividing the data into multiple quantiles, and calculating
the Gini Index only at the boundaries.
Now the search for the best splitting point is restricted to the quantile
boundaries.
This approach significantly reduces the complexity of the algorithm,
but also introduces the possibility of errors, since the actual best
splitting point may lie inside one of the quantiles.
To overcome this, a lower-bound estimate of the Gini index is computed
for the interior of each quantile. If this estimate is greater than the
minimum Gini value already found at the quantile boundaries, the quantile
cannot contain a better split and is discarded.
If the estimate is lower than the boundary minimum, the quantile is
inspected in detail to select the exact splitting point.
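A simplified sketch of the quantile idea: the data is cut into quantiles and the weighted Gini index is evaluated only at the quantile boundaries. The per-quantile lower-bound test described above is omitted here, and all names are mine.

```python
from collections import Counter

def gini(labels):
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values()) if n else 0.0

def boundary_split_points(values, labels, num_quantiles=4):
    """Return (threshold, weighted Gini) evaluated only at quantile boundaries."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    n = len(order)
    results = []
    for q in range(1, num_quantiles):
        cut = q * n // num_quantiles
        left = [labels[i] for i in order[:cut]]
        right = [labels[i] for i in order[cut:]]
        score = len(left) / n * gini(left) + len(right) / n * gini(right)
        results.append((values[order[cut]], score))
    return results

values = [30, 23, 40, 55, 55, 45]
labels = ["G", "B", "G", "B", "G", "G"]
print(boundary_split_points(values, labels))
```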
BOAT Algorithm for Decision Trees
BOAT (Bootstrapped Optimistic Algorithm for Tree construction) is a
very interesting and efficient algorithm for building decision trees on
large datasets.
Two features make the BOAT algorithm particularly attractive:
The use of bootstrapping
A limited number of passes over the data
Bootstrapping
Bootstrapping is a statistical technique used when estimates over the
entire population are not available.
Suppose we have an extremely large population S from which a small
sample S′ is taken. Since the statistics for the entire population are not
available, we cannot say whether the statistics derived from S′ describe
the properties of S or not.
Bootstrapping suggests creating n samples from S′ with replacement,
and estimating the statistic on each of those resamples to obtain
some idea of the variance of the statistic.
In the context of decision trees with sampling, bootstrapping suggests
that building multiple trees from the sample S′ and combining them
creates a tree that represents the original dataset S fairly well.
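A small stand-alone sketch of the bootstrap itself, estimating how much a statistic (here the mean) varies across resamples drawn with replacement from the available sample:

```python
import random
import statistics

random.seed(0)
sample = [random.gauss(50, 10) for _ in range(200)]   # the available sample S'

# Draw bootstrap resamples of S' (with replacement) and look at the spread
# of the statistic of interest across them.
boot_means = [statistics.mean(random.choices(sample, k=len(sample)))
              for _ in range(1000)]
print(round(statistics.mean(boot_means), 2), round(statistics.stdev(boot_means), 2))
```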
Combining the Decision Trees
Combining the n decision trees obtained from the bootstrapped samples
is carried out as follows:
If the corresponding nodes do not split on the same attribute, the node and
its subtrees are deleted from all the bootstrapped trees.
If the nodes split on the same categorical attribute and the branches
are also the same, they feature in the combined tree; otherwise they are deleted.
If the nodes split on the same numerical attribute but at
different splitting points, the combined tree maintains a coarse splitting
interval built from these splitting points.
Once the entire tree is built, the coarse splitting criteria (intervals) are
inspected in detail to extract the exact splitting points.
The correctness of the final decision tree depends on the number of
bootstrapped trees used.
Overfitting in Decision Trees
Overfitting refers to a phenomenon where the classification model performs
exceptionally well on the training data but performs poorly on the
testing data.
This happens because the classifier attempts to accommodate every
training sample in its model, even outliers or noisy samples, which
makes the model complex and leads to poor generalization on unseen
data.
Pruning can help keep the complexity of the tree in check, and reduce
overfitting.
Pre-pruning : Pruning while the tree is being constructed by specifying
criteria such as maximum depth, number of nodes etc.
Post-pruning : Pruning after the entire tree is constructed.
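As an illustration with scikit-learn (assuming it is available; the parameter values are arbitrary), pre-pruning corresponds to growth limits such as max_depth or min_samples_leaf, while cost-complexity post-pruning is exposed through ccp_alpha:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pre-pruning: stop the tree from growing too deep in the first place
pre_pruned = DecisionTreeClassifier(max_depth=3, min_samples_leaf=5, random_state=0)
pre_pruned.fit(X_train, y_train)

# Post-pruning: grow fully, then prune via the cost-complexity parameter alpha
post_pruned = DecisionTreeClassifier(ccp_alpha=0.02, random_state=0)
post_pruned.fit(X_train, y_train)

print(pre_pruned.score(X_test, y_test), post_pruned.score(X_test, y_test))
```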
A Simple Example of Overfitting
F1 F2 F3 F4 F5 T
7 18 7 11 22 B
1 5 1 18 36 B
0 15 0 2 4 B
7 5 7 12 24 A
1 15 1 12 24 A
3 20 3 6 12 B
0 5 0 18 36 B
7 10 7 12 24 A
10 8 10 20 40 A
9 20 9 17 34 A
4 14 4 9 18 B
1 19 1 9 18 B
10 19 10 18 36 A
10 14 10 7 14 B
9.75 9 9.75 16 32 A
Class Precision Recall F1 Support
A 1.00 1.00 1.00 2
B 1.00 1.00 1.00 3
A Simple Example of Overfitting
F1 F2 F3 F4 F5 T
7 18 7 11 22 B
1 5 1 18 36 B
0 15 0 2 4 B
7 5 7 12 24 B
1 15 1 12 24 A
3 20 3 6 12 B
0 5 0 18 36 B
7 10 7 12 24 A
10 8 10 20 40 B
9 20 9 17 34 A
4 14 4 9 18 B
1 19 1 9 18 B
10 19 10 18 36 A
10 14 10 7 14 B
9.75 9 9.75 16 32 A
Class Precision Recall F1 Support
A 0 0 0 2
B 0.6 1.0 0.75 3
Pre-Pruned Tree with Depth Limit
F1 F2 F3 F4 F5 T
7 18 7 11 22 B
1 5 1 18 36 B
0 15 0 2 4 B
7 5 7 12 24 B
1 15 1 12 24 A
3 20 3 6 12 B
0 5 0 18 36 B
7 10 7 12 24 A
10 8 10 20 40 B
9 20 9 17 34 A
4 14 4 9 18 B
1 19 1 9 18 B
10 19 10 18 36 A
10 14 10 7 14 B
9.75 9 9.75 16 32 A
Class Precision Recall F1 Support
A 1.0 1.0 1.0 2
B 1.0 1.0 1.0 3
Post-Pruning in Decision Trees
Post-pruning first builds the entire decision tree, and then checks whether
any subtrees can be reduced to leaves.
There are a number of post-pruning techniques:
Pessimistic Pruning
Reduced Error Pruning
Minimum Description Length (MDL) Pruning
Cost Complexity Pruning
Reduced Error Pruning in Decision Trees
Maintains approximately one-third of the training set as a validation
set.
Once training is done, the tree is tested with the validation set - an
overfitted tree obviously produces errors.
Check if the error can be reduced if a subtree is converted into a leaf.
If the error is reduced when a subtree is converted to a leaf, the subtree
is pruned.
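A toy sketch of the pruning test on a hand-rolled binary tree (the Node structure and helpers are mine, not from any library; for brevity the candidate leaf uses a caller-supplied majority label, whereas a fuller implementation would use the majority class of the training samples reaching each node):

```python
class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, label=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right, self.label = left, right, label

def predict(node, x):
    if node.label is not None:          # leaf node
        return node.label
    branch = node.left if x[node.feature] < node.threshold else node.right
    return predict(branch, x)

def errors(node, validation):
    return sum(predict(node, x) != y for x, y in validation)

def reduced_error_prune(node, validation, majority):
    """Bottom-up: replace a subtree with a leaf if validation error does not increase."""
    if node.label is not None:
        return node
    node.left = reduced_error_prune(node.left, validation, majority)
    node.right = reduced_error_prune(node.right, validation, majority)
    leaf = Node(label=majority)
    return leaf if errors(leaf, validation) <= errors(node, validation) else node
```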
Reduced Error Pruning
F1 F2 F3 F4 F5 T
4 14 4 9 18 B
1 19 1 9 18 B
6 19 6 18 36 A
9 14 9 12 24 A
10 8 10 16 32 A
Reduced Error Pruning
F1 F2 F3 F4 F5 T
4 14 4 9 18 B
1 19 1 9 18 B
6 19 6 18 36 A
9 14 9 12 24 A
10 8 10 16 32 A
The "Wisdom of the Crowd" - Random Forests
To improve the performance of decision trees, a number of uncorrelated
decision trees are often used together. This is called a Random
Forest classifier.
Each tree individually classifies an incoming data sample, and an
internal vote decides the final class value.
This is an example of an Ensemble Classifier, and it relies on the wisdom
of the crowd to select the best class.
The key here is that the trees in the forest should be uncorrelated.
Why "Random" Forests?
In order to avoid correlation between the decision trees in the forest, two
main modifications are made to the tree building process in a random
forest classifier:
Random selection of a data subset: Each individual tree is trained on a
random subset of the data drawn with replacement - a concept called
Bagging (Bootstrap Aggregating).
Random selection of a feature subset for splitting: At each node, only
a subset of the features is considered for splitting. This number is usually
set to the square root of the total number of features.
Random Forests help to reduce the effects of overfitting in trees without
having to prune the trees.
They provide a more reliable classifier than individual decision trees.
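With scikit-learn (an assumption; the dataset is just a placeholder), both randomizations are controlled by the bootstrap and max_features parameters:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# bootstrap=True resamples the data for each tree (bagging);
# max_features="sqrt" considers only sqrt(#features) candidates at each split.
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt",
                                bootstrap=True, random_state=0)
print(cross_val_score(forest, X, y, cv=5).mean())
```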
Visualizing a Decision Tree
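The original slide shows a rendered tree figure. As a stand-in, a minimal way to produce such a visualization, assuming scikit-learn and matplotlib are installed:

```python
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, plot_tree, export_text

iris = load_iris()
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(iris.data, iris.target)

# Text view of the learned splits, then a graphical rendering of the tree
print(export_text(tree, feature_names=list(iris.feature_names)))
plot_tree(tree, feature_names=iris.feature_names,
          class_names=list(iris.target_names), filled=True)
plt.show()
```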
How does a Decision Tree Classify this Data?
[Figure: data points plotted on the x-y plane, with both axes ranging from 1 to 10]
A Complex Division
[Figure: the same x-y data with a more complex partitioning of the plane]
Oblique Decision Trees
Oblique decision trees allow splitting on more than one attribute at any node.
In many cases they allow for more efficient classification.
The decision criterion at any node looks like \sum_{i=1}^{m} a_i x_i \le c.
Determining the best split involves calculating proper values for the a_i and c, which is computationally intensive.
Optimization algorithms like Gradient Descent, Simulated Annealing etc. are used to calculate these values.
However, in most cases these …
[Figure: the x-y data separated by a single oblique split]
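A tiny sketch of evaluating one oblique decision rule of the form \sum a_i x_i \le c (the coefficients here are made up for illustration, not fitted by any optimizer):

```python
def oblique_split(point, coefficients, c):
    """Route a point left if sum(a_i * x_i) <= c, else right."""
    return "left" if sum(a * x for a, x in zip(coefficients, point)) <= c else "right"

# e.g. the rule 1.0*x + 1.0*y <= 10 separates points below the diagonal line x + y = 10
print(oblique_split((3, 4), (1.0, 1.0), 10))   # left
print(oblique_split((8, 7), (1.0, 1.0), 10))   # right
```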