Understanding Random Forests
Marc Garcia
PyData Madrid - April 9th, 2016
Introduction
About me: @datapythonista
Python programmer since 2006
Master in AI from UPC
Working for Bank of America
http://datapythonista.github.io
Overview: Understanding Random Forests
How decision trees work
Training Random Forests
Practical example
Analyzing model parameters
How decision trees work
Using a decision tree
Simple example:
Event attendance: binary classification
2 explanatory variables: age and distance
from sklearn.tree import DecisionTreeClassifier
dtree = DecisionTreeClassifier()
dtree.fit(df[['age', 'distance']], df['attended'])
cart_plot(dtree)
Data visualization
Decision boundary visualization
What does the model look like?
def decision_tree_model(age, distance):
    if distance >= 2283.11:
        if age >= 40.00:
            if distance >= 6868.86:
                if distance >= 8278.82:
                    return True
                else:
                    return False
            else:
                return True
        else:
            return False
    else:
        if age >= 54.50:
            if age >= 57.00:
                return True
            else:
                return False
        else:
            return True
Training: Basic algorithm
def train_decision_tree(x, y):
    feature, value = get_best_split(x, y)
    x_left, y_left = x[x[feature] < value], y[x[feature] < value]
    if len(y_left.unique()) > 1:
        left_node = train_decision_tree(x_left, y_left)
    else:
        left_node = None
    x_right, y_right = x[x[feature] >= value], y[x[feature] >= value]
    if len(y_right.unique()) > 1:
        right_node = train_decision_tree(x_right, y_right)
    else:
        right_node = None
    return Node(feature, value, left_node, right_node)
Best split
Candidate split 1

  age       18  19  21  27  29  34  38  42  49  54  62  64
  attended   F   F   T   F   T   T   F   T   F   T   T   T

  Split    attended=True  attended=False
  Left           0              1
  Right          7              4

Candidate split 2

  age       18  19  21  27  29  34  38  42  49  54  62  64
  attended   F   F   T   F   T   T   F   T   F   T   T   T

  Split    attended=True  attended=False
  Left           0              2
  Right          7              3
Best split algorithm
def get_best_split(x, y):
    best_split = None
    best_entropy = 1.
    for feature in x.columns.values:
        column = x[feature]
        for value in column:
            # Count the samples of each class on the left side of the split
            a = (y[column < value] == class_a_value).sum()
            b = (y[column < value] == class_b_value).sum()
            left_weight = (a + b) / len(y.index)
            left_entropy = entropy(a, b)
            # Count the samples of each class on the right side of the split
            a = (y[column >= value] == class_a_value).sum()
            b = (y[column >= value] == class_b_value).sum()
            right_weight = (a + b) / len(y.index)
            right_entropy = entropy(a, b)
            split_entropy = left_weight * left_entropy + right_weight * right_entropy
            if split_entropy < best_entropy:
                best_split = (feature, value)
                best_entropy = split_entropy
    return best_split
Entropy
For a given subset (note that for pure splits it is assumed that $0 \cdot \log_2 0 = 0$):

$$\text{entropy} = -\Pr_{\text{attending}} \cdot \log_2 \Pr_{\text{attending}} - \Pr_{\neg\text{attending}} \cdot \log_2 \Pr_{\neg\text{attending}} \qquad (1)$$
Entropy algorithm
import math

def entropy(a, b):
    # a and b are the number of samples of each class in the subset
    total = a + b
    prob_a = a / total
    prob_b = b / total
    # Pure splits: by convention 0 * log2(0) = 0
    if prob_a == 0. or prob_b == 0.:
        return 0.
    return (- prob_a * math.log(prob_a, 2)
            - prob_b * math.log(prob_b, 2))
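As a quick check, we can plug in the class counts from the "Best split" candidates above (a sketch; the weighted average below is the quantity get_best_split minimizes):

# Candidate split 1: left has (0 True, 1 False), right has (7 True, 4 False)
split_1 = (1 / 12) * entropy(0, 1) + (11 / 12) * entropy(7, 4)   # ~0.87
# Candidate split 2: left has (0 True, 2 False), right has (7 True, 3 False)
split_2 = (2 / 12) * entropy(0, 2) + (10 / 12) * entropy(7, 3)   # ~0.73
# Candidate 2 gives the lower weighted entropy, so it is the better split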
Information gain
For a given split:
$$\text{information\_gain} = \text{entropy}_{\text{parent}} - \left( \frac{\text{items}_{\text{left}}}{\text{items}_{\text{total}}} \cdot \text{entropy}_{\text{left}} + \frac{\text{items}_{\text{right}}}{\text{items}_{\text{total}}} \cdot \text{entropy}_{\text{right}} \right) \qquad (2)$$
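The same quantity as a small code sketch, reusing the entropy() function above (the argument layout is an assumption for illustration, not code from the talk):

def information_gain(parent_counts, left_counts, right_counts):
    # Each argument is a (count_class_a, count_class_b) pair
    items_left = sum(left_counts)
    items_right = sum(right_counts)
    items_total = items_left + items_right
    return (entropy(*parent_counts)
            - (items_left / items_total) * entropy(*left_counts)
            - (items_right / items_total) * entropy(*right_counts))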
Practical example
Etsy dataset
Features:
Feedback (number of reviews)
Age (days since shop creation)
Admirers (likes)
Items (number of products)
Response variable:
Sales (number)
Samples: 58,092
Source: www.bigml.com
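A minimal sketch of how this dataset could be loaded and a first tree fitted (the file name and lowercase column names are assumptions about the BigML export):

import pandas as pd
from sklearn.tree import DecisionTreeRegressor

df = pd.read_csv('etsy.csv')  # hypothetical export of the BigML dataset
features = ['feedback', 'age', 'admirers', 'items']

dtree = DecisionTreeRegressor()  # sales is a number, so we use the regression tree
dtree.fit(df[features], df['sales'])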
Feature distributions
Sales visualization
Feature correlation
             sales  admirers       age  feedback     items
sales     1.000000  0.458261  0.181045  0.955949  0.502423
admirers  0.458261  1.000000  0.340939  0.401995  0.268985
age       0.181045  0.340939  1.000000  0.184238  0.167535
feedback  0.955949  0.401995  0.184238  1.000000  0.458955
items     0.502423  0.268985  0.167535  0.458955  1.000000
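This matrix is the pairwise Pearson correlation of the columns; with the DataFrame from the earlier sketch it is a single call:

# Pearson correlation between every pair of columns
df[['sales', 'admirers', 'age', 'feedback', 'items']].corr()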
Correlation to sales
Decision tree visualization
Decision tree visualization (top)
Decision tree visualization (left)
Decision tree visualization (right)
What is wrong with the tree?
Only one variable used to predict in some cases
Lack of stability:
If feedback = 207 we predict 109 sales
If feedback = 208 we predict 977 sales
Our model can change dramatically if we add a few more samples to the dataset
Overfitting: We make predictions based on a single sample
Training Random Forests
Ensembles: The board of directors
Why do companies have a board of directors (BoD)?
What is best?
The best point of view
A mix of good points of view
How good was our single best tree?
What about a mix of not so good trees?
Ensemble of trees
How do we build optimized trees that are different from each other?
We need to randomize them
Samples: bagging
Features
Splits
Bagging: Uniform sampling with replacement
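The slide's figure illustrates bootstrap sampling; as a minimal sketch (reusing the assumed Etsy DataFrame df), the sample each tree is trained on looks like this:

# Draw len(df) rows uniformly at random, with replacement (a bootstrap sample)
bootstrap = df.sample(n=len(df), replace=True)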
Randomizing features
Even with bagging, trees will usually make their top splits on the same feature
The max_features parameter controls how many features each split can choose from (see the sketch below):
Regression: use all features
Classification: use the square root of the number of features
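A short sketch of these defaults in scikit-learn (the n_estimators value is illustrative):

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

# Regression: every split may consider all features
reg = RandomForestRegressor(n_estimators=100)

# Classification: each split considers sqrt(n_features) randomly chosen features
clf = RandomForestClassifier(n_estimators=100, max_features='sqrt')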
Randomizing splits
The original decision tree algorithm considers one candidate split per sample
We can consider only a random subset of the candidate splits instead
We obtain randomized trees
Performance is improved
sklearn implements ExtraTreeRegressor and ExtraTreeClassifier (see the sketch below)
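A minimal sketch of the ensemble counterpart, ExtraTreesRegressor (again assuming the Etsy df and features from the earlier sketch):

from sklearn.ensemble import ExtraTreesRegressor

# Like a random forest, but candidate split thresholds are drawn at random
etrees = ExtraTreesRegressor(n_estimators=100)
etrees.fit(df[features], df['sales'])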
What happened to our example problems?
Only one variable used to predict in some cases
We can avoid it by not using all features
Lack of stability:
Our model is smoother
Overfitting: We make predictions based on a single sample
Overfitting is mostly avoided
Random forests come with built-in validation (the out-of-bag estimate)
Smoothing
Parameter optimization (Etsy)
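The slide shows a plot of model quality across parameter values; one way such a sweep could be run (a sketch: the grid values are illustrative, not the ones used in the talk):

from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import GridSearchCV

param_grid = {'n_estimators': [10, 50, 100, 200],
              'min_samples_leaf': [1, 5, 10, 50]}
search = GridSearchCV(RandomForestRegressor(), param_grid, cv=5)
search.fit(df[features], df['sales'])
print(search.best_params_)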
Parameter optimization (Bank marketing)
Data source: [Moro et al., 2014] S. Moro, P. Cortez and P. Rita.
A Data-Driven Approach to Predict the Success of Bank Telemarketing. Decision Support Systems, Elsevier, 62:22-31, June 2014
When should we use Random Forests?
Using a CART is usually good as part of exploratory analysis
Do not use CART or RF when the problem is linear
They are especially good for ordinal (categorical but ordered) variables
Performance
CART is slower than linear models (e.g. logit)
RF trains N trees, so it takes roughly N times longer, but it parallelizes well across multiple cores
But RF is usually much faster than ANN or SVM, and results can be similar
Summary
CART is a simple but still powerful model
Visualizing trees helps us better understand our data
Ensembles usually improve the predictive power of models
Random Forests fix CART problems
Better use of features
More stability
Better generalization (RFs avoid overfitting)
Random Forests main parameters (see the sketch below):
min_samples_leaf (inherited from CART)
n_estimators
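As a closing sketch (values illustrative, not recommendations), both parameters are passed straight to the scikit-learn estimator:

from sklearn.ensemble import RandomForestRegressor

rf = RandomForestRegressor(n_estimators=100,    # number of trees in the forest
                           min_samples_leaf=5)  # each leaf keeps at least 5 samples
rf.fit(df[features], df['sales'])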
Thank you
QUESTIONS?
