GBM PACKAGE IN R
7/24/2014
Presentation Outline
• Algorithm Overview
• Basics
• How it solves problems
• Why to use it
• Deeper investigation while going through live code
What is GBM?
• Predictive modeling algorithm
• Classification & Regression
• Decision tree as a basis*
• Boosted
• Multiple weak models combined algorithmically
• Gradient boosted
• Iteratively solves residuals
• Stochastic
(some additional references on last slide)
* technically, GBM can take on other forms such as linear, but decision trees are the dominant usage,
Friedman specifically optimized for trees, and R’s implementation is internally represented as a tree
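The "iteratively solves residuals" idea can be sketched in a few lines of base R. This toy example is made up for illustration (hand-rolled stumps, simulated data); the real package grows full trees and adds stochastic subsampling, but the loop is the same: fit a weak model to the current residuals, add a shrunken copy of its predictions, repeat.

```r
# Toy gradient boosting on squared loss: each round fits a one-split "stump"
# to the residuals and adds a shrunken version of it to the ensemble.
set.seed(1)
x <- runif(200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.1)

fit_stump <- function(x, r) {
  # best single split on x minimizing squared error of the two leaf means
  cuts <- quantile(x, probs = seq(0.05, 0.95, by = 0.05))
  best <- NULL; best_sse <- Inf
  for (ct in cuts) {
    left <- r[x <= ct]; right <- r[x > ct]
    sse <- sum((left - mean(left))^2) + sum((right - mean(right))^2)
    if (sse < best_sse) {
      best_sse <- sse
      best <- list(cut = ct, left = mean(left), right = mean(right))
    }
  }
  best
}

shrinkage <- 0.1
pred <- rep(mean(y), length(y))   # start from the mean (gbm's initF for gaussian)
for (i in 1:100) {
  r <- y - pred                   # residuals = negative gradient of squared loss
  s <- fit_stump(x, r)
  pred <- pred + shrinkage * ifelse(x <= s$cut, s$left, s$right)
}
rmse_boost <- sqrt(mean((y - pred)^2))
rmse_mean  <- sqrt(mean((y - mean(y))^2))
```

Each stump is a terrible model on its own; the shrunken sum of 100 of them fits the curve far better than the mean does.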
Predictive Modeling Landscape:
General Purpose Algorithms
(for illustrative purposes only; not to scale, precise, or comprehensive; author’s perspective)
• Linear Models: Linear Models (lm), Generalized Linear Models (glm), Regularized Linear Models (glmnet)
• Decision Trees: Classification And Regression Trees (rpart), Random Forest (randomForest), Gradient Boosted Machines (gbm)
• Others: Nearest Neighbor (kNN), Neural Networks (nnet), Support Vector Machines (kernlab), Naïve Bayes (klaR), Splines (earth)
(the original slide arranged these along a complexity axis)
More Comprehensive List: http://caret.r-forge.r-project.org/modelList.html
GBM’s decision tree structure
Why GBM?
• Characteristics
• Competitive Performance
• Robust
• Loss functions
• Fast (relatively)
• Usages
• Quick modeling
• Variable selection
• Final-stage precision modeling
Competitive Performance
• Competitive with high-end algorithms such as
RandomForest
• Reliable performance
• Avoids nonsensical predictions
• Rare to produce worse predictions than simpler models
• Often in winning Kaggle solutions
• Cited within winning solution descriptions in numerous competitions, including a $3M competition
• Many of the highest ranked competitors use it frequently
• Used in 4 of 5 personal top 20 finishes
Robust
• Explicitly handles NAs
• Scaling/normalization is unnecessary
• Handles more factor levels than random forest (1024 vs. 32)
• Handles perfectly correlated independent variables
• No [known] limit to number of independent variables
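The 1024-level limit is the one robustness caveat worth checking up front. A pre-flight check can be done in base R; the data frame here is made up for illustration:

```r
# Count factor levels per column to catch anything over gbm's 1024-level limit.
df <- data.frame(a = factor(letters[1:10]), b = runif(10), c = factor(1:10))
level_counts <- sapply(df, function(col) if (is.factor(col)) nlevels(col) else 0L)
names(level_counts)[level_counts > 1024]   # factors to compress or drop before fitting
```

This is the same `sapply(xTrain, function(x) length(levels(x)))` pattern the code dump uses before removing the high-cardinality columns.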
Loss Functions
• Gaussian: squared loss
• Laplace: absolute loss
• Bernoulli: logistic, for 0/1
• Huberized: hinge, for 0/1
• Adaboost: exponential loss, for 0/1
• Multinomial: more than one class (produces probability matrix)
• Quantile: flexible alpha (e.g. optimize for 2 StDev threshold)
• Poisson: Poisson distribution, for counts
• CoxPH: Cox proportional hazard, for right-censored
• Tdist: t-distribution loss
• Pairwise: rankings (e.g. search result scoring)
• Concordant pairs
• Mean reciprocal rank
• Mean average precision
• Normalized discounted cumulative gain
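A few of these losses written out directly make the differences concrete. These function bodies are illustrative, not the package's internals; `f` is the model's prediction (on the log-odds scale for bernoulli, with `y` in {0, 1}):

```r
gaussian_loss  <- function(y, f) (y - f)^2              # squared loss
laplace_loss   <- function(y, f) abs(y - f)             # absolute loss
bernoulli_loss <- function(y, f) log(1 + exp(f)) - y * f  # negative log-likelihood
quantile_loss  <- function(y, f, alpha) {
  u <- y - f
  ifelse(u >= 0, alpha * u, (alpha - 1) * u)            # asymmetric by alpha
}
quantile_loss(10, 8, alpha = 0.9)   # under-prediction costs 9x over-prediction
```

The quantile loss shows why alpha is "flexible": alpha = 0.9 pushes the fit toward the 90th percentile rather than the mean.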
Drawbacks
• Several hyper-parameters to tune
• I typically use roughly the same parameters to start, unless I
suspect the data set might have peculiar characteristics
• For creating a final model, tuning several parameters is advisable
• Still has capacity to overfit
• Despite internal cross-validation, it is still particularly prone to
overfit ID-like columns (suggestion: withhold them)
• Can have trouble with highly noisy data
• Black box
• However, GBM package does provide tools to analyze the resulting
models
Deeper Analysis via Walkthrough
• Hyper-parameter explanations (some, not all)
• Quickly analyze performance
• Analyze influence of variables
• Peek under the hood…then follow a toy problem
For those not attending the presentation: the code at the back was run and discussed at this point. The remaining four slides were mainly meant to supplement the discussion of the code and comments, and there was not sufficient time to cover them.
Same analysis with a simpler data set
Note that one can recreate the predictions of this first tree by finding the terminal node for any prediction and using the Prediction value (the final column in the data frame). The prediction is the sum of those values across all desired trees, plus the initial value (here, the mean).
Matches predictions 1 & 3
Matches predictions 2, 4 & 5
Same analysis with a simpler data set
Explanation
1 tree built.
The tree has one decision only, at node 0.
Node 0 splits on the 3rd field (SplitVar: 2, 0-indexed): values below 1.5 (ordered values 0 & 1, i.e. a & b) go to node 1; values above 1.5 (2 & 3, i.e. c & d) go to node 2; missing values (none here) go to node 3.
Node 1 (X3 = a/b) is a terminal node (SplitVar: -1) and predicts the mean plus -0.925.
Node 2 (X3 = c/d) is a terminal node and predicts the mean plus 1.01.
Node 3 (missing) is a terminal node and predicts the mean plus 0, effectively.
Later saw that gbm1$initF will show the intercept, which in this case is the mean.
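The traversal described above can be reproduced in base R. The node table below is a hypothetical stand-in for what pretty.gbm.tree() prints (same column meanings: SplitVar is 0-indexed, -1 marks a terminal node, SplitCodePred holds the split point for internal nodes and the shrunken prediction for leaves), and the ordered factor codes are treated as plain numbers for simplicity; real categorical splits go through c.splits:

```r
# One-split tree in the pretty.gbm.tree() layout: node 0 splits variable 2
# (0-indexed) at 1.5; nodes 1/2/3 are the left/right/missing terminals.
tree <- data.frame(
  SplitVar      = c(2, -1, -1, -1),
  SplitCodePred = c(1.5, -0.925, 1.01, 0),
  LeftNode      = c(1, NA, NA, NA),
  RightNode     = c(2, NA, NA, NA),
  MissingNode   = c(3, NA, NA, NA)
)

predict_one <- function(tree, x, initF) {
  node <- 1                                # row 1 holds node 0
  while (tree$SplitVar[node] != -1) {
    v <- x[tree$SplitVar[node] + 1]        # back to 1-indexed columns
    node <- if (is.na(v)) tree$MissingNode[node] + 1
            else if (v < tree$SplitCodePred[node]) tree$LeftNode[node] + 1
            else tree$RightNode[node] + 1
  }
  initF + tree$SplitCodePred[node]         # intercept plus terminal adjustment
}

predict_one(tree, c(0, 0, 1), initF = 10)  # a/b side: 10 - 0.925 = 9.075
```

With more trees, the terminal adjustments simply accumulate on top of initF.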
gbm: fit a GBM to data (function signature with defaults)
gbm(formula = formula(data),
    distribution = "bernoulli",
    n.trees = 100,
    interaction.depth = 1,
    n.minobsinnode = 10,
    shrinkage = 0.001,
    bag.fraction = 0.5,
    train.fraction = 1.0,
    cv.folds = 0,
    weights,
    data = list(),
    var.monotone = NULL,
    keep.data = TRUE,
    verbose = "CV",
    class.stratify.cv = NULL,
    n.cores = NULL)
Effect of shrinkage & trees
Source: https://www.youtube.com/watch?v=IXZKgIsZRm0 (GBM explanation by SciKit author)
Code Dump
• The code has been copied from a text R script into PowerPoint, so the formatting isn’t great, but it should look OK if copied and pasted back out to a text file. If not, here it is on Github.
• The code shown uses a competition data set that is comparable to
real world data and uses a simple GBM to predict sale prices of
construction equipment at auction.
• A GBM model was fit against 100k rows with 45-50 variables in about
2-4 minutes during the presentation. It improves the RMSE of
prediction against the mean from ~24.5k to ~9.7k, when scored on
data the model had not seen (and future dates, so the 100k/50k splits
should be valid), with fairly stable train:test performance.
• After predictions are made and scored, some GBM utilities are used
to see which variables the model found most influential, see how the
top 2 variables are used (per factor for one; throughout a continuous
distribution for the other), and see interaction effects of specific
variable pairs.
• Note: GBM was used by my teammate and me to finish 12th out of 476 in this competition (albeit with a complex ensemble of GBMs)
Code Dump: Page1
library(Metrics) ##load evaluation package
setwd("C:/Users/Mark_Landry/Documents/K/dozer/")
##Done in advance to speed up loading of data set
train<-read.csv("Train.csv")
## Kaggle data set: http://www.kaggle.com/c/bluebook-for-bulldozers/data
train$saleTransform<-strptime(train$saledate,"%m/%d/%Y %H:%M")
train<-train[order(train$saleTransform),]
save(train,file="rTrain.Rdata")
load("rTrain.Rdata")
xTrain<-train[(nrow(train)-149999):(nrow(train)-50000),5:ncol(train)]
xTest<-train[(nrow(train)-49999):nrow(train),5:ncol(train)]
yTrain<-train[(nrow(train)-149999):(nrow(train)-50000),2]
yTest<-train[(nrow(train)-49999):nrow(train),2]
dim(xTrain); dim(xTest)
sapply(xTrain,function(x) length(levels(x)))
## check levels; gbm is robust, but still has a limit of 1024 levels per factor; remove the worst offenders for the initial model
## after iterating through the model, one would want to go back and compress these factors to investigate
## their usefulness (or do other information analysis)
xTrain$saledate<-NULL; xTest$saledate<-NULL
xTrain$fiModelDesc<-NULL; xTest$fiModelDesc<-NULL
xTrain$fiBaseModel<-NULL; xTest$fiBaseModel<-NULL
xTrain$saleTransform<-NULL; xTest$saleTransform<-NULL
Code Dump: Page2
library(gbm)
## Set up parameters to pass in; there are many more hyper-parameters available, but these are the most common to control
GBM_NTREES = 400
## 400 trees in the model; can scale back later for predictions, if desired or overfitting is suspected
GBM_SHRINKAGE = 0.05
## shrinkage is a regularization parameter dictating how fast/aggressively the algorithm moves across the loss gradient
## 0.05 is somewhat aggressive; default is 0.001; values below 0.1 tend to produce good results
## decreasing shrinkage generally improves results, but requires more trees, so the two should be adjusted in tandem
GBM_DEPTH = 4
## depth 4 means each tree will evaluate four decisions;
## will always yield [3*depth + 1] nodes and [2*depth + 1] terminal nodes (depth 4 = 9 terminal nodes)
## because each decision yields 3 nodes, but each decision comes from a prior node
GBM_MINOBS = 30
## regularization parameter dictating how many observations must be present to yield a terminal node
## a higher number means a more conservative fit; 30 is fairly high, but good for exploratory fits; default is 10
## Fit model
g<-gbm.fit(x=xTrain,y=yTrain,distribution = "gaussian",n.trees = GBM_NTREES,shrinkage = GBM_SHRINKAGE,
interaction.depth = GBM_DEPTH,n.minobsinnode = GBM_MINOBS)
## gbm fit; provide all remaining independent variables in xTrain; provide targets as yTrain;
## gaussian distribution will optimize squared loss;
Code Dump: Page3
## get predictions; first on train set, then on unseen test data
tP1 <- predict.gbm(object = g,newdata = xTrain,GBM_NTREES)
hP1 <- predict.gbm(object = g,newdata = xTest,GBM_NTREES)
## compare model performance to default (overall mean)
rmse(yTrain,tP1) ## 9452.742 on data used for training
rmse(yTest,hP1) ## 9740.559; ~3% drop on unseen data; does not seem to be overfit
rmse(yTest,mean(yTrain)) ## 24481.08 overall mean; cut error rate (from perfection) by 60%
## look at variables
summary(g) ## summary will plot and then show the relative influence of each variable to the entire GBM model (all trees)
## test dominant variable mean
library(sqldf)
trainProdClass<-as.data.frame(cbind(as.character(xTrain$fiProductClassDesc),yTrain))
testProdClass<-as.data.frame(cbind(as.character(xTest$fiProductClassDesc),yTest))
colnames(trainProdClass)<-c("fiProductClassDesc","y"); colnames(testProdClass)<-c("fiProductClassDesc","y")
ProdClassMeans<-sqldf("SELECT fiProductClassDesc, avg(y) avg, COUNT(*) n FROM trainProdClass GROUP BY fiProductClassDesc")
ProdClassPredictions<-sqldf("SELECT CASE WHEN n > 30 THEN avg ELSE 31348.63 END avg FROM ProdClassMeans P LEFT JOIN testProdClass t ON t.fiProductClassDesc = P.fiProductClassDesc")
rmse(yTest,ProdClassPredictions$avg) ## 29082.64 ? peculiar result on the fiProductClassDesc means, which seemed fairly stable and useful
##seems to say that the primary factor alone is not helpful; full tree needed
Code Dump: Page4
## Investigate actual GBM model
pretty.gbm.tree(g,1) ## show underlying model for the first decision tree
summary(xTrain[,10]) ## underlying model showed variable 9 to be the first split in the tree (9 with 0 index = 10th column)
g$initF ## view what is effectively the "y intercept"
mean(yTrain) ## equivalence shows the gaussian y intercept is the mean
t(g$c.splits[1][[1]]) ## show whether each factor level should go left or right
plot(g,10) ## plot fiProductClassDesc, the variable with the highest rel.inf
plot(g,3) ## plot YearMade, the continuous variable with the 2nd highest rel.inf
interact.gbm(g,xTrain,c(10,3)) ## compute H statistic to show interaction
interact.gbm(g,xTrain,c(10,3)) ## example of an uninteresting interaction
Selected References
• CRAN
  • Documentation
  • Vignette
• Algorithm publications
  • Greedy Function Approximation: A Gradient Boosting Machine; Friedman, 2/1999
  • Stochastic Gradient Boosting; Friedman, 3/1999
• Overviews
  • Gradient boosting machines, a tutorial; Frontiers (4/2013)
  • Wikipedia (pretty good article, really)
  • Video by the author of the scikit-learn implementation: Gradient Boosted Regression Trees in scikit-learn
    • Very helpful, but R’s implementation is not decision “stumps”, so some things differ in R (e.g. the number of trees need not be so high)
Sampling (random) method and Non random.ppt
 
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
Call Girls Indiranagar Just Call 👗 7737669865 👗 Top Class Call Girl Service B...
 
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
Call Girls Bannerghatta Road Just Call 👗 7737669865 👗 Top Class Call Girl Ser...
 

GBM package in R

Robust
• Explicitly handles NAs
• Scaling/normalization is unnecessary
• Handles more factor levels than random forest (1024 vs 32)
• Handles perfectly correlated independent variables
• No [known] limit to number of independent variables
Loss Functions
• Gaussian: squared loss
• Laplace: absolute loss
• Bernoulli: logistic, for 0/1
• Huberized: hinge, for 0/1
• Adaboost: exponential loss, for 0/1
• Multinomial: more than one class (produces probability matrix)
• Quantile: flexible alpha (e.g. optimize for 2 StDev threshold)
• Poisson: Poisson distribution, for counts
• CoxPH: Cox proportional hazard, for right-censored data
• Tdist: t-distribution loss
• Pairwise: rankings (e.g. search result scoring)
• Concordant pairs
• Mean reciprocal rank
• Mean average precision
• Normalized discounted cumulative gain
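The distribution choice matters because each loss pulls the fit toward a different statistic: gaussian toward the mean, laplace toward the median, quantile toward an arbitrary quantile. A quick brute-force check in Python (an illustration with made-up numbers, not the gbm package):

```python
# For a constant prediction c over a skewed sample, squared loss (gaussian)
# is minimized by the mean, and absolute loss (laplace) by the median.
y = [1.0, 1.0, 2.0, 2.0, 14.0]  # skewed: mean = 4.0, median = 2.0

def sq_loss(c):
    return sum((v - c) ** 2 for v in y)

def abs_loss(c):
    return sum(abs(v - c) for v in y)

grid = [i / 100 for i in range(1501)]   # candidate constants 0.00 .. 15.00
best_sq = min(grid, key=sq_loss)        # 4.0, the mean
best_abs = min(grid, key=abs_loss)      # 2.0, the median
print(best_sq, best_abs)
```

This is why switching distributions changes predictions even for similar trees: the terminal-node estimates chase different statistics.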
Drawbacks
• Several hyper-parameters to tune
• I typically use roughly the same parameters to start, unless I suspect the data set might have peculiar characteristics
• For creating a final model, tuning several parameters is advisable
• Still has the capacity to overfit
• Despite internal cross-validation, it is still particularly prone to overfitting ID-like columns (suggestion: withhold them)
• Can have trouble with highly noisy data
• Black box
• However, the GBM package does provide tools to analyze the resulting models
Deeper Analysis via Walkthrough
• Hyper-parameter explanations (some, not all)
• Quickly analyze performance
• Analyze influence of variables
• Peek under the hood…then follow a toy problem
For those not attending the presentation: the code at the back is run at this point and discussed. The remaining four slides mainly supplemented the discussion of the code and its comments, as there was not sufficient time to cover them fully.
Same analysis with a simpler data set
Note that one can recreate the predictions of this first tree by finding the terminal node for any prediction and using the Prediction value (the final column in the data frame). Those values for all desired trees, plus the initial value (the mean, for gaussian), give the prediction.
[Slide annotations: one terminal node matches predictions 1 & 3; the other matches predictions 2, 4 & 5.]
Same analysis with a simpler data set: Explanation
• 1 tree built. The tree has one decision only, node 0.
• Node 0 indicates it split the 3rd field (SplitVar: 2, zero-indexed): values below 1.5 (ordered values 0 & 1, i.e. levels a & b) went to node 1; values above 1.5 (2 & 3 = c & d) went to node 2; missing values (none here) go to node 3.
• Node 1 (X3 = a/b) is a terminal node (SplitVar: -1) and predicts the mean plus -0.925.
• Node 2 (X3 = c/d) is a terminal node and predicts the mean plus 1.01.
• Node 3 (missing) is a terminal node and effectively predicts the mean plus 0.
• Later saw that gbm1$initF will show the intercept, which in this case is the mean.
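The reconstruction described above (initF plus each tree's terminal-node value) can be mimicked by hand. Below is a Python sketch with made-up data and a single unshrunk split, an illustration of the arithmetic rather than the gbm package itself:

```python
# Gaussian loss: start from the mean (what gbm stores as initF), fit one
# split on a 4-level factor (a/b vs c/d) to the residuals, then rebuild
# each prediction as initF + the terminal node's value.
y = [1.0, 3.0, 1.5, 3.5, 3.0]
x3 = ["a", "c", "b", "d", "c"]

init_f = sum(y) / len(y)                 # the "intercept": 2.4
resid = [v - init_f for v in y]          # negative gradient of squared loss

left = [r for r, f in zip(resid, x3) if f in ("a", "b")]    # node 1
right = [r for r, f in zip(resid, x3) if f in ("c", "d")]   # node 2
node1 = sum(left) / len(left)            # mean residual on the a/b side
node2 = sum(right) / len(right)          # mean residual on the c/d side

# initF + node value reproduces every prediction, as the slide describes
pred = [init_f + (node1 if f in ("a", "b") else node2) for f in x3]
print(init_f, node1, node2, pred)
```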
gbm: fit a GBM to data (default arguments)
gbm(formula = formula(data),
    distribution = "bernoulli",
    n.trees = 100,
    interaction.depth = 1,
    n.minobsinnode = 10,
    shrinkage = 0.001,
    bag.fraction = 0.5,
    train.fraction = 1.0,
    cv.folds = 0,
    weights,
    data = list(),
    var.monotone = NULL,
    keep.data = TRUE,
    verbose = "CV",
    class.stratify.cv = NULL,
    n.cores = NULL)
Effect of shrinkage & trees
Source: https://www.youtube.com/watch?v=IXZKgIsZRm0 (GBM explanation by the author of scikit-learn's GBM)
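The trade-off shown in that video can be reproduced with a toy boosting loop. This is a hand-rolled Python illustration with made-up data (not the gbm package): shrinkage scales each gradient step, so a smaller shrinkage needs more trees to reach the same training error.

```python
# One fixed stump (split at mean(x)) boosted against squared-loss residuals.
def boost_sse(x, y, shrinkage, n_trees):
    f = [sum(y) / len(y)] * len(y)           # start at the mean
    split = sum(x) / len(x)
    for _ in range(n_trees):
        resid = [yi - fi for yi, fi in zip(y, f)]
        lo = [r for r, xi in zip(resid, x) if xi < split]
        hi = [r for r, xi in zip(resid, x) if xi >= split]
        v_lo, v_hi = sum(lo) / len(lo), sum(hi) / len(hi)
        f = [fi + shrinkage * (v_lo if xi < split else v_hi)
             for fi, xi in zip(f, x)]
    return sum((yi - fi) ** 2 for yi, fi in zip(y, f))   # training SSE

x = [1, 2, 3, 10, 11, 12]
y = [1.0, 1.2, 0.8, 5.0, 5.2, 4.8]
full = boost_sse(x, y, 1.0, 5)      # full steps: converges immediately
slow = boost_sse(x, y, 0.1, 5)      # small steps, few trees: underfits
many = boost_sse(x, y, 0.1, 100)    # small steps, many trees: catches up
print(full, slow, many)
```

In practice the small-steps/many-trees combination generalizes better, which is why the walkthrough pairs shrinkage 0.05 with 400 trees.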
Code Dump
• The code has been copied from a text R script into PowerPoint, so the formatting isn’t great, but it should look OK when copied and pasted back out to a text file. If not, here it is on Github.
• The code uses a competition data set that is comparable to real-world data and fits a simple GBM to predict sale prices of construction equipment at auction.
• A GBM model was fit against 100k rows with 45-50 variables in about 2-4 minutes during the presentation. It improves the RMSE of prediction against the mean from ~24.5k to ~9.7k when scored on data the model had not seen (and future dates, so the 100k/50k splits should be valid), with fairly stable train:test performance.
• After predictions are made and scored, some GBM utilities are used to see which variables the model found most influential, how the top 2 variables are used (per factor for one; throughout a continuous distribution for the other), and the interaction effects of specific variable pairs.
• Note: GBM was used by my teammate and me to finish 12th out of 476 in this competition (albeit with a complex ensemble of GBMs)
Code Dump: Page 1
library(Metrics) ## load evaluation package
setwd("C:/Users/Mark_Landry/Documents/K/dozer/")
## Done in advance to speed up loading of the data set:
train<-read.csv("Train.csv")
## Kaggle data set: http://www.kaggle.com/c/bluebook-for-bulldozers/data
train$saleTransform<-strptime(train$saledate,"%m/%d/%Y %H:%M")
train<-train[order(train$saleTransform),]
save(train,file="rTrain.Rdata")
load("rTrain.Rdata")
xTrain<-train[(nrow(train)-149999):(nrow(train)-50000),5:ncol(train)]
xTest<-train[(nrow(train)-49999):nrow(train),5:ncol(train)]
yTrain<-train[(nrow(train)-149999):(nrow(train)-50000),2]
yTest<-train[(nrow(train)-49999):nrow(train),2]
dim(xTrain); dim(xTest)
sapply(xTrain,function(x) length(levels(x)))
## check levels; gbm is robust, but still has a limit of 1024 per factor; for the initial model, remove
## after iterating through the model, would want to go back and compress these factors to investigate
## their usefulness (or other information analysis)
xTrain$saledate<-NULL; xTest$saledate<-NULL
xTrain$fiModelDesc<-NULL; xTest$fiModelDesc<-NULL
xTrain$fiBaseModel<-NULL; xTest$fiBaseModel<-NULL
xTrain$saleTransform<-NULL; xTest$saleTransform<-NULL
Code Dump: Page 2
library(gbm)
## Set up parameters to pass in; there are many more hyper-parameters available,
## but these are the most common to control
GBM_NTREES = 400
## 400 trees in the model; can scale back later for predictions, if desired or if overfitting is suspected
GBM_SHRINKAGE = 0.05
## shrinkage is a regularization parameter dictating how fast/aggressively the algorithm moves down the loss gradient
## 0.05 is somewhat aggressive; the default is 0.001; values below 0.1 tend to produce good results
## decreasing shrinkage generally improves results but requires more trees, so the two should be adjusted in tandem
GBM_DEPTH = 4
## depth 4 means each tree will evaluate four decisions;
## will always yield [3*depth + 1] nodes and [2*depth + 1] terminal nodes (depth 4 = 9 terminal nodes)
## because each decision yields 3 nodes, but each decision comes from a prior node
GBM_MINOBS = 30
## regularization parameter dictating how many observations must be present to yield a terminal node
## a higher number means a more conservative fit; 30 is fairly high, but good for exploratory fits; the default is 10
## Fit model
g<-gbm.fit(x=xTrain, y=yTrain, distribution="gaussian", n.trees=GBM_NTREES,
           shrinkage=GBM_SHRINKAGE, interaction.depth=GBM_DEPTH, n.minobsinnode=GBM_MINOBS)
## gbm fit; provide all remaining independent variables in xTrain; provide targets as yTrain;
## gaussian distribution will optimize squared loss
Code Dump: Page 3
## get predictions; first on the train set, then on unseen test data
tP1 <- predict.gbm(object = g, newdata = xTrain, GBM_NTREES)
hP1 <- predict.gbm(object = g, newdata = xTest, GBM_NTREES)
## compare model performance to the default (overall mean)
rmse(yTrain,tP1) ## 9452.742 on data used for training
rmse(yTest,hP1) ## 9740.559; ~3% drop on unseen data; does not seem to be overfit
rmse(yTest,mean(yTrain)) ## 24481.08 for the overall mean; cut error rate (from perfection) by 60%
## look at variables
summary(g) ## summary will plot and then show the relative influence of each variable to the entire GBM model (all trees)
## test dominant variable mean
library(sqldf)
trainProdClass<-as.data.frame(cbind(as.character(xTrain$fiProductClassDesc),yTrain))
testProdClass<-as.data.frame(cbind(as.character(xTest$fiProductClassDesc),yTest))
colnames(trainProdClass)<-c("fiProductClassDesc","y"); colnames(testProdClass)<-c("fiProductClassDesc","y")
ProdClassMeans<-sqldf("SELECT fiProductClassDesc, avg(y) avg, COUNT(*) n FROM trainProdClass GROUP BY fiProductClassDesc")
ProdClassPredictions<-sqldf("SELECT case when n > 30 then avg ELSE 31348.63 end avg
  FROM ProdClassMeans P LEFT JOIN testProdClass t ON t.fiProductClassDesc = P.fiProductClassDesc")
rmse(yTest,ProdClassPredictions$avg) ## 29082.64? peculiar result on the fiProductClassDesc means, which seemed fairly stable and useful
## seems to say that the primary factor alone is not helpful; the full tree is needed
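For readers following along without the Metrics package, the rmse calls above reduce to a one-liner. A Python sketch with invented numbers (not the bulldozer data) showing the model-vs-mean comparison the script makes:

```python
def rmse(actual, predicted):
    # accept a constant baseline (like mean(yTrain)) or a prediction vector
    if isinstance(predicted, (int, float)):
        predicted = [predicted] * len(actual)
    n = len(actual)
    return (sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n) ** 0.5

y_test = [10.0, 12.0, 9.0, 11.0]
model_preds = [10.5, 11.5, 9.5, 10.5]
baseline = 8.0                       # stand-in for mean(yTrain)
print(rmse(y_test, model_preds))     # 0.5
print(rmse(y_test, baseline))        # ~2.74: the model beats the mean baseline
```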
Code Dump: Page 4
## Investigate the actual GBM model
pretty.gbm.tree(g,1) ## show the underlying model for the first decision tree
summary(xTrain[,10]) ## the underlying model showed variable 9 to be the first point in the tree (9 with 0 index = 10th column)
g$initF ## view what is effectively the "y intercept"
mean(yTrain) ## equivalence shows the gaussian y intercept is the mean
t(g$c.splits[1][[1]]) ## show whether each factor level should go left or right
plot(g,10) ## plot fiProductClassDesc, the variable with the highest rel.inf
plot(g,3) ## plot YearMade, the continuous variable with the 2nd highest rel.inf
interact.gbm(g,xTrain,c(10,3)) ## compute H statistic to show interaction; integrates
interact.gbm(g,xTrain,c(10,3)) ## example of an uninteresting interaction
Selected References
• CRAN
• Documentation
• vignette
• Algorithm publications:
• Greedy Function Approximation: A Gradient Boosting Machine; Friedman, 2/99
• Stochastic Gradient Boosting; Friedman, 3/99
• Overviews
• Gradient boosting machines, a tutorial; Frontiers (4/13)
• Wikipedia (a pretty good article, really)
• Video by the author of GBM in Python: Gradient Boosted Regression Trees in scikit-learn
• Very helpful, but the R implementation does not use decision “stumps”, so some things differ in R (e.g. the number of trees need not be so high)