Exploratory Data Analysis
Using XGBoost
1st R study session @ Sendai (#Sendai.R)
Who?
Someone working in the clinical laboratory testing business
Specialty?
Nomadic pastoralism in Mongolia (ecology / environmental science)
▼
A research institute in the clinical laboratory testing business (a job of reshaping data from long to wide and back)
@kato_kohaku
Exploratory Data Analysis (EDA)
https://www.itl.nist.gov/div898/handbook/eda/section1/eda11.htm
is an approach/philosophy for data analysis that employs a variety of
techniques (mostly graphical) to
1. maximize insight into a data set;
2. uncover underlying structure;
3. extract important variables;
4. detect outliers and anomalies;
5. test underlying assumptions;
6. develop parsimonious models; and
7. determine optimal factor settings.
EDA (or explanation) after modelling
Taxonomy of Interpretation / Explanation
https://christophm.github.io/interpretable-ml-book/
EDA using Random Forest (EDARF)
Exploratory data analysis using randomForest (off-topic)
Random Forest model
Imputation for missing values
• rfImpute()
• {missForest}
Rule Extraction
• {inTrees}
• defragTrees@python
• edarf::plot_prox()
• getTree()
Feature importance
• Gini / Accuracy
• Permutation based
Sensitivity analysis
• Partial Dependence Plot (PDP)
• Feature-contribution based {forestFloor}
Suggestion
• Feature Tweaking
Today’s topic
Model-Specific Methods
• Intrinsic: Linear Regression; Logistic Regression; GLM, GAM and more; Decision Tree; Decision Rules; RuleFit; Naive Bayes Classifier; K-Nearest Neighbors
• Post hoc: Feature Importance (OOB error @RF; gain/cover/weight @XGB); Feature Contribution (forestFloor @RF, xgboostExplainer, lightgbmExplainer); Alternate / Enumerate lasso (@LASSO); inTrees / defragTrees (@RF/XGB); Actionable feature tweaking (@RF/XGB)
Model-Agnostic Methods
• Post hoc (also applicable to intrinsically interpretable models): Partial Dependence Plot; Individual Conditional Expectation; Accumulated Local Effects Plot; Feature Interaction; Permutation Feature Importance; Global Surrogate; Local Explanation (LIME, Shapley Values, breakDown)
Example-based Explanations
• Counterfactual Explanations; Adversarial Examples; Prototypes and Criticisms; Influential Instances
EDA × XGBoost
Why EDA × XGBoost (or LightGBM)?
Motivation
https://twitter.com/fchollet/status/1113476428249464833?s=19
Decision tree, Random Forest & Gradient Boosting
Overview
https://www.kdnuggets.com/2017/10/understanding-machine-learning-algorithms.html
http://www.cse.chalmers.se/~richajo/dit866/lectures/l8/gb_explainer.pdf
Gradient Boosting
Gradient Boosting & XGBoost
Overview
http://www.yisongyue.com/courses/cs155/2019_winter/lectures/Lecture_06.pdf
https://www.kdd.org/kdd2016/papers/files/rfp0697-chenAemb.pdf
XGBoost’s Improvements:
• Overfitting suppression
• Split-finding efficiency
• Computation time
EDA using XGBoost
Exploratory data analysis using XGBoost
XGBoost model
Rule Extraction
• xgb.model.dt.tree()
• {inTrees}
• defragTrees@python
Feature importance
• Gain & Cover
• Permutation based
Summarize explanation
• Clustering of observations
• Variable response (2)
• Feature interaction
Suggestion
• Feature Tweaking
Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
Variable response (1)
• PDP / ICE / ALE
EDA (or explanation) using XGBoost
1. Build XGBoost model
2. Feature importance
• Gain & Cover
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP/ICE/ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
• defragTrees@python
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
Today’s Topic
Suggestion (off-topic)
• Feature Tweaking
To Get ALL the Sample Codes
Please see github:
• https://github.com/katokohaku/EDAxgboost
1. BUILDING THE XGBOOST MODEL
1. Dataset
1. Check the basic profile of each variable (type, definition, information, structure, etc.)
2. Preprocessing (variable transformation, train/test splitting and sampling, data conversion)
2. Set the task and the evaluation metric
1. Classification? Regression (what kind)? Clustering? Something else?
2. Accuracy, error, AUC, or something else?
3. Hyper-parameter settings
1. Whether or not to run a parameter search
2. Which parameters? Which search method?
4. Evaluate the trained model
1. Predictive accuracy, prediction characteristics (bias tendencies), etc.
https://github.com/katokohaku/EDAxgboost/blob/master/100_building_xgboost_model.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
Human Resources Analytics Data Set
Preparation
• left (target to predict)
• Whether the employee left the workplace or not (1 or 0); treated as a factor
• satisfaction_level
• Level of satisfaction (0-1)
• last_evaluation
• Time since last performance evaluation (in Years)
• number_project
• Number of projects completed while at work
• average_montly_hours
• Average monthly hours at workplace
• time_spend_company
• Number of years spent in the company
• Work_accident
• Whether the employee had a workplace accident
• promotion_last_5years
• Whether the employee was promoted in the last five years
• Sales
• Department in which they work
• Salary
• Relative level of salary (low / medium / high)
Source
https://github.com/ryankarlos/Human-Resource-Analytics-Kaggle-Dataset/tree/master/Original_Kaggle_Dataset
Take a glance
Preparation
• GGally::ggpairs()
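A minimal sketch of this step (the CSV file name and the factor handling are assumptions; the exact preprocessing is in the repo linked above):

```r
library(GGally)  # attaches ggplot2 as a dependency

# Hypothetical file name for the Kaggle HR dataset
HR <- read.csv("HR_comma_sep.csv", stringsAsFactors = TRUE)
HR$left <- factor(HR$left)  # target: 0 = stayed, 1 = left

# Pairwise plot matrix: scatter plots, correlations and densities,
# colored by the target
GGally::ggpairs(HR, mapping = aes(colour = left), progress = FALSE)
```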
+ Random Noise
Make continuous features noisy in the same way as:
• https://medium.com/applied-data-science/new-r-package-the-xgboost-explainer-51dd7d1aa211
Preparation
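A sketch of what that post does; the column names and the noise scale are assumptions (the point is that a little jitter breaks ties and makes the explainer's plots easier to read):

```r
# Add small uniform noise to each continuous feature (scale is a guess)
num_cols <- c("satisfaction_level", "last_evaluation", "average_montly_hours")
set.seed(1)
for (cl in num_cols) {
  HR[[cl]] <- HR[[cl]] + runif(nrow(HR), min = -0.05, max = 0.05) * sd(HR[[cl]])
}
```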
Baseline profile: table1::table1()
Convert Train / Test set to xgb.DMatrix
Preparation
1. Factor variables → integer (or dummy) codes
2. Split into train set / test set (+ under-sampling)
3. (data.frame →) matrix → xgb.DMatrix
Convert Train / Test set to xgb.DMatrix
(to minimize the intercept of the xgb model)
• Factor → Integer
• Separate the train set (+ under-sampling) → convert to xgb.DMatrix
• Separate the test set → convert to xgb.DMatrix
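A runnable sketch of the three steps above (the 70/30 split ratio is an assumption; the repo under-samples the majority class, which the plain random split here only approximates):

```r
library(xgboost)

# 1. Factor variables -> integer codes (xgb.DMatrix needs a numeric matrix)
HR_num <- as.data.frame(lapply(HR, function(x) {
  if (is.factor(x)) as.integer(x) - 1L else x
}))

# 2. Split into train / test sets
set.seed(1)
idx      <- sample(nrow(HR_num), size = floor(0.7 * nrow(HR_num)))
train_df <- HR_num[idx, ]
test_df  <- HR_num[-idx, ]

# 3. (data.frame ->) matrix -> xgb.DMatrix, label kept separate
train_mx <- as.matrix(train_df[, setdiff(names(train_df), "left")])
test_mx  <- as.matrix(test_df[,  setdiff(names(test_df),  "left")])
dtrain   <- xgb.DMatrix(train_mx, label = train_df$left)
dtest    <- xgb.DMatrix(test_mx,  label = test_df$left)
```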
Hyper-parameter settings
Preparation
• According to:
https://xgboost.readthedocs.io/en/latest/parameter.html
• Tune with grid / random / Bayesian-optimization search etc., if you like
(recommendation: use {mlr}).
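For concreteness, a plausible parameter list for this binary task (the values are illustrative defaults, not the tuned values from the repo):

```r
params <- list(
  booster          = "gbtree",
  objective        = "binary:logistic",  # predicting `left`
  eval_metric      = "auc",
  eta              = 0.1,   # learning rate
  max_depth        = 5,
  subsample        = 0.8,
  colsample_bytree = 0.8
)
```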
Search optimal number of booster
Build XGBoost model
• Using cross-validation: xgb.cv()
Build XGBoost model: xgb.cv()
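A sketch of this step: cross-validate to pick the number of rounds, then refit on the full training set (the fold count and stopping patience are assumptions):

```r
# Find the optimal number of boosters by 5-fold CV with early stopping
cv <- xgb.cv(params = params, data = dtrain, nrounds = 500,
             nfold = 5, early_stopping_rounds = 20, verbose = 0)

# Refit with the selected number of rounds, monitoring the test set
bst <- xgb.train(params = params, data = dtrain,
                 nrounds   = cv$best_iteration,
                 watchlist = list(train = dtrain, eval = dtest),
                 verbose   = 0)
```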
Predictive performances
• For test set
Distribution of Prediction
Predictive performances
2. PROFILING THE TRAINED XGBOOST MODEL
1. Feature importance in prediction
1. Structure-based importance (Gain & Cover): xgb.importance()
2. Permutation-based importance: DALEX::variable_importance()
https://github.com/katokohaku/EDAxgboost/blob/master/100_building_xgboost_model.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
xgb.importance()
Feature importance
For a tree model:
Gain
• represents the fractional contribution of each feature to the model, based on the
total gain of this feature's splits. A higher percentage means a more important
predictive feature.
Cover
• a metric of the number of observations related to this feature.
Frequency
• the percentage representing the relative number of times a feature has been
used in trees.
For a linear model's importance:
Weight
• the linear coefficient of the feature.
https://www.rdocumentation.org/packages/xgboost/versions/0.6.4.1/topics/xgb.importance
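Usage is a one-liner; a minimal sketch against the model built above:

```r
# Structure-based importance: Gain / Cover / Frequency per feature
imp <- xgb.importance(feature_names = colnames(train_mx), model = bst)
head(imp)
xgb.plot.importance(imp)  # bar chart, ordered by Gain
```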
Feature importance (structure based)
1. Calculate the weight of each node as if the tree were not split further
2. Distribute the weight differences to each node
3. Accumulate the weights along the path passed by each observation, for each
booster and for each feature (node)
Feature importance (structure based)
Feature importance
Gain
• represents the fractional contribution of each feature to the model, based on the
total gain of this feature's splits. A higher percentage means a more important
predictive feature.
https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf
The gain of the i-th feature at the k-th node in the j-th booster is calculated as
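(The formula itself was an image in the original deck; reconstructing it from the referenced slides, the gain of a single split is shown below, where G and H are the sums of first- and second-order gradients in the left/right child and λ, γ are the regularization terms.)

```latex
\mathrm{Gain} \;=\;
  \frac{1}{2}\left[
      \frac{G_L^{2}}{H_L+\lambda}
    + \frac{G_R^{2}}{H_R+\lambda}
    - \frac{(G_L+G_R)^{2}}{H_L+H_R+\lambda}
  \right] \;-\; \gamma
```

Summing this quantity over every node at which feature i is used to split, across all boosters, and normalizing gives the Gain importance reported by xgb.importance().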
Feature importance (permutation based)
Feature importance
• Calculated as the increase in the model’s prediction error after
permuting the feature.
• A feature is “important” if shuffling its values increases the model error,
because in this case the model relied on the feature for the prediction.
https://christophm.github.io/interpretable-ml-book/feature-importance.html
FROM: https://www.kaggle.com/dansbecker/permutation-importance
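A sketch with {DALEX}, as named on the agenda (the API shown matches older DALEX releases; newer versions call this model_parts()):

```r
library(DALEX)

# Wrap the model and data; predict() on an xgb.Booster accepts a matrix
explainer <- DALEX::explain(bst, data = test_mx, y = test_df$left,
                            label = "xgboost")

# Loss after shuffling each feature, relative to the baseline loss
vi <- DALEX::variable_importance(explainer, type = "raw")
plot(vi)
```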
Structure based vs Permutation based
Feature Importance
Structure based Permutation based
Use them as a consistency check, rather than to ask "which is better?".
Feature Importance
3. SENSITIVITY ANALYSIS (1)
1. Model response to changes in a variable's value
1. Individual Conditional Expectation & Partial Dependence Plot (ICE & PD plot)
2. Problems with PDP
3. Accumulated Local Effects (ALE) Plot
https://github.com/katokohaku/EDAxgboost/blob/master/200_Sensitivity_analysis.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
Marginal Response for a Single Variable
Sensitivity Analysis: ICE+PDP vs ALE Plot
Variable response comparison:
ICE+PD Plot
ALE Plot
What-If & other observation (ICE) + average line (PD)
Ceteris Paribus Plots (blue line)
• show possible scenarios for model predictions allowing for changes in a single
dimension keeping all other features constant (the ceteris paribus principle).
Individual Conditional Expectation (ICE) plot (gray lines)
• visualizes one line per instance.
Partial Dependence plot (red line)
• is the average line over all observations.
https://christophm.github.io/interpretable-ml-book/ice.html
(Figure: model output plotted against feature value.)
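One way to draw these in R is the {pdp} package, which supports xgb.Booster objects; a sketch under the variable names used earlier (the deck's own figures come from the linked Rmd):

```r
library(pdp)

# Centered ICE curves plus their average (the PD line) for one feature;
# prob = TRUE puts the y-axis on the probability scale
pdp::partial(bst, pred.var = "satisfaction_level", train = train_mx,
             ice = TRUE, center = TRUE, prob = TRUE,
             plot = TRUE, plot.engine = "ggplot2")
```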
The assumption of independence
• is the biggest issue with Partial Dependence plots. When features are correlated,
PD creates new data points in areas of the feature distribution where the actual
probability is very low.
Disadvantage of Ceteris Paribus Plots and PDP
https://christophm.github.io/interpretable-ml-book/pdp.html#disadvantages-5
For example, it is unlikely that someone is 2 meters tall but weighs less than 50 kg.
A Solution
Local Effect
• averages the derivatives of observations over the conditional distribution, instead of
averaging over the marginal distribution of the target feature.
Accumulated Local Effects (ALE)
• accumulates the local effects, averaged within each window, across the windows.
https://arxiv.org/abs/1612.08468
(Figure: Local Effects computed per window; ALE = mean(Local Effects).)
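{ALEPlot} (by the paper's authors) computes this given a prediction wrapper; a minimal sketch, where the feature choice and the window count K are assumptions:

```r
library(ALEPlot)

# ALEPlot calls pred.fun(X.model, newdata) and expects a numeric prediction
pred_fun <- function(X.model, newdata) predict(X.model, as.matrix(newdata))

ALEPlot::ALEPlot(X = as.data.frame(train_mx), X.model = bst,
                 pred.fun = pred_fun,
                 J = which(colnames(train_mx) == "satisfaction_level"),
                 K = 40)  # number of windows along the feature
```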
Sensitivity Analysis: ICE+PDP & ALE Plot
Sensitivity Analysis: ICE+PDP vs ALE Plot
4-1. TREE VISUALIZATION AND RULE SUMMARIZATION
1. Tree visualization
1. Dump the boosters: xgb.model.dt.tree()
2. Visualize a single booster: xgb.plot.tree()
3. Visualize a summarized tree: xgb.plot.multi.trees()
2. Extraction of prediction rules (inTrees)
1. Enumerate the rules
2. Summarize the rules
https://github.com/katokohaku/EDAxgboost/blob/master/300_rule_extraction_xgbPlots.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
Text dump of the tree model structure
Rule Extraction: xgb.model.dt.tree()
• Parses a boosted tree model into a data.table structure.
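A minimal sketch:

```r
# One row per node/leaf, with booster id, split feature, gain, cover, etc.
dt <- xgb.model.dt.tree(feature_names = colnames(train_mx), model = bst)
head(dt)
```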
Rule Extraction: plot a boosted tree model (1st tree)
Rule Extraction: plot a boosted tree model (2nd tree)
Rule Extraction: plot a multiple-tree model
Rule Extraction: multiple-in-one plot
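The plots above correspond to the following calls (a sketch; tree indices are 0-based, and features_keep is an illustrative value):

```r
xgb.plot.tree(model = bst, trees = 0)  # 1st booster
xgb.plot.tree(model = bst, trees = 1)  # 2nd booster

# Collapse all boosters into one summarized tree, keeping the
# most frequent splitting features per position
xgb.plot.multi.trees(model = bst, features_keep = 3)
```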
4-2. TREE VISUALIZATION AND RULE SUMMARIZATION
1. Tree visualization
1. Dump the boosters: xgb.model.dt.tree()
2. Visualize a single booster: xgb.plot.tree()
3. Visualize a summarized tree: xgb.plot.multi.trees()
2. Extraction of prediction rules (inTrees)
1. Enumerate the rules
2. Summarize the rules
https://github.com/katokohaku/EDAxgboost/blob/master/300_rule_extraction_xgbPlots.Rmd
Extract rules from an ensemble of trees
Rule Extraction: {inTrees}
https://arxiv.org/abs/1408.5456
• Using inTrees
Enumerate rules from an ensemble of trees
Rule Extraction: {inTrees}
Build a simplified tree ensemble learner (STEL)
Rule Extraction: {inTrees}
All of the sample code is at:
https://github.com/katokohaku/EDAxgboost/blob/master/310_rule_extraction_inTrees.md
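A sketch of the {inTrees} workflow shown on these slides, using its XGB2List() converter (variable names follow the earlier snippets; it mirrors the package vignette rather than the repo verbatim):

```r
library(inTrees)

X      <- as.data.frame(train_mx)
target <- factor(train_df$left)

treeList   <- XGB2List(bst, as.matrix(X))        # boosters -> tree list
ruleExec   <- unique(extractRules(treeList, X))  # enumerate rules
ruleMetric <- getRuleMetric(ruleExec, X, target) # frequency / error / length
ruleMetric <- pruneRule(ruleMetric, X, target)   # drop redundant conditions
presentRules(ruleMetric, colnames(X))[1:5, ]     # human-readable conditions

# Simplified tree ensemble learner (STEL) from the pruned rules
learner <- buildLearner(ruleMetric, X, target)
```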
5-1. PROFILING BASED ON FEATURE CONTRIBUTIONS
1. Explaining individual observations (prediction breakdown)
1. Shapley value: predict(..., predcontrib = TRUE, approxcontrib = FALSE)
2. Structure based: predict(..., predcontrib = TRUE, approxcontrib = TRUE)
3. Dimensionality reduction of the observations based on their predictions
4. Grouping by clustering
5. Visualizing the observations within each group
https://github.com/katokohaku/EDAxgboost/blob/master/400_breakdown_individual-explanation_and_clustering.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
Shapley value
A method for assigning payouts to players depending on their contribution to
the total payout. Players cooperate in a coalition and receive a certain profit
from this cooperation.
The “game”
• is the prediction task for a single instance of the dataset.
The “gain”
• is the actual prediction for this instance minus the average prediction for all instances.
The “players”
• are the feature values of the instance that collaborate to receive the gain (= predict a
certain value).
• https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
• https://christophm.github.io/interpretable-ml-book/shapley.html
Feature contribution based on cooperative game theory
Shapley value
Shapley value is the average of all the marginal contributions
to all possible coalitions.
• One solution to keep the computation time manageable is to compute
contributions for only a few samples of the possible coalitions.
• https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf
• https://christophm.github.io/interpretable-ml-book/shapley.html
Feature contribution based on cooperative game theory
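For tree ensembles, xgboost computes these contributions directly through predict(); a minimal sketch using the arguments named on the section slide:

```r
# Exact per-observation Shapley values: one column per feature plus BIAS;
# each row sums to that observation's margin (log-odds) prediction
shap <- predict(bst, test_mx, predcontrib = TRUE)

# Faster structure-based approximation of the same breakdown
approx <- predict(bst, test_mx, predcontrib = TRUE, approxcontrib = TRUE)
head(shap)
```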
Shapley value
Breakdown individual explanation path
Feature contribution based on tree structure
Based on the xgboost model structure,
1. Calculate the weight of each node as if the tree were not split further
2. Distribute the weight differences to each node
3. Accumulate the weights along the path passed by each observation, for each
booster and for each feature (node)
Feature contribution based on tree structure
To get prediction path
Feature contribution based on tree structure
Individual explanation path
Enumerate feature contributions based on Shapley values / tree structure
Each row explains each observation (prediction breakdown)
Explain single observation
Individual explanation:
Each row explains each observation (prediction breakdown)
5-2. PROFILING BASED ON FEATURE CONTRIBUTIONS
1. Explaining individual observations (prediction breakdown)
1. Shapley value: predict(..., predcontrib = TRUE, approxcontrib = FALSE)
2. Structure based: predict(..., predcontrib = TRUE, approxcontrib = TRUE)
3. Dimensionality reduction of the observations based on their predictions
4. Grouping by clustering
5. Visualizing the observations within each group
https://github.com/katokohaku/EDAxgboost/blob/master/400_breakdown_individual-explanation_and_clustering.Rmd
Identify clusters based on xgboost
Clustering the feature contributions of each observation using t-SNE
• Dimension reduction using t-SNE
Dimension reduction: Rtsne::Rtsne()
Identify clusters based on xgboost
Rtsne::Rtsne() → hclust() → cutree() → ggrepel::geom_label_repel()
• Class labeling using hierarchical clustering (hclust)
Scatter plot with group label
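A sketch of that pipeline (the perplexity, cluster count k, and centroid labeling are assumptions):

```r
library(Rtsne)
library(ggplot2)
library(ggrepel)

# Embed the per-observation contribution matrix in 2D
set.seed(1)
ts  <- Rtsne::Rtsne(shap, perplexity = 30, check_duplicates = FALSE)
emb <- data.frame(x = ts$Y[, 1], y = ts$Y[, 2])

# Group the embedded points by hierarchical clustering
emb$cluster <- factor(cutree(hclust(dist(ts$Y)), k = 6))

# Scatter plot with one repelled label per cluster centroid
centers <- aggregate(cbind(x, y) ~ cluster, data = emb, FUN = mean)
ggplot(emb, aes(x, y, colour = cluster)) +
  geom_point(alpha = 0.5) +
  ggrepel::geom_label_repel(data = centers, aes(label = cluster),
                            colour = "black")
```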
Similar observations in a cluster (1)
Individual explanation
Similar observations in a cluster (2)
Individual explanation
Individual explanation
https://github.com/katokohaku/EDAxgboost/blob/master/R/waterfallBreakdown.R
6. SENSITIVITY ANALYSIS BASED ON FEATURE CONTRIBUTIONS
1. Model response to changes in a variable's value (sensitivity analysis), part 2
1. Shapley value: predict(..., predcontrib = TRUE, approxcontrib = FALSE)
2. Structure based: predict(..., predcontrib = TRUE, approxcontrib = TRUE)
https://github.com/katokohaku/EDAxgboost/blob/master/410_breakdown_feature_response-interaction.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
Individual explanation path
Individual explanation
Each column explains each feature impact (variable response)
Individual Feature Impact (1)
Sensitivity Analysis
Each column explains each feature impact (variable response)
Individual Feature Impact (2-1)
Sensitivity Analysis
Each column explains each feature impact (variable response)
Individual Feature Impact (2-2)
Sensitivity Analysis
Each column explains each feature impact (variable response)
Contribution dependency plots
Sensitivity Analysis
xgb.plot.shap()
• displays the estimated contributions (Shapley values) of a feature to the model
prediction for each individual case.
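A minimal call (the feature choice is an assumption):

```r
# SHAP dependence plots: each feature's contribution vs. its value
xgb.plot.shap(data = test_mx, model = bst,
              features = c("satisfaction_level", "last_evaluation"))
```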
Feature Impact Summary
Sensitivity Analysis
http://www.f1-predictor.com/model-interpretability-with-shap/
Similar to SHAPR,
• contribution breakdown from prediction path (model structure).
6. INTERACTION ANALYSIS BASED ON CONTRIBUTIONS
1. Interactions between variables
1. Strength of the 2-way interaction between variables: predict(..., predinteraction = TRUE)
https://github.com/katokohaku/EDAxgboost/blob/master/410_breakdown_feature_response-interaction.Rmd
EDA (or explanation) after modelling
1. Build XGBoost model
2. Feature importance
• Structure based (Gain & Cover)
• Permutation based
3. Variable response (1)
• Partial Dependence Plot (PDP / ICE / ALE)
4. Rule Extraction
• xgb.model.dt.tree()
• inTrees
5. Individual explanation
• Shapley value (predcontrib)
• Structure based (approxcontrib)
6. Variable response (2)
• Shapley value (predcontrib)
• Structure based (approxcontrib)
7. Feature interaction
• 2-way SHAP (predinteraction)
EDA tools for XGBoost
Suggestion (off-topic)
• Feature Tweaking
Feature interaction of single observation
• Feature contribution can be decomposed as 2-way feature interaction.
Feature interaction
2-way feature interaction:
Feature contribution for feature contribution
Individual explanation
Each row shows breakdown of contribution
Feature interaction of single observation
• xgboost:::predict.xgb.Booster(..., predinteraction = TRUE)
xgboost:::predict.xgb.Booster(..., predinteraction = TRUE)
Individual explanation
Feature contribution for feature contribution of single instance
Absolute mean of all interaction
• SHAP can be decomposed as 2-way feature interaction.
xgboost:::predict.xgb.Booster(..., predinteraction = TRUE)
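A sketch of both uses: the per-observation interaction array, and its absolute mean as a global interaction-strength matrix:

```r
# n_obs x (p+1) x (p+1) array (features plus BIAS); for each observation
# the interaction matrix sums to its margin prediction
inter <- predict(bst, test_mx, predinteraction = TRUE)
dim(inter)

# Global 2-way interaction strength: absolute mean over observations
strength <- apply(abs(inter), c(2, 3), mean)
round(strength, 3)
```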
xgboost
Original Paper
• https://www.kdd.org/kdd2016/subtopic/view/xgboost-a-scalable-tree-boosting-system
Tasks, Metrics & other Parameters
• https://xgboost.readthedocs.io/en/latest/
For R
• http://dmlc.ml/rstats/2016/03/10/xgboost.html
• https://xgboost.readthedocs.io/en/latest/R-package/xgboostPresentation.html
• https://xgboost.readthedocs.io/en/latest/R-package/discoverYourData.html
Explanatory blog posts and slides (in Japanese)
• http://kefism.hatenablog.com/entry/2017/06/11/182959
• https://speakerdeck.com/hoxomaxwell/dive-into-xgboost
References
References
Data & Model explanation
Generic interpretability/explainability
• Interpretable Machine Learning book
• https://christophm.github.io/interpretable-ml-book/
Exploratory Data Analysis (EDA)
• What is EDA?
• https://www.itl.nist.gov/div898/handbook/eda/section1/eda11.htm
• DALEX
• Descriptive mAchine Learning EXplanations
• https://pbiecek.github.io/DALEX/
• DrWhy
• the collection of tools for Explainable AI (XAI)
• https://pbiecek.github.io/DALEX/
References
More Related Content

What's hot

勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)
勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)
勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)
RyuichiKanoh
 
基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法
基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法
基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法
Ken'ichi Matsui
 
ベイズ最適化によるハイパラーパラメータ探索
ベイズ最適化によるハイパラーパラメータ探索ベイズ最適化によるハイパラーパラメータ探索
ベイズ最適化によるハイパラーパラメータ探索
西岡 賢一郎
 
Imputation of Missing Values using Random Forest
Imputation of Missing Values using  Random ForestImputation of Missing Values using  Random Forest
Imputation of Missing Values using Random Forest
Satoshi Kato
 
一般化線形混合モデル入門の入門
一般化線形混合モデル入門の入門一般化線形混合モデル入門の入門
一般化線形混合モデル入門の入門
Yu Tamura
 
データ解析入門
データ解析入門データ解析入門
データ解析入門
Takeo Noda
 
3. Vertex AIを用いた時系列データの解析
3. Vertex AIを用いた時系列データの解析3. Vertex AIを用いた時系列データの解析
3. Vertex AIを用いた時系列データの解析
幸太朗 岩澤
 
最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...
最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...
最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...
ケンタ タナカ
 
数式を使わずイメージで理解するEMアルゴリズム
数式を使わずイメージで理解するEMアルゴリズム数式を使わずイメージで理解するEMアルゴリズム
数式を使わずイメージで理解するEMアルゴリズム裕樹 奥田
 
ランダムフォレスト
ランダムフォレストランダムフォレスト
ランダムフォレスト
Kinki University
 
Data assim r
Data assim rData assim r
Data assim r
Xiangze
 
SEMを用いた縦断データの解析 潜在曲線モデル
SEMを用いた縦断データの解析 潜在曲線モデルSEMを用いた縦断データの解析 潜在曲線モデル
SEMを用いた縦断データの解析 潜在曲線モデル
Masaru Tokuoka
 
テーブル・テキスト・画像の反実仮想説明
テーブル・テキスト・画像の反実仮想説明テーブル・テキスト・画像の反実仮想説明
テーブル・テキスト・画像の反実仮想説明
tmtm otm
 
Kaggleのテクニック
KaggleのテクニックKaggleのテクニック
Kaggleのテクニック
Yasunori Ozaki
 
SIGIR2011読み会 3. Learning to Rank
SIGIR2011読み会 3. Learning to RankSIGIR2011読み会 3. Learning to Rank
SIGIR2011読み会 3. Learning to Rank
sleepy_yoshi
 
状態空間モデルの考え方・使い方 - TokyoR #38
状態空間モデルの考え方・使い方 - TokyoR #38状態空間モデルの考え方・使い方 - TokyoR #38
状態空間モデルの考え方・使い方 - TokyoR #38horihorio
 
StanとRでベイズ統計モデリング読書会 導入編(1章~3章)
StanとRでベイズ統計モデリング読書会 導入編(1章~3章)StanとRでベイズ統計モデリング読書会 導入編(1章~3章)
StanとRでベイズ統計モデリング読書会 導入編(1章~3章)
Hiroshi Shimizu
 
学習時に使ってはいないデータの混入「リーケージを避ける」
学習時に使ってはいないデータの混入「リーケージを避ける」学習時に使ってはいないデータの混入「リーケージを避ける」
学習時に使ってはいないデータの混入「リーケージを避ける」
西岡 賢一郎
 
21世紀の手法対決 (MIC vs HSIC)
21世紀の手法対決 (MIC vs HSIC)21世紀の手法対決 (MIC vs HSIC)
21世紀の手法対決 (MIC vs HSIC)
Toru Imai
 
DARM勉強会第3回 (missing data analysis)
DARM勉強会第3回 (missing data analysis)DARM勉強会第3回 (missing data analysis)
DARM勉強会第3回 (missing data analysis)
Masaru Tokuoka
 

What's hot (20)

勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)
勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)
勾配ブースティングの基礎と最新の動向 (MIRU2020 Tutorial)
 
基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法
基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法
基礎からのベイズ統計学 輪読会資料 第4章 メトロポリス・ヘイスティングス法
 
ベイズ最適化によるハイパラーパラメータ探索
ベイズ最適化によるハイパラーパラメータ探索ベイズ最適化によるハイパラーパラメータ探索
ベイズ最適化によるハイパラーパラメータ探索
 
Imputation of Missing Values using Random Forest
Imputation of Missing Values using  Random ForestImputation of Missing Values using  Random Forest
Imputation of Missing Values using Random Forest
 
一般化線形混合モデル入門の入門
一般化線形混合モデル入門の入門一般化線形混合モデル入門の入門
一般化線形混合モデル入門の入門
 
データ解析入門
データ解析入門データ解析入門
データ解析入門
 
3. Vertex AIを用いた時系列データの解析
3. Vertex AIを用いた時系列データの解析3. Vertex AIを用いた時系列データの解析
3. Vertex AIを用いた時系列データの解析
 
最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...
最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...
最高の統計ソフトウェアはどれか? "What’s the Best Statistical Software? A Comparison of R, Py...
 
数式を使わずイメージで理解するEMアルゴリズム
数式を使わずイメージで理解するEMアルゴリズム数式を使わずイメージで理解するEMアルゴリズム
数式を使わずイメージで理解するEMアルゴリズム
 
ランダムフォレスト
ランダムフォレストランダムフォレスト
ランダムフォレスト
 
Data assim r
Data assim rData assim r
Data assim r
 
SEMを用いた縦断データの解析 潜在曲線モデル
SEMを用いた縦断データの解析 潜在曲線モデルSEMを用いた縦断データの解析 潜在曲線モデル
SEMを用いた縦断データの解析 潜在曲線モデル
 
テーブル・テキスト・画像の反実仮想説明
テーブル・テキスト・画像の反実仮想説明テーブル・テキスト・画像の反実仮想説明
テーブル・テキスト・画像の反実仮想説明
 
Kaggleのテクニック
KaggleのテクニックKaggleのテクニック
Kaggleのテクニック
 
SIGIR2011読み会 3. Learning to Rank
SIGIR2011読み会 3. Learning to RankSIGIR2011読み会 3. Learning to Rank
SIGIR2011読み会 3. Learning to Rank
 
状態空間モデルの考え方・使い方 - TokyoR #38
状態空間モデルの考え方・使い方 - TokyoR #38状態空間モデルの考え方・使い方 - TokyoR #38
状態空間モデルの考え方・使い方 - TokyoR #38
 
StanとRでベイズ統計モデリング読書会 導入編(1章~3章)
StanとRでベイズ統計モデリング読書会 導入編(1章~3章)StanとRでベイズ統計モデリング読書会 導入編(1章~3章)
StanとRでベイズ統計モデリング読書会 導入編(1章~3章)
 
学習時に使ってはいないデータの混入「リーケージを避ける」
学習時に使ってはいないデータの混入「リーケージを避ける」学習時に使ってはいないデータの混入「リーケージを避ける」
学習時に使ってはいないデータの混入「リーケージを避ける」
 
21世紀の手法対決 (MIC vs HSIC)
21世紀の手法対決 (MIC vs HSIC)21世紀の手法対決 (MIC vs HSIC)
21世紀の手法対決 (MIC vs HSIC)
 
DARM勉強会第3回 (missing data analysis)
DARM勉強会第3回 (missing data analysis)DARM勉強会第3回 (missing data analysis)
DARM勉強会第3回 (missing data analysis)
 

Similar to Exploratory data analysis using xgboost package in R

Ember
EmberEmber
Ember
mrphilroth
 
모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로
모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로 모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로
모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로
r-kor
 
MLBox
MLBoxMLBox
R user group meeting 25th jan 2017
R user group meeting 25th jan 2017R user group meeting 25th jan 2017
R user group meeting 25th jan 2017
Garrett Teoh Hor Keong
 
226 team project-report-manjula kollipara
226 team project-report-manjula kollipara226 team project-report-manjula kollipara
226 team project-report-manjula kollipara
Manjula Kollipara
 
Kaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learnt
Kaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learntKaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learnt
Kaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learnt
Eugene Yan Ziyou
 
Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)
Dmitry Grapov
 
ProFET - Protein Feature Engineering Toolki
ProFET - Protein Feature Engineering ToolkiProFET - Protein Feature Engineering Toolki
ProFET - Protein Feature Engineering Toolki
Dan Ofer
 
Spock
SpockSpock
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San JoseThe Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
Nikolay Samokhvalov
 
Cutting edge hyperparameter tuning made simple with ray tune
Cutting edge hyperparameter tuning made simple with ray tuneCutting edge hyperparameter tuning made simple with ray tune
Cutting edge hyperparameter tuning made simple with ray tune
XiaoweiJiang7
 
10 Reasons to Start Your Analytics Project with PostgreSQL
10 Reasons to Start Your Analytics Project with PostgreSQL10 Reasons to Start Your Analytics Project with PostgreSQL
10 Reasons to Start Your Analytics Project with PostgreSQL
Satoshi Nagayasu
 
AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)
Joaquin Vanschoren
 
Understanding GBM and XGBoost in Scikit-Learn
Understanding GBM and XGBoost in Scikit-LearnUnderstanding GBM and XGBoost in Scikit-Learn
Understanding GBM and XGBoost in Scikit-Learn
철민 권
 
Building a Unified Data Pipeline with Apache Spark and XGBoost with Nan Zhu
Building a Unified Data Pipeline with Apache Spark and XGBoost with Nan ZhuBuilding a Unified Data Pipeline with Apache Spark and XGBoost with Nan Zhu
Building a Unified Data Pipeline with Apache Spark and XGBoost with Nan Zhu
Databricks
 
Go Faster With Native Compilation
Go Faster With Native CompilationGo Faster With Native Compilation
Go Faster With Native Compilation
PGConf APAC
 
Go faster with_native_compilation Part-2
Go faster with_native_compilation Part-2Go faster with_native_compilation Part-2
Go faster with_native_compilation Part-2
Rajeev Rastogi (KRR)
 
Go Faster With Native Compilation
Go Faster With Native CompilationGo Faster With Native Compilation
Go Faster With Native Compilation
Rajeev Rastogi (KRR)
 

Similar to Exploratory data analysis using xgboost package in R (20)

Ember
EmberEmber
Ember
 
모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로
모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로 모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로
모듈형 패키지를 활용한 나만의 기계학습 모형 만들기 - 회귀나무모형을 중심으로
 
MLBox
MLBoxMLBox
MLBox
 
R user group meeting 25th jan 2017
R user group meeting 25th jan 2017R user group meeting 25th jan 2017
R user group meeting 25th jan 2017
 
226 team project-report-manjula kollipara
226 team project-report-manjula kollipara226 team project-report-manjula kollipara
226 team project-report-manjula kollipara
 
Kaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learnt
Kaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learntKaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learnt
Kaggle Otto Challenge: How we achieved 85th out of 3,514 and what we learnt
 
Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)Metabolomic Data Analysis Workshop and Tutorials (2014)
Metabolomic Data Analysis Workshop and Tutorials (2014)
 
ProFET - Protein Feature Engineering Toolki
ProFET - Protein Feature Engineering ToolkiProFET - Protein Feature Engineering Toolki
ProFET - Protein Feature Engineering Toolki
 
DB
DBDB
DB
 
Spock
SpockSpock
Spock
 
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San JoseThe Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
The Art of Database Experiments – PostgresConf Silicon Valley 2018 / San Jose
 
Cutting edge hyperparameter tuning made simple with ray tune
Cutting edge hyperparameter tuning made simple with ray tuneCutting edge hyperparameter tuning made simple with ray tune
Cutting edge hyperparameter tuning made simple with ray tune
 
10 Reasons to Start Your Analytics Project with PostgreSQL
10 Reasons to Start Your Analytics Project with PostgreSQL10 Reasons to Start Your Analytics Project with PostgreSQL
10 Reasons to Start Your Analytics Project with PostgreSQL
 
AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)AutoML lectures (ACDL 2019)
AutoML lectures (ACDL 2019)
 
Understanding GBM and XGBoost in Scikit-Learn
Understanding GBM and XGBoost in Scikit-LearnUnderstanding GBM and XGBoost in Scikit-Learn
Understanding GBM and XGBoost in Scikit-Learn
 
Building a Unified Data Pipeline with Apache Spark and XGBoost with Nan Zhu
Building a Unified Data Pipeline with Apache Spark and XGBoost with Nan ZhuBuilding a Unified Data Pipeline with Apache Spark and XGBoost with Nan Zhu
Building a Unified Data Pipeline with Apache Spark and XGBoost with Nan Zhu
 
[ppt]
[ppt][ppt]
[ppt]
 
Go Faster With Native Compilation
Go Faster With Native CompilationGo Faster With Native Compilation
Go Faster With Native Compilation
 
Go faster with_native_compilation Part-2
Go faster with_native_compilation Part-2Go faster with_native_compilation Part-2
Go faster with_native_compilation Part-2
 
Go Faster With Native Compilation
Go Faster With Native CompilationGo Faster With Native Compilation
Go Faster With Native Compilation
 

More from Satoshi Kato

How to generate PowerPoint slides Non-manually using R
How to generate PowerPoint slides Non-manually using RHow to generate PowerPoint slides Non-manually using R
How to generate PowerPoint slides Non-manually using R
Satoshi Kato
 
Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages.
Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages. Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages.
Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages.
Satoshi Kato
 
How to use in R model-agnostic data explanation with DALEX & iml
How to use in R model-agnostic data explanation with DALEX & imlHow to use in R model-agnostic data explanation with DALEX & iml
How to use in R model-agnostic data explanation with DALEX & iml
Satoshi Kato
 
Introduction of inspectDF package
Introduction of inspectDF packageIntroduction of inspectDF package
Introduction of inspectDF package
Satoshi Kato
 
Introduction of featuretweakR package
Introduction of featuretweakR packageIntroduction of featuretweakR package
Introduction of featuretweakR package
Satoshi Kato
 
Genetic algorithm full scratch with R
Genetic algorithm full scratch with RGenetic algorithm full scratch with R
Genetic algorithm full scratch with R
Satoshi Kato
 
Intoroduction & R implementation of "Interpretable predictions of tree-based ...
Intoroduction & R implementation of "Interpretable predictions of tree-based ...Intoroduction & R implementation of "Interpretable predictions of tree-based ...
Intoroduction & R implementation of "Interpretable predictions of tree-based ...
Satoshi Kato
 
Multiple optimization and Non-dominated sorting with rPref package in R
Multiple optimization and Non-dominated sorting with rPref package in RMultiple optimization and Non-dominated sorting with rPref package in R
Multiple optimization and Non-dominated sorting with rPref package in R
Satoshi Kato
 
Deep forest (preliminary ver.)
Deep forest  (preliminary ver.)Deep forest  (preliminary ver.)
Deep forest (preliminary ver.)
Satoshi Kato
 
Introduction of "the alternate features search" using R
Introduction of  "the alternate features search" using RIntroduction of  "the alternate features search" using R
Introduction of "the alternate features search" using R
Satoshi Kato
 
forestFloorパッケージを使ったrandomForestの感度分析
forestFloorパッケージを使ったrandomForestの感度分析forestFloorパッケージを使ったrandomForestの感度分析
forestFloorパッケージを使ったrandomForestの感度分析
Satoshi Kato
 
Oracle property and_hdm_pkg_rigorouslasso
Oracle property and_hdm_pkg_rigorouslassoOracle property and_hdm_pkg_rigorouslasso
Oracle property and_hdm_pkg_rigorouslasso
Satoshi Kato
 
Interpreting Tree Ensembles with inTrees
Interpreting Tree Ensembles with  inTreesInterpreting Tree Ensembles with  inTrees
Interpreting Tree Ensembles with inTrees
Satoshi Kato
 

More from Satoshi Kato (13)

How to generate PowerPoint slides Non-manually using R
How to generate PowerPoint slides Non-manually using RHow to generate PowerPoint slides Non-manually using R
How to generate PowerPoint slides Non-manually using R
 
Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages.
Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages. Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages.
Dimensionality reduction with t-SNE(Rtsne) and UMAP(uwot) using R packages.
 
How to use in R model-agnostic data explanation with DALEX & iml
How to use in R model-agnostic data explanation with DALEX & imlHow to use in R model-agnostic data explanation with DALEX & iml
How to use in R model-agnostic data explanation with DALEX & iml
 
Introduction of inspectDF package
Introduction of inspectDF packageIntroduction of inspectDF package
Introduction of inspectDF package
 
Introduction of featuretweakR package
Introduction of featuretweakR packageIntroduction of featuretweakR package
Introduction of featuretweakR package
 
Genetic algorithm full scratch with R
Genetic algorithm full scratch with RGenetic algorithm full scratch with R
Genetic algorithm full scratch with R
 
Intoroduction & R implementation of "Interpretable predictions of tree-based ...
Intoroduction & R implementation of "Interpretable predictions of tree-based ...Intoroduction & R implementation of "Interpretable predictions of tree-based ...
Intoroduction & R implementation of "Interpretable predictions of tree-based ...
 
Multiple optimization and Non-dominated sorting with rPref package in R
Multiple optimization and Non-dominated sorting with rPref package in RMultiple optimization and Non-dominated sorting with rPref package in R
Multiple optimization and Non-dominated sorting with rPref package in R
 
Deep forest (preliminary ver.)
Deep forest  (preliminary ver.)Deep forest  (preliminary ver.)
Deep forest (preliminary ver.)
 
Introduction of "the alternate features search" using R
Introduction of  "the alternate features search" using RIntroduction of  "the alternate features search" using R
Introduction of "the alternate features search" using R
 
forestFloorパッケージを使ったrandomForestの感度分析
forestFloorパッケージを使ったrandomForestの感度分析forestFloorパッケージを使ったrandomForestの感度分析
forestFloorパッケージを使ったrandomForestの感度分析
 
Oracle property and_hdm_pkg_rigorouslasso
Oracle property and_hdm_pkg_rigorouslassoOracle property and_hdm_pkg_rigorouslasso
Oracle property and_hdm_pkg_rigorouslasso
 
Interpreting Tree Ensembles with inTrees
Interpreting Tree Ensembles with  inTreesInterpreting Tree Ensembles with  inTrees
Interpreting Tree Ensembles with inTrees
 

Recently uploaded

Adjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTESAdjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTES
Subhajit Sahu
 
Analysis insight about a Flyball dog competition team's performance
Analysis insight about a Flyball dog competition team's performanceAnalysis insight about a Flyball dog competition team's performance
Analysis insight about a Flyball dog competition team's performance
roli9797
 
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
John Andrews
 
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
Timothy Spann
 
Learn SQL from basic queries to Advance queries
Learn SQL from basic queries to Advance queriesLearn SQL from basic queries to Advance queries
Learn SQL from basic queries to Advance queries
manishkhaire30
 
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
sameer shah
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
axoqas
 
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
axoqas
 
Adjusting OpenMP PageRank : SHORT REPORT / NOTES
Adjusting OpenMP PageRank : SHORT REPORT / NOTESAdjusting OpenMP PageRank : SHORT REPORT / NOTES
Adjusting OpenMP PageRank : SHORT REPORT / NOTES
Subhajit Sahu
 
Global Situational Awareness of A.I. and where its headed
Global Situational Awareness of A.I. and where its headedGlobal Situational Awareness of A.I. and where its headed
Global Situational Awareness of A.I. and where its headed
vikram sood
 
The Building Blocks of QuestDB, a Time Series Database
The Building Blocks of QuestDB, a Time Series DatabaseThe Building Blocks of QuestDB, a Time Series Database
The Building Blocks of QuestDB, a Time Series Database
javier ramirez
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
ahzuo
 
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdf
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdfUnleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdf
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdf
Enterprise Wired
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
jerlynmaetalle
 
Data_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptx
Data_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptxData_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptx
Data_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptx
AnirbanRoy608946
 
Everything you wanted to know about LIHTC
Everything you wanted to know about LIHTCEverything you wanted to know about LIHTC
Everything you wanted to know about LIHTC
Roger Valdez
 
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
74nqk8xf
 
My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.
rwarrenll
 
一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理
一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理
一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理
mbawufebxi
 
办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样
办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样
办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样
apvysm8
 

Recently uploaded (20)

Adjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTESAdjusting primitives for graph : SHORT REPORT / NOTES
Adjusting primitives for graph : SHORT REPORT / NOTES
 
Analysis insight about a Flyball dog competition team's performance
Analysis insight about a Flyball dog competition team's performanceAnalysis insight about a Flyball dog competition team's performance
Analysis insight about a Flyball dog competition team's performance
 
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
Chatty Kathy - UNC Bootcamp Final Project Presentation - Final Version - 5.23...
 
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
06-04-2024 - NYC Tech Week - Discussion on Vector Databases, Unstructured Dat...
 
Learn SQL from basic queries to Advance queries
Learn SQL from basic queries to Advance queriesLearn SQL from basic queries to Advance queries
Learn SQL from basic queries to Advance queries
 
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
STATATHON: Unleashing the Power of Statistics in a 48-Hour Knowledge Extravag...
 
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
哪里卖(usq毕业证书)南昆士兰大学毕业证研究生文凭证书托福证书原版一模一样
 
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
做(mqu毕业证书)麦考瑞大学毕业证硕士文凭证书学费发票原版一模一样
 
Adjusting OpenMP PageRank : SHORT REPORT / NOTES
Adjusting OpenMP PageRank : SHORT REPORT / NOTESAdjusting OpenMP PageRank : SHORT REPORT / NOTES
Adjusting OpenMP PageRank : SHORT REPORT / NOTES
 
Global Situational Awareness of A.I. and where its headed
Global Situational Awareness of A.I. and where its headedGlobal Situational Awareness of A.I. and where its headed
Global Situational Awareness of A.I. and where its headed
 
The Building Blocks of QuestDB, a Time Series Database
The Building Blocks of QuestDB, a Time Series DatabaseThe Building Blocks of QuestDB, a Time Series Database
The Building Blocks of QuestDB, a Time Series Database
 
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
一比一原版(UIUC毕业证)伊利诺伊大学|厄巴纳-香槟分校毕业证如何办理
 
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdf
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdfUnleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdf
Unleashing the Power of Data_ Choosing a Trusted Analytics Platform.pdf
 
The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...The affect of service quality and online reviews on customer loyalty in the E...
The affect of service quality and online reviews on customer loyalty in the E...
 
Data_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptx
Data_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptxData_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptx
Data_and_Analytics_Essentials_Architect_an_Analytics_Platform.pptx
 
Everything you wanted to know about LIHTC
Everything you wanted to know about LIHTCEverything you wanted to know about LIHTC
Everything you wanted to know about LIHTC
 
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
一比一原版(Coventry毕业证书)考文垂大学毕业证如何办理
 
My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.My burning issue is homelessness K.C.M.O.
My burning issue is homelessness K.C.M.O.
 
一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理
一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理
一比一原版(Bradford毕业证书)布拉德福德大学毕业证如何办理
 
办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样
办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样
办(uts毕业证书)悉尼科技大学毕业证学历证书原版一模一样
 

Exploratory data analysis using xgboost package in R

  • 1. Exploratory DataAnalysis Using XGBoost XGBoost を使った探索的データ分析 第1回 R勉強会@仙台(#Sendai.R)
  • 3. Exploratory Data Analysis (EDA) https://www.itl.nist.gov/div898/handbook/eda/section1/eda11.htm is an approach/philosophy for data analysis that employs a variety of techniques (mostly graphical) to 1. maximize insight into a data set; 2. uncover underlying structure; 3. extract important variables; 4. detect outliers and anomalies; 5. test underlying assumptions; 6. develop parsimonious models; and 7. determine optimal factor settings.
  • 4. EDA (or explanation) after modelling Taxonomy of Interpretation / Explanation https://christophm.github.io/interpretable-ml-book/
  • 5. EDA using Random Forest (EDARF) randomForest を使った探索的データ分析 (off-topic) Random Forest model Imputation for missing  rfimpute()  {missForest} Rule Extraction  {intrees}  defragTrees@python  EDARF::plot_prox()  getTree() Feature importance  Gini / Accuracy  Permutation based Sensitivity analysis  Partial Dependence Plot (PDP)  feature contribution based {forestFloor} Suggestion  Feature Tweaking
  • 6. Today’s topic Intrinsic Post hoc Model-Specific Methods • Linear Regression • Logistic Regression • GLM, GAM and more • Decision Tree • Decision Rules • RuleFit • Naive Bayes Classifier • K-Nearest Neighbors • Feature Importance (OOB error@RF; gain/cover/weight @XGB) • Feature Contribution (forestFloor@RF, XGBoostexplainer, lightgbmExplainer) • Alternate / Enumerate lasso (@LASSO) • inTrees / defragTrees (@RF/XGB) • Actionable feature tweaking (@RF/XGB) Model- Agnostic Methods Intrinsic interpretable Model にも適用可能 • Partial Dependence Plot • Individual Conditional Expectation • Accumulated Local Effects Plot • Feature Interaction • Permutation Feature Importance • Global Surrogate • Local Explanation (LIME, Shapley Values, breakDown) Example- based Explanations ?? • Counterfactual Explanations • Adversarial Examples • Prototypes and Criticisms • Influential Instances EDA × XGBoost
  • 7. Why EDA × XGBoost (or LightGBM)? Motivation https://twitter.com/fchollet/status/1113476428249464833?s=19
  • 8. Decision tree, Random Forest & Gradient Boosting Overview https://www.kdnuggets.com/2017/10/understanding-machine-learning-algorithms.html http://www.cse.chalmers.se/~richajo/dit866/lectures/l8/gb_explainer.pdf Gradient Boosting
  • 9. Gradient Boosting & XGBoost Overview http://www.yisongyue.com/courses/cs155/2019_winter/lectures/Lecture_06.pdf https://www.kdd.org/kdd2016/papers/files/rfp0697-chenAemb.pdf XGBoost’s Improvements:  Overfitting suppression  Split finding efficiency  Computation time
  • 10. EDA using XGBoost XGBoost を使った探索的データ分析 XGBoost model Rule Extraction  Xgb.model.dt.tree()  {intrees}  defragTrees@python Feature importance  Gain & Cover  Permutation based Summarize explanation  Clustering of observations  Variable response (2)  Feature interaction Suggestion  Feature Tweaking Individual explanation  Shapley value (predcontrib)  Structure based (predapprox) Variable response (1)  PDP / ICE / ALE
  • 11. EDA (or explanation) using XGBoost 1. Build XGBoost model 2. Feature importance • Gain & Cover • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP/ICE/ALE) 4. Rule Extraction • Xgb.model.dt.tree() • intrees • defragTrees@python 5. Individual explanation • Shapley value (predcontrib) • Structure based (predapprox) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (predapprox) 7. Feature interaction • 2-way SHAP (predinteraction) URL Today’s Topic Suggestion(off topic)  Feature Tweaking
  • 12. To Get ALL the Sample Codes Please see github: • https://github.com/katokohaku/EDAxgboost
  • 13. 1.XGBOOST MODELの構築 1. データセット 1. 変数の基本プロファイルの確認(型、定義、情報、構造、etc) 2. 前処理(変数変換、教師/テストへの分割・サンプリング、 データ変換) 2. タスクと評価指標の設定 1. 分類問題? 回帰問題(回帰の種類)? クラスタリング? その他? 2. 正確度、誤差、AUC、その他? 3. ハイパーパラメタの設定 1. パラメターサーチする・しない 2. どのパラメータ?、探索の方法? 4. 学習済みモデルの評価 1. 予測精度、予測特性(バイアス傾向)、その他 https://github.com/katokohaku/EDAxgboost/blob/master/100_building_xgboost_model.Rmd
  • 14. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • Xgb.model.dt.tree() • intrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (predapprox) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (predapprox) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion(off topic)  Feature Tweaking
  • 15. Human Resources Analytics Data Set Preparation • left (target to predict) • Whether the employee left the workplace or not (1 or 0) Factor • satisfaction_level • Level of satisfaction (0-1) • last_evaluation • Time since last performance evaluation (in Years) • number_project • Number of projects completed while at work • average_montly_hours • Average monthly hours at workplace • time_spend_company • Number of years spent in the company • Work_accident • Whether the employee had a workplace accident • promotion_last_5years • Whether the employee was promoted in the last five years • Sales • Department in which they work for • Salary • Relative level of salary (high) Source https://github.com/ryankarlos/Human-Resource-Analytics-Kaggle-Dataset/tree/master/Original_Kaggle_Dataset
  • 16. Take a glance Preparation • GGally::ggpairs()
  • 17. + Random Noise Make continuous features noisy with the same way as: • https://medium.com/applied-data-science/new-r-package-the-xgboost-explainer-51dd7d1aa211 Preparation
  • 19. Convert Train / Test set to xgb.DMatrix Preparation 1. Factor variable → Integer (or dummy) 2. Separate trainset / testset (+under sampling) 3. (data.frame →) matrix → xgb.DMatrix
  • 20. Convert Train / Test set to xgb.DMatrix To minimize the intercept of xgb model Factor → Integer Separate train set (+under sampling) Convert xgb.DMatrix Separate test set Convert xgb.DMatrix
  • 21. Hyper-parameter settings Preparation • According to: https://xgboost.readthedocs.io/en/latest/parameter.html • Tune with Grid/Random/BayesOpt. etc., if you like. (Recommendation: using mlR)
  • 22. Search optimal number of booster Build XGBoost model • Using cross-validation : xgb.cv()
  • 26. 2.学習したXGBOOST MODELのプロファイル 1. 予測における特徴量の重要度 (feature importance) 1. Structure based importance(Gain & Cover): xgb.importance() 2. Permutation based importance: DALEX::variable_importance() URL https://github.com/katokohaku/EDAxgboost/blob/master/100_building_xgboost_model.Rmd
  • 27. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • Xgb.model.dt.tree() • intrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (predapprox) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (predapprox) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion(off topic)  Feature Tweaking
  • 28. xgb.importance() Feature importance For a tree model: Gain • represents fractional contribution of each feature to the model based on the total gain of this feature's splits. Higher percentage means a more important predictive feature. Cover • metric of the number of observation related to this feature; Frequency • percentage representing the relative number of times a feature have been used in trees. For a linear model's importance: Weight • the linear coefficient of the feature; https://www.rdocumentation.org/packages/xgboost/versions/0.6.4.1/topics/xgb.importance
  • 29. Feature importance (structure based) Calculates weight when not split further for each node 1. Distribute weight differences to each node 2. Accumulate the weight of the path passed by each observation, for each booster for each feature (node)
  • 30. Feature importance (structure based) Feature importance Gain • represents fractional contribution of each feature to the model based on the total gain of this feature's splits. Higher percentage means a more important predictive feature. https://homes.cs.washington.edu/~tqchen/pdf/BoostedTree.pdf Gain of ith feature at kth node in jth booster is calculated as
  • 31. Feature importance (permutation based) Feature importance • Calculating the increase in the model’s prediction error after permuting the feature. • A feature is “important” if shuffling its values increases the model error, because in this case the model relied on the feature for the prediction. https://christophm.github.io/interpretable-ml-book/feature-importance.html FROM: https://www.kaggle.com/dansbecker/permutation-importance
  • 32. Structure based vs Permutation based Feature Importance Structure based Permutation based For consistency check, rather than for "which is better?“.
  • 34. 3.感度分析(1) 1. 変数値の変化に対するモデル出力の応答 1. Individual Conditional Expectation & Partial Dependence Plot (ICE & PD plot) 2. PDPの問題点 3. Accumulated Local Effect (ALE) Plot URL https://github.com/katokohaku/EDAxgboost/blob/master/200_Sensitivity_analysis.Rmd
  • 35. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • Xgb.model.dt.tree() • intrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (predapprox) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (predapprox) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion(off topic)  Feature Tweaking
• 36. Marginal Response for a Single Variable. Sensitivity Analysis: ICE+PD Plot vs ALE Plot (side-by-side comparison of the two plot types).
• 37. What-If & other observations (ICE) + average line (PD) Ceteris Paribus Plots (blue line) • show possible scenarios for model predictions, allowing changes in a single dimension while keeping all other features constant (the ceteris paribus principle). Individual Conditional Expectation (ICE) plot (gray lines) • visualizes one line per instance. Partial Dependence plot (red line) • is the average line over all observations. https://christophm.github.io/interpretable-ml-book/ice.html (axes: feature value vs model output)
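A hedged sketch of ICE + PD curves using the pdp package (one implementation option, not necessarily the deck's; "age" is a hypothetical feature name):

```r
library(pdp)

# ICE curves, one per instance; center = TRUE anchors them at the left edge
p <- partial(bst, pred.var = "age", ice = TRUE, center = TRUE,
             train = as.matrix(X), prob = TRUE)

# Gray ICE lines plus their red average, i.e. the partial dependence line
plotPartial(p)
```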
• 38. The assumption of independence • is the biggest issue with Partial Dependence plots. When features are correlated, PD creates new data points in areas of the feature distribution where the actual probability is very low. Disadvantage of Ceteris Paribus Plots and PDP https://christophm.github.io/interpretable-ml-book/pdp.html#disadvantages-5 For example, it is unlikely that someone is 2 meters tall but weighs less than 50 kg.
• 39. A Solution Local Effect • averages the derivative of the prediction over observations from the conditional distribution in a window, instead of averaging over the whole marginal distribution of the target feature. Accumulated Local Effects (ALE) • accumulates these local effects across windows after they are calculated per window. https://arxiv.org/abs/1612.08468 (figure: local effects per window; ALE = mean(Local Effects))
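A hedged sketch using the ALEPlot package (an implementation choice; the deck does not specify one). J selects the feature by column index:

```r
library(ALEPlot)

# Prediction wrapper in the form ALEPlot expects: f(model, newdata) -> numeric
pfun <- function(X.model, newdata) predict(X.model, as.matrix(newdata))

# ALE of the first feature, estimated over K = 40 windows of its distribution
ALEPlot(as.data.frame(X), bst, pred.fun = pfun, J = 1, K = 40)
```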
• 42. 4-1. Tree visualization and rule summarization 1. Tree visualization 1. Dump the boosters: xgb.model.dt.tree() 2. Visualize a single booster: xgb.plot.tree() 3. Visualize a summarized tree: xgb.plot.multi.trees() 2. Prediction rule extraction (inTrees) 1. Enumerate rules 2. Summarize rules URL https://github.com/katokohaku/EDAxgboost/blob/master/300_rule_extraction_xgbPlots.Rmd
• 43. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • xgb.model.dt.tree() • inTrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (approxcontrib) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (approxcontrib) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion (off-topic)  Feature Tweaking
• 44. Text dump of the tree model structure. Rule Extraction: xgb.model.dt.tree() • Parses a boosted tree model into a data.table structure.
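A minimal sketch of the dump (same hypothetical `bst` as before):

```r
# One row per node: Tree, Node, Feature, Split, Yes/No/Missing, Quality, Cover
dt <- xgb.model.dt.tree(model = bst)
head(dt)

# Leaf rows carry the leaf weight in Quality; internal rows carry the split gain
dt[Feature == "Leaf"]
```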
  • 45. Plot a boosted tree model (1st tree) Rule Extraction URL
  • 46. Plot a boosted tree model (2nd tree) Rule Extraction URL
  • 47. Plot multiple tree model Rule Extraction URL
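Sketches of the three plotting calls shown above (note that trees are 0-indexed):

```r
# Single boosters
xgb.plot.tree(model = bst, trees = 0)   # 1st tree
xgb.plot.tree(model = bst, trees = 1)   # 2nd tree

# Ensemble collapsed into one summarized tree, keeping the 5 most-used features
xgb.plot.multi.trees(model = bst, features_keep = 5)
```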
• 49. 4-2. Tree visualization and rule summarization 1. Tree visualization 1. Dump the boosters: xgb.model.dt.tree() 2. Visualize a single booster: xgb.plot.tree() 3. Visualize a summarized tree: xgb.plot.multi.trees() 2. Prediction rule extraction (inTrees) 1. Enumerate rules 2. Summarize rules URL https://github.com/katokohaku/EDAxgboost/blob/master/300_rule_extraction_xgbPlots.Rmd
• 50. Extract rules from an ensemble of trees. Rule Extraction: {inTrees} https://arxiv.org/abs/1408.5456 • Using inTrees
• 51. Enumerate rules from the ensemble of trees. Rule Extraction: {inTrees}
• 52. Build a simplified tree ensemble learner (STEL). Rule Extraction: {inTrees} All of the sample code is at: https://github.com/katokohaku/EDAxgboost/blob/master/310_rule_extraction_inTrees.md (a sketch of the pipeline follows)
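A hedged sketch of the {inTrees} pipeline; XGB2List() (available in inTrees >= 1.2) converts the booster so that the helpers written for tree ensembles can consume it:

```r
library(inTrees)

treeList   <- XGB2List(bst, as.matrix(X))    # xgboost booster -> tree list
ruleExec   <- extractRules(treeList, X)      # enumerate candidate rules
ruleMetric <- getRuleMetric(ruleExec, X, y)  # frequency, error, length per rule
ruleMetric <- pruneRule(ruleMetric, X, y)    # drop redundant conditions

# Simplified tree ensemble learner (STEL), then human-readable output
learner <- buildLearner(ruleMetric, X, y)
presentRules(learner, colnames(X))
```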
• 53. 5-1. Profiling based on feature contributions 1. Explaining individual observations (prediction breakdown) 1. Shapley value: predict(..., predcontrib = TRUE, approxcontrib = FALSE) 2. Structure based: predict(..., predcontrib = TRUE, approxcontrib = TRUE) 3. Dimensionality reduction of observations based on their predictions 4. Grouping by clustering 5. Visualizing the observations within a group URL https://github.com/katokohaku/EDAxgboost/blob/master/400_breakdown_individual-explanation_and_clustering.Rmd
• 54. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • xgb.model.dt.tree() • inTrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (approxcontrib) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (approxcontrib) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion (off-topic)  Feature Tweaking
  • 55. Shapley value A method for assigning payouts to players depending on their contribution to the total payout. Players cooperate in a coalition and receive a certain profit from this cooperation. The “game” • is the prediction task for a single instance of the dataset. The “gain” • is the actual prediction for this instance minus the average prediction for all instances. The “players” • are the feature values of the instance that collaborate to receive the gain (= predict a certain value). • https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf • https://christophm.github.io/interpretable-ml-book/shapley.html Feature contribution based on cooperative game theory
• 56. Shapley value The Shapley value is the average of all the marginal contributions to all possible coalitions. • One solution for keeping the computation time manageable is to compute contributions for only a few samples of the possible coalitions. • https://papers.nips.cc/paper/7062-a-unified-approach-to-interpreting-model-predictions.pdf • https://christophm.github.io/interpretable-ml-book/shapley.html Feature contribution based on cooperative game theory
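In xgboost both variants are a single predict() call (a sketch, reusing the hypothetical `bst` and `X`):

```r
# Exact (tree SHAP) contributions: an n x (p + 1) matrix whose last column is
# BIAS; each row sums to that observation's margin prediction
contrib <- predict(bst, as.matrix(X), predcontrib = TRUE)
head(contrib)

# Structure-based (Saabas-style) approximation instead of exact Shapley values
approx <- predict(bst, as.matrix(X), predcontrib = TRUE, approxcontrib = TRUE)
```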
• 58. Breakdown of an individual explanation path. Feature contribution based on tree structure. Based on the xgboost model structure: 1. For each node, calculate the weight it would output if it were not split further 2. Distribute the weight differences to each node 3. Accumulate the weights along the path passed by each observation, for each booster and for each feature (node)
• 59. Feature contribution based on tree structure: obtaining the prediction path
  • 60. Feature contribution based on tree structure
• 61. Individual explanation paths. Enumerate feature contributions based on Shapley values / tree structure. Each row explains one observation (prediction breakdown).
• 62. Explain a single observation. Individual explanation: each row explains one observation (prediction breakdown; see the sketch below).
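A sketch of reading one row of the contribution matrix as a prediction breakdown (the observation index is arbitrary):

```r
i <- 1                         # observation of interest (hypothetical index)
row_i <- contrib[i, ]

# Largest absolute contributors first; BIAS is the average margin prediction
drivers <- row_i[colnames(contrib) != "BIAS"]
drivers[order(abs(drivers), decreasing = TRUE)]
```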
• 63. 5-2. Profiling based on feature contributions 1. Explaining individual observations (prediction breakdown) 1. Shapley value: predict(..., predcontrib = TRUE, approxcontrib = FALSE) 2. Structure based: predict(..., predcontrib = TRUE, approxcontrib = TRUE) 3. Dimensionality reduction of observations based on their predictions 4. Grouping by clustering 5. Visualizing the observations within a group URL https://github.com/katokohaku/EDAxgboost/blob/master/400_breakdown_individual-explanation_and_clustering.Rmd
• 64. Identify clusters based on xgboost. Clustering the feature contributions of each observation using t-SNE • Dimension reduction using t-SNE
  • 66. Identify clusters based on xgboost Rtsne::Rtsne() → hclust() → cutree() → ggrepel::geom_label_repel() • Class labeling using hierarchical clustering (hclust)
  • 67. Rtsne::Rtsne() → hclust() → cutree() → ggrepel::geom_label_repel()
• 68. Rtsne::Rtsne() → hclust() → cutree() → ggrepel::geom_label_repel() Scatter plot with group labels (see the sketch below)
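A hedged sketch of that pipeline applied to the contribution matrix (perplexity and the number of clusters are illustrative choices):

```r
library(Rtsne)
library(ggplot2)
library(ggrepel)

# 2-D embedding of the per-observation contributions, then hierarchical
# clustering on the embedded coordinates
tsne <- Rtsne(contrib, perplexity = 30, check_duplicates = FALSE)
hc   <- hclust(dist(tsne$Y), method = "ward.D2")
cl   <- cutree(hc, k = 6)

df <- data.frame(x = tsne$Y[, 1], y = tsne$Y[, 2], cluster = factor(cl))
centers <- aggregate(cbind(x, y) ~ cluster, data = df, FUN = mean)

# Scatter plot of observations, labeled at each cluster's center
ggplot(df, aes(x, y, colour = cluster)) +
  geom_point(alpha = 0.5) +
  geom_label_repel(data = centers, aes(label = cluster), show.legend = FALSE)
```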
  • 69. Similar observations in a cluster (1) Individual explanation URL
  • 70. Similar observations in a cluster (2) Individual explanation URL
• 72. 6. Sensitivity analysis based on feature contributions 1. Model output response to changes in a variable's value (sensitivity analysis), part 2 1. Shapley value: predict(..., predcontrib = TRUE, approxcontrib = FALSE) 2. Structure based: predict(..., predcontrib = TRUE, approxcontrib = TRUE) URL https://github.com/katokohaku/EDAxgboost/blob/master/410_breakdown_feature_response-interaction.Rmd
• 73. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • xgb.model.dt.tree() • inTrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (approxcontrib) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (approxcontrib) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion (off-topic)  Feature Tweaking
• 74. Individual explanation paths. Each column explains one feature's impact (variable response).
• 75. Individual Feature Impact (1) Sensitivity Analysis. Each column explains one feature's impact (variable response).
• 76. Individual Feature Impact (2-1) Sensitivity Analysis. Each column explains one feature's impact (variable response).
• 78. Individual Feature Impact (2-2) Sensitivity Analysis. Each column explains one feature's impact (variable response).
• 80. Contribution dependency plots. Sensitivity Analysis URL xgb.plot.shap() • displays the estimated contribution (Shapley value) of a feature to the model prediction for each individual case.
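A sketch of the call (top_n picks the features with the largest mean absolute contribution):

```r
# One panel per feature: SHAP contribution (y axis) against feature value (x)
xgb.plot.shap(data = as.matrix(X), model = bst, top_n = 4, n_col = 2)
```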
• 81. Feature Impact Summary. Sensitivity Analysis http://www.f1-predictor.com/model-interpretability-with-shap/ Similar to a SHAP summary plot, • but with the contribution breakdown taken from the prediction path (model structure).
• 84. 7. Interaction analysis based on contributions 1. Interactions between variables 1. Strength of 2-way interactions: predict(..., predinteraction = TRUE) URL https://github.com/katokohaku/EDAxgboost/blob/master/410_breakdown_feature_response-interaction.Rmd
• 85. EDA (or explanation) after modelling 1. Build XGBoost model 2. Feature importance • Structure based (Gain & Cover) • Permutation based 3. Variable response (1) • Partial Dependence Plot (PDP / ICE / ALE) 4. Rule Extraction • xgb.model.dt.tree() • inTrees 5. Individual explanation • Shapley value (predcontrib) • Structure based (approxcontrib) 6. Variable response (2) • Shapley value (predcontrib) • Structure based (approxcontrib) 7. Feature interaction • 2-way SHAP (predinteraction) URL EDA tools for XGBoost Suggestion (off-topic)  Feature Tweaking
• 86. Feature interaction: interactions of a single observation • A feature contribution can be decomposed into 2-way feature interactions.
• 87. 2-way feature interaction: a feature contribution broken down into per-feature contributions. Individual explanation: each row shows the breakdown of one contribution.
• 88. Feature interaction of a single observation • xgboost:::predict.xgb.Booster(..., predinteraction = TRUE)
• 89. Individual explanation: the feature-by-feature contribution breakdown for a single instance
• 90. Absolute mean over all interactions • A SHAP value can be decomposed into 2-way feature interactions: xgboost:::predict.xgb.Booster(..., predinteraction = TRUE) (see the sketch below)
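A sketch of the interaction array and one way to summarize it globally (same hypothetical objects as before):

```r
# 3-D array: observations x (features + BIAS) x (features + BIAS)
inter <- predict(bst, as.matrix(X), predinteraction = TRUE)
dim(inter)

# Global interaction strength: absolute mean over all observations
global <- apply(abs(inter), c(2, 3), mean)
heatmap(global, symm = TRUE)   # off-diagonal cells = 2-way interaction strength
```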
• 92. xgboost Original Paper • https://www.kdd.org/kdd2016/subtopic/view/xgboost-a-scalable-tree-boosting-system Tasks, Metrics & other Parameters • https://xgboost.readthedocs.io/en/latest/ For R • http://dmlc.ml/rstats/2016/03/10/xgboost.html • https://xgboost.readthedocs.io/en/latest/R-package/xgboostPresentation.html • https://xgboost.readthedocs.io/en/latest/R-package/discoverYourData.html Explanatory blog posts & slides (in Japanese) • http://kefism.hatenablog.com/entry/2017/06/11/182959 • https://speakerdeck.com/hoxomaxwell/dive-into-xgboost References
• 93. Data & Model explanation Generic interpretability/explainability • Interpretable Machine Learning (IML) book • https://christophm.github.io/interpretable-ml-book/ Exploratory Data Analysis (EDA) • What is EDA? • https://www.itl.nist.gov/div898/handbook/eda/section1/eda11.htm • DALEX • Descriptive mAchine Learning EXplanations • https://pbiecek.github.io/DALEX/ • DrWhy • the collection of tools for Explainable AI (XAI) • https://github.com/ModelOriented/DrWhy References

Editor's Notes

(Speaker note for the two Shapley-value slides) Think of the process that yields a prediction as a cooperative game: the predicted value is the "payout", and each variable is a "player". Each variable's contribution is determined by distributing the "payout" fairly among the features: cooperating = the original predicted value; not cooperating = the predicted value when the variable is shuffled. The difference between the two is evaluated over all possible coalitions. Note that this is not the difference between the original model's prediction and the prediction of a model retrained with the feature removed.