
# Introduction to Modeling


This slide deck presents an introduction to statistical modeling by Don McCormack of JMP. Don presents at Building Better Models seminars throughout the world. Upcoming complimentary US seminars are listed here: http://jmp.com/about/events/seminars/

### Introduction to Modeling

### What is a Model?

Copyright © 2010, SAS Institute Inc. All rights reserved.

- An empirical representation that relates a set of inputs (predictors, X) to one or more outcomes (responses, Y).
- Separates the response variation into signal and noise: Y = f(X) + E, where
  - Y is one or more continuous or categorical response outcomes,
  - X is one or more continuous or categorical predictors,
  - f(X) describes the predictable variation in Y (signal),
  - E describes the non-predictable variation in Y (noise).
- The mathematical form of f(X) can be based on domain knowledge or mathematical convenience.
- "Remember that all models are wrong; the practical question is how wrong do they have to be to not be useful." – George Box
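The signal-plus-noise decomposition Y = f(X) + E can be made concrete in a few lines of Python. This is a minimal sketch: the linear form of f and its coefficients are invented for illustration, not taken from the deck.

```python
import random

random.seed(1)

# Hypothetical signal: f(x) = 2 + 0.5*x (coefficients invented for illustration)
def f(x):
    return 2.0 + 0.5 * x

# Responses are signal plus Gaussian noise E with standard deviation 0.3
xs = [x / 10 for x in range(100)]
ys = [f(x) + random.gauss(0, 0.3) for x in xs]

# A model's job is to recover f(x); the residual y - f(x) estimates E
residuals = [y - f(x) for x, y in zip(xs, ys)]
mean_resid = sum(residuals) / len(residuals)
print(abs(mean_resid) < 0.2)  # noise averages out near zero
```

Because E carries no predictable pattern, its average over many observations is close to zero; all the systematic variation lives in f(X).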
### What Makes a Good Model?

- Accurate and precise estimation of predictor effects.
  - Focus is on the right-hand side of the above equation.
  - Interest is in the influence the predictors have on the response(s).
  - Design of experiments is an important tool for achieving this goal: good designs lead to good models.
- Accurate and precise prediction.
  - Focus is on the left-hand side of the above equation.
  - Interest is in best predicting outcomes from a set of inputs, often from historical or in situ data.
  - Predictive modeling (data mining) methodologies are important tools for achieving this goal.
- These two goals typically occur at different points in the data discovery process.
### Modeling in Action

[Diagram: the real world ("what is really happening") feeds historical/in situ data collection and experimental data collection, which inform the model Y = f(X) + E ("what we think is happening"). Adapted from Box, Hunter, & Hunter.]
### Introduction to Predictive Models

- A type of statistical model where the focus is on predicting Y independent of the form used for f(X).
- There is less concern about the form of the model; parameter estimation isn't important. The focus is on how well the model predicts.
- Very flexible models are often used to allow for a greater range of possibilities.
- Variable selection/reduction is typically important.
- http://en.wikipedia.org/wiki/Predictive_modelling
### Introduction to Predictive Models

- Two competing methods; results are for a single sample.
  - Regression misclassification rate: 32.5%
  - Nearest neighbor misclassification rate: 0%
- Example from Hastie, Tibshirani, & Friedman, The Elements of Statistical Learning.
### Introduction to Predictive Models

- f(X): a majority of techniques are captured by two general approaches:
  - Global function of the data
    - More stable, less flexible
    - Captures smooth relationships between continuous inputs and outputs
    - Examples: Regression, Generalized Regression, Neural Networks, PLS, Discriminant Analysis
  - Local function of the data
    - More flexible, less stable
    - Captures local relationships in the data (e.g., discrete shifts and discontinuities)
    - Examples: Nearest Neighbors, Bootstrap Forest, Boosted Trees
### Introduction to Predictive Models

- Second sample results: nearest neighbor appears to have overfit sample 1.
  - Regression misclassification rate: 40%
  - Nearest neighbor misclassification rate: 41%
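The overfitting pattern in these two slides can be reproduced with a tiny 1-nearest-neighbor classifier. This sketch uses invented one-dimensional data with heavily overlapping classes; it is not the Hastie–Tibshirani–Friedman example, just the same phenomenon in miniature.

```python
import random

random.seed(0)

def make_sample(n):
    # Two overlapping classes in one dimension (class means 0 and 1, sd 1);
    # data are invented for illustration
    out = []
    for _ in range(n):
        label = random.randint(0, 1)
        out.append((random.gauss(float(label), 1.0), label))
    return out

train = make_sample(100)
test = make_sample(100)

def nn_predict(x):
    # 1-nearest neighbor: copy the label of the closest training point
    return min(train, key=lambda p: abs(p[0] - x))[1]

def error_rate(sample):
    wrong = sum(1 for x, y in sample if nn_predict(x) != y)
    return wrong / len(sample)

print(error_rate(train))          # 0.0 -- the model memorizes its own sample
print(error_rate(test) > 0.1)     # much worse on a fresh sample
```

A flexible local method can drive training error to zero by memorizing the sample, but only a fresh sample reveals how much of that "fit" was noise.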
### Preventing Model Overfitting

- If the model is flexible, what guards against overfitting (i.e., producing predictions that are too optimistic)?
- Put another way, how do we avoid modeling the noise variability as part of f(X)?
- Solution: hold back part of the data and use it to check against overfitting. Break the data into two or three sets:
  - The training set is used to build the model.
  - The validation set is used to select the model by determining when it is becoming too complex.
  - The test set is often used to evaluate how well the model predicts, independent of the training and validation sets.
- Common methods include k-fold cross-validation and random holdback.
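A random holdback split of the kind described here takes only a few lines. The 60/20/20 proportions are a common convention, not something the deck prescribes.

```python
import random

random.seed(42)

rows = list(range(1000))   # stand-in for the data table's row indices
random.shuffle(rows)

# 60/20/20 random holdback (the proportions are an assumed, common choice)
n_train = int(0.6 * len(rows))
n_valid = int(0.2 * len(rows))

train = rows[:n_train]                   # build the model here
valid = rows[n_train:n_train + n_valid]  # choose model complexity here
test  = rows[n_train + n_valid:]         # honest final error estimate

print(len(train), len(valid), len(test))  # → 600 200 200
```

k-fold cross-validation instead rotates which fold plays the validation role, so every row is held out exactly once.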
### Handling Missing Predictor Values

- Case-wise deletion: easy, but reduces the sample.
- Simple imputation: replace the value with the variable's mean or median.
- Multivariate imputation: use the correlation between multiple variables to determine the replacement value.
- Model-based imputation: model with the non-missing values; replace missing values based on similar cases.
- Model-free imputation: e.g., distance based, hot deck, etc.
- Methods insensitive to missing values.
### How Does JMP Account for Missing Data?

- Categorical predictors
  - Creates a separate level for missing data and treats it as such.
- Continuous predictors
  - Informative Missing / Missing Value Coding:
    - Regression and Neural Network: the column mean is substituted for the missing value, and an indicator column is added to the predictors (1 where data is missing, 0 otherwise). This can significantly improve the fit when data is missing not at random.
    - Partition: the missing observations are considered on both sides of the split and grouped with the side providing the better fit.
  - Save Tolerant Prediction Formula (Partition):
    - The predictor is randomly assigned to one of the splits.
    - Only available if Informative Missing is not selected.
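The mean-plus-indicator coding that JMP applies for regression and neural networks can be done by hand. This sketch uses a made-up column of six values; JMP performs the same transformation internally.

```python
# Hand-rolled version of mean substitution plus a missingness indicator;
# the column values are invented for illustration
x = [4.0, None, 7.0, 5.0, None, 6.0]

observed = [v for v in x if v is not None]
col_mean = sum(observed) / len(observed)   # (4 + 7 + 5 + 6) / 4 = 5.5

x_coded   = [col_mean if v is None else v for v in x]  # mean-substituted column
x_missing = [1 if v is None else 0 for v in x]         # indicator column

print(x_coded)    # → [4.0, 5.5, 7.0, 5.0, 5.5, 6.0]
print(x_missing)  # → [0, 1, 0, 0, 1, 0]
```

The indicator column is what makes the coding "informative": if missingness itself predicts the response (missing not at random), the model can assign it a coefficient.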
### How Does JMP Handle Missing Data?

- Multivariate imputation: available in the Multivariate platform and in Column Utilities: Explore Missing Values. Based on the correlation structure of the continuous predictors (the expectation conditional on the non-missing data).
- Model-based imputation: available in Fit Model and PLS (JMP Pro). Imputes missing predictors based on a partial least squares model.
### Regression

- General linear regression typically uses simple polynomial functions for f(X).
- For continuous Y:

  $$f(x) = \beta_0 + \sum_{i=1}^{p}\beta_i x_i + \sum_{i=1}^{p-1}\sum_{j=i+1}^{p}\beta_{i,j}\, x_i x_j + \sum_{i=1}^{p}\beta_{ii}\, x_i^2$$

- For categorical Y, the logistic function of f(X) is typically used:

  $$\frac{1}{1 + e^{-f(x)}}$$
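The second-order polynomial and its logistic transform are easy to evaluate directly. The coefficients in this sketch are invented; the point is only how the pieces (main effects, interaction, quadratics, logistic squashing) fit together.

```python
import math

# Second-order polynomial f(x) in two predictors; coefficients are invented
def f(x1, x2):
    b0, b1, b2, b12, b11, b22 = 1.0, 0.5, -0.3, 0.2, 0.1, -0.05
    return (b0 + b1 * x1 + b2 * x2            # intercept and main effects
            + b12 * x1 * x2                   # two-way interaction
            + b11 * x1 ** 2 + b22 * x2 ** 2)  # quadratic terms

def logistic(z):
    # Squashes f(x) into (0, 1) so it can serve as a class probability
    return 1.0 / (1.0 + math.exp(-z))

p = logistic(f(1.0, 2.0))
print(0.0 < p < 1.0)  # → True
```

For a continuous response f(x) is used directly; for a categorical response the logistic transform turns the same linear predictor into a probability.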
### Model Selection

- Stepwise regression:
  - Start with a base model: intercept only, or all terms.
  - If intercept only, find the term not yet included that explains the most variation and enter it into the model.
  - If all terms, remove the term that explains the least.
  - Continue until a (typically) p-value-based stopping criterion is met.
- A variation of stepwise regression is all-possible-subsets (best subset) regression:
  - Examine all 2-, 3-, 4-, …, term models and pick the best of each size. Sometimes statistical heredity is imposed to make the problem more tractable.
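A single forward step of stepwise regression can be sketched directly: fit each candidate term on its own and enter the one with the smallest residual sum of squares. The synthetic data (y driven by x0 and x2 only) and the no-intercept one-term fit are simplifications invented for this illustration.

```python
import random

random.seed(3)

# Synthetic data: y depends strongly on x0, moderately on x2, and not at
# all on x1 or x3 (an invented setup to make the first step visible)
n = 200
X = [[random.gauss(0, 1) for _ in range(4)] for _ in range(n)]
y = [3 * row[0] - 2 * row[2] + random.gauss(0, 0.5) for row in X]

def rss(j):
    # Residual sum of squares from a one-term, no-intercept fit of y on x_j
    xs = [row[j] for row in X]
    b = sum(a * c for a, c in zip(xs, y)) / sum(a * a for a in xs)
    return sum((yi - b * xi) ** 2 for xi, yi in zip(xs, y))

# First forward step: enter the term that explains the most variation
entered = min(range(4), key=rss)
print(entered)  # → 0 (x0 has the largest effect)
```

A full stepwise procedure repeats this search on the residual variation, stopping when the best remaining candidate fails the p-value criterion.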
### Generalized Regression

- For many engineers and scientists, modeling begins and ends with ordinary least squares (OLS) and stepwise or best-subsets regression.
- Unfortunately, when data are highly correlated, OLS may lead to estimates that are less stable and have higher prediction variance.
- In addition, there are common situations where OLS assumptions are violated:
  - Responses and/or errors are not normally distributed.
  - The response is not a linear combination of the predictors.
  - The error variance is not constant across the prediction range.
### Penalized Regression

- Generalized regression (a.k.a. penalized regression or shrinkage methods) can circumvent these problems.
- For correlated data, penalized methods produce more stable estimates by deliberately biasing the estimates in an effort to reduce prediction variance.
- Provides a continuous approach to variable selection that is more stable than stepwise regression.
- Can be used when there are more variables than observations, a case that cannot be estimated using OLS.
- Three common techniques:
  - Ridge regression: stabilizes the estimates, but can't be used for variable selection.
  - LASSO: shrinks some coefficients exactly to zero, so it also performs variable selection.
  - Elastic Net: a weighted combination of the ridge and LASSO penalties.
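The shrinkage idea is easiest to see in the one-predictor case, where ridge regression has the closed form b(λ) = Σxy / (Σx² + λ). The simulated data and the choice λ = 50 are invented for this sketch.

```python
import random

random.seed(7)

# One-predictor ridge regression: b(λ) = Σxy / (Σx² + λ);
# the data below are simulated for illustration (true slope 2)
n = 100
x = [random.gauss(0, 1) for _ in range(n)]
y = [2.0 * xi + random.gauss(0, 1) for xi in x]

sxy = sum(a * b for a, b in zip(x, y))
sxx = sum(a * a for a in x)

b_ols   = sxy / sxx           # λ = 0: ordinary least squares
b_ridge = sxy / (sxx + 50.0)  # λ = 50: estimate deliberately biased toward 0

print(abs(b_ridge) < abs(b_ols))  # → True: the penalty shrinks the estimate
```

The penalty trades a little bias for lower variance; with correlated predictors that trade is often what keeps the estimates stable.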
### Quantile Regression

- Makes no distributional or variance-based assumptions.
- Allows the response/predictor relationship to vary across the response distribution.
- Allows modeling of not just the median, but any quantile.
- Is more robust than OLS, lessening the influence of high-leverage points.
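The robustness comes from the loss function: quantile regression minimizes the "check" (pinball) loss rather than squared error. This sketch minimizes that loss over a grid for a tiny invented response with one outlier.

```python
# Pinball (check) loss for quantile τ: the objective quantile regression
# minimizes. Data values below are invented for illustration.
def pinball(residual, tau):
    return tau * residual if residual >= 0 else (tau - 1) * residual

data = [1.0, 2.0, 3.0, 10.0]   # note the high outlying value

def total_loss(q, tau):
    return sum(pinball(y - q, tau) for y in data)

# Grid-minimizing with τ = 0.5 lands in the median region, not at the
# mean (4.0), so the outlier has little pull
grid = [i / 10 for i in range(0, 110)]
q50 = min(grid, key=lambda q: total_loss(q, 0.5))
print(2.0 <= q50 <= 3.0)  # → True
```

Setting τ to 0.9 instead would target the 90th percentile, which is how the same machinery models any quantile of the response.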
### Decision Trees

- Also known as recursive partitioning, CHAID, or CART.
- Models are a series of nested IF() statements, where each condition in an IF() statement can be viewed as a separate branch in a tree.
- Branches are chosen so that the difference in the average response (or average response rate) between paired branches is maximized.
- Tree models are "grown" by adding more branches to the tree so that more of the variability in the response is explained by the model.
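A fitted tree really is just nested IFs. The Age < 28 split comes from the deck's own example; the CARDS split point and the leaf rates here are invented to complete the sketch.

```python
# A two-level tree written out as nested IFs. The Age < 28 split is from
# the deck's example; the CARDS condition and leaf rates are made up.
def predict_rate(age, cards):
    if age < 28:
        if cards == 0:
            return 0.20   # leaf: young, no cards (hypothetical rate)
        return 0.08       # leaf: young, has cards (hypothetical rate)
    return 0.01           # leaf: 28 or older (hypothetical rate)

print(predict_rate(25, 0))  # → 0.2
print(predict_rate(40, 2))  # → 0.01
```

Growing the tree means replacing a leaf's constant rate with another IF, chosen so the two new branches differ as much as possible in their response rates.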
### Decision Tree Step by Step

- Goal is to predict those with a code of "1"; the overall rate is 3.23%.
- Candidate X's:
  - Search through each of them.
  - Examine splits at each unique level of each X.
  - Find the split that maximizes "LogWorth", i.e., the split that maximizes the difference in proportions of the target variable.
### Decision Tree Step by Step

- 1st split: optimal split at Age < 28. Notice the difference in the rates in each branch of the tree.
- Repeat the split search across both partitions of the data, finding the optimal split across both branches.
### Decision Tree Step by Step

- 2nd split: on CARDS (no CCs vs. some CCs).
- Notice the variation in the proportion of "1" in each branch.
### Decision Tree Step by Step

- 3rd split: on TEL (number of handsets owned).
- Notice the variation in the proportion of "1" in each branch.
### Bootstrap Forest

- For each tree, take a random sample (with replacement) of the data table and build out a decision tree on that sample.
- Make many trees and average their predictions (bagging).
- This is also known as the random forest technique.
- Works very well on wide tables.
- Can be used for both predictive modeling and variable selection.
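The bagging mechanics can be shown without a real tree learner. In this sketch each "tree" is just the mean of a bootstrap sample, a deliberately tiny stand-in chosen so the resample-then-average structure stays visible; the data are simulated.

```python
import random

random.seed(5)

# Simulated response column; each "tree" below is just the mean of a
# bootstrap sample -- a stand-in for a real decision tree
y = [random.gauss(10, 2) for _ in range(50)]

def tree_on_bootstrap(data):
    sample = [random.choice(data) for _ in data]  # sample with replacement
    return sum(sample) / len(sample)              # the "tree's" prediction

trees = [tree_on_bootstrap(y) for _ in range(100)]
bagged = sum(trees) / len(trees)   # bagging: average the trees' predictions

sample_mean = sum(y) / len(y)
print(abs(bagged - sample_mean) < 0.5)  # → True: the bagged average is stable
```

Individual bootstrap "trees" vary, but their average is much more stable, which is exactly why averaging many unstable trees works so well.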
### See the Trees in the Forest

[Figure: individual trees fit on the 1st, 2nd, 3rd, …, 100th bootstrap samples.]
### Boosted Tree

- Beginning with the first tree (layer), build a small, simple tree.
- From the residuals of the first tree, build another small, simple tree.
- Continue until a specified number of layers has been fit, or until adding successive layers no longer improves the fit of the model.
- The final model is the weighted accumulation of all of the model layers.
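The fit-to-the-residuals loop can be sketched with the simplest possible weak learner, the residual mean, standing in for a small tree; the response values and learning rate are invented.

```python
# Boosting sketch: each layer is a weak model fit to the residuals of the
# accumulated model. The weak model here is just the residual mean (a
# stand-in for a small tree); eps is the learning rate. Data are invented.
y = [3.0, 5.0, 4.0, 8.0]
eps = 0.3
pred = [0.0] * len(y)   # the accumulated model starts at zero

for layer in range(50):
    resid = [yi - pi for yi, pi in zip(y, pred)]
    weak = sum(resid) / len(resid)            # "tree" fit to the residuals
    pred = [pi + eps * weak for pi in pred]   # add the scaled layer

print([round(p, 2) for p in pred])  # → [5.0, 5.0, 5.0, 5.0]
```

Because this stand-in learner is a constant, every row converges to the overall mean of y; a real boosted tree fits structure in the residuals at each layer, so the accumulated model converges toward f(X) rather than a single value.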
### Boosted Tree Illustrated

- Models: $M_1, M_2, M_3, \dots, M_{49}$
- Final model:

  $$M = M_1 + \varepsilon \cdot M_2 + \varepsilon \cdot M_3 + \dots + \varepsilon \cdot M_{49}$$

  where $\varepsilon$ is the learning rate.