Tree methods
Transcript

  • 1.  
  • 2. Ronny Kohavi, ICML 1998
  • 3. Ronny Kohavi, ICML 1998
  • 4. Ronny Kohavi, ICML 1998
  • 5. What is Data mining? [Flow diagram: Research Question → Find Data (internal databases, data warehouses, Internet, online databases) → Data Collection → Data Processing → Extract Information → Data Analysis → Answer Research Question]
  • 6. Outline
    • Data Mining
    • Methodology for CART
    • Data mining trees, tree sketches
    • Applications to clinical data.
      • Categorical Response - OAB data
      • Continuous Response - Geodon RCT data
    • Robustness issues
  • 7.
    • Processing “capacity” doubles every couple of years (exponential growth).
    • Hard disk storage “capacity” doubles every 18 months (it used to be every 36 months).
    • Bottlenecks are no longer about processing speed.
    • Processing capacity is not growing as fast as data acquisition.
    Moore’s law
  • 8.
    • 50’s-80’s: EDA, Data Visualization.
    • 1990, Ripley: “that is not statistics, that’s data mining.”
    • 90’s-06: Data Mining: Large Datasets, EDA, DV, Machine Learning, Vision,…
    • Example: Biopharmaceutical Area. Data repositories:
    • - Data from many sources: clinical, monitoring, marketing.
    • - Data is largely unexplored.
    • Data mining objective: “To extract valuable information.”
      • “To identify nuggets, clusters of observations in these data that contain potentially valuable information.”
    • Example: Biopharmaceutical Data:
    • - Extract new information from existing databases.
    • - Answer questions of clinicians, marketing.
    • - Help design new studies.
    Mining Data
  • 9. Data Mining Software and Recursive Partition
    SOFTWARE:
    • S-PLUS / Insightful Miner: Tree, CART, C4.5, BG, RF, BT
    • R: rpart, CART, BG, RF, BT
    • SAS / Enterprise Miner: CART, C4.5, CHAID, tree browser
    • SPSS: CART, CHAID
    • Clementine: CART, C4.5, Rule Finder
    • HelixTree: CART, FIRM, BG, RF, BT
  • 10.
    • I. Dependent variable is categorical
    •  
    • Classification Trees, Decision Trees
    • Example: A doctor might have a rule for choosing which drug to prescribe to high cholesterol patients.
    Recursive Partition (CART)
    • II. Dependent variable is numerical
    •  
    • Regression Tree
    [Diagrams: a classification tree for drug choice (splits on High Blood Pressure?, Age>60, Age<30, with Y/N branches leading to Drug A or Drug B) and a regression tree for Dose = Function(BMI, AGE) (splits on AGE<65, BMI<24, AGE<18, with leaf doses 20, 40, 80, 80).]
  • 11. Classic Example of CART: Pima Indians Diabetes
    • 768 Pima Indian females, 21+ years old; 268 tested positive for diabetes
    • 8 predictors: PRG, PLASMA, BP, THICK, INSULIN, BODY, PEDIGREE, AGE
    • OBJECTIVE: PREDICT DIABETES
    [Tree diagram: root split PLASMA ≥ 127, then AGE ≥ 28 and BODY ≥ 29.9, with Y/N branches leading to a DIABETES prediction. A hedged R sketch follows below.]
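A hedged sketch of this classic example: the slides do not say which copy of the data was used, so the code below assumes the PimaIndiansDiabetes dataset shipped with the mlbench package, whose column names (glucose, age, mass, …) correspond to the slide's PLASMA, AGE, BODY.

    ## Hedged sketch, assuming the mlbench copy of the Pima data (not from the talk)
    library(mlbench)
    library(rpart)

    data(PimaIndiansDiabetes)
    pima <- PimaIndiansDiabetes            # 768 rows; response 'diabetes' (neg/pos)

    # Classification tree predicting diabetes from the 8 predictors
    fit <- rpart(diabetes ~ ., data = pima, method = "class")
    print(fit)                             # splits on glucose (PLASMA), age, mass (BODY), ...
    plot(fit, uniform = TRUE); text(fit)   # same plot()/text() idiom used later for the hospital data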
  • 12. Classic Example of CART: Pima Indians Diabetes
    • CART Algorithm:
    • Grow the tree.
    • Stop when node sizes are small.
    • Prune the tree back (a hedged rpart sketch follows below).
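A minimal sketch of the grow / stop / prune steps in rpart terms, reusing the assumed mlbench Pima data; the cp and minsplit values are illustrative, not taken from the talk.

    ## Hedged sketch of grow / stop-when-small / prune with rpart's controls
    library(rpart)
    library(mlbench)
    data(PimaIndiansDiabetes)
    pima <- PimaIndiansDiabetes

    # 1. Grow a deliberately large tree (small cp, small minsplit)
    big <- rpart(diabetes ~ ., data = pima, method = "class",
                 control = rpart.control(cp = 0.001, minsplit = 10))

    # 2. "Stop when node sizes are small" is governed by minsplit / minbucket above

    # 3. Prune back to the subtree with the smallest cross-validated error
    printcp(big)
    best_cp <- big$cptable[which.min(big$cptable[, "xerror"]), "CP"]
    pruned  <- prune(big, cp = best_cp)
    plot(pruned, uniform = TRUE); text(pruned)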
  • 13. CART criteria functions. For classification trees: node impurity, typically the Gini index i(t) = 1 − Σ_j p(j|t)² (or the entropy). For regression trees: the within-node sum of squares Σ_{i∈t} (y_i − ȳ_t)² (a small sketch follows below).
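Because the slide's formulas were embedded as images, here is a minimal sketch of the standard CART criteria (the Gini impurity that rpart uses by default for classification, and the within-node sum of squares for regression); these are the textbook definitions, not code from the talk.

    # Classification: Gini impurity of a node, i(t) = 1 - sum_j p(j|t)^2
    gini <- function(y) {
      p <- table(y) / length(y)
      1 - sum(p^2)
    }

    # Regression: within-node sum of squares, R(t) = sum_i (y_i - mean(y))^2
    node_ss <- function(y) sum((y - mean(y))^2)

    # A split is chosen to maximize the reduction in impurity (or sum of squares)
    # between the parent node and its weighted children.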
  • 14. DATA PREPROCESSING RECOMMENDATIONS FOR TREES
    a. Make sure that all the factors are declared as factors. Sometimes factor variables are read into R as numeric or as character variables. Suppose that a variable RACE in a SAS dataset is coded as 1, 2, 3, 4, representing 4 race groups. We need to be sure that it was not read as a numeric variable, so we first check the types of the variables. We may use the functions "class" and "is.factor" combined with "sapply" in the following way:
    sapply(w, is.factor)   or   sapply(w, class)
    Suppose that the variable "x" is numeric when it is supposed to be a factor. Then we convert it into a factor:
    w$x = factor(w$x)
    b. Recode factors: sometimes the codes assigned to factor levels are very long phrases, and when those codes are inserted into the tree the resulting graph can be very messy. We prefer short words to represent the codes. To recode the factor levels you may use the function "f.recode":
    > levels(w$Muscle)
    [1] ""                  "Mild Weakness"
    [3] "Moderate Weakness" "Normal"
    > musc = f.recode(w$Muscle, c("", "Mild", "Mod", "Norm"))
    > w$Musclenew = musc
  • 15. Example: Hospital data
    library(rpart)
    hospital = read.table("project2/hospital.txt", sep = ",")
    colnames(hospital) <- c("ZIP","HID","CITY","STATE","BEDS","RBEDS","OUTV","ADM","SIR",
                            "SALESY","SALES12","HIP95","KNEE95","TH","TRAUMA","REHAB",
                            "HIP96","KNEE96","FEMUR96")
    hosp = hospital[, -c(1:4, 10)]
    hosp$TH = factor(hosp$TH)
    hosp$TRAUMA = factor(hosp$TRAUMA)
    hosp$REHAB = factor(hosp$REHAB)

    u <- rpart(log(1 + SALES12) ~ ., data = hosp, control = rpart.control(cp = .01))
    plot(u); text(u)
    u <- rpart(log(1 + SALES12) ~ ., data = hosp, control = rpart.control(cp = .001))
    plot(u, uniform = TRUE); text(u)
  • 16. Regression Tree for log(1+Sales)
    HIP95 < 40.5 [Ave: 1.074, Effect: -0.76]
      HIP96 < 16.5 [Ave: 0.775, Effect: -0.298]
        RBEDS < 59 [Ave: 0.659, Effect: -0.117]
          HIP95 < 0.5 [Ave: 1.09, Effect: +0.431] -> 1.09
          HIP95 >= 0.5 [Ave: 0.551, Effect: -0.108]
            KNEE96 < 3.5 [Ave: 0.375, Effect: -0.175] -> 0.375
            KNEE96 >= 3.5 [Ave: 0.99, Effect: +0.439] -> 0.99
        RBEDS >= 59 [Ave: 1.948, Effect: +1.173] -> 1.948
      HIP96 >= 16.5 [Ave: 1.569, Effect: +0.495]
        FEMUR96 < 27.5 [Ave: 1.201, Effect: -0.368] -> 1.201
        FEMUR96 >= 27.5 [Ave: 1.784, Effect: +0.215] -> 1.784
    HIP95 >= 40.5 [Ave: 2.969, Effect: +1.136]
      KNEE95 < 77.5 [Ave: 2.493, Effect: -0.475]
        BEDS < 217.5 [Ave: 2.128, Effect: -0.365] -> 2.128
        BEDS >= 217.5 [Ave: 2.841, Effect: +0.348]
          OUTV < 53937.5 [Ave: 3.108, Effect: +0.267] -> 3.108
          OUTV >= 53937.5 [Ave: 2.438, Effect: -0.404] -> 2.438
      KNEE95 >= 77.5 [Ave: 3.625, Effect: +0.656]
        SIR < 9451 [Ave: 3.213, Effect: -0.412] -> 3.213
        SIR >= 9451 [Ave: 3.979, Effect: +0.354] -> 3.979
  • 17. Regression Tree
  • 18.
    • Classification tree:
    • > data(tissue)
    • > gr = rep(1:3, c(11, 11, 19))
    • > x <- f.pca(f.toarray(tissue))$scores[, 1:4]
    • > x = data.frame(x, gr = gr)
    • > library(rpart)
    • > tr = rpart(factor(gr) ~ ., data = x)
    • > tr
    • n = 41
    • node), split, n, loss, yval, (yprob)
    •       * denotes terminal node
    • 1) root 41 22 3 (0.26829268 0.26829268 0.46341463)
    •   2) PC3< -0.9359889 23 12 1 (0.47826087 0.47826087 0.04347826)
    •     4) PC2< -1.154355 12 1 1 (0.91666667 0.00000000 0.08333333) *
    •     5) PC2>=-1.154355 11 0 2 (0.00000000 1.00000000 0.00000000) *
    •   3) PC3>=-0.9359889 18 0 3 (0.00000000 0.00000000 1.00000000) *
    • > plot(tr)
    • > text(tr)
  • 19. Random forest Algorithm (A variant of bagging)
    • Select ntree, the number of trees to grow, and mtry, a number no larger than the number of variables.
    • For i = 1 to ntree:
    • Draw a bootstrap sample from the data. Call those not in the bootstrap sample the "out-of-bag" data.
    • Grow a "random" tree, where at each node the best split is chosen among mtry randomly selected variables. The tree is grown to maximum size and not pruned back.
    • Use the tree to predict out-of-bag data.
    • In the end, use the predictions on out-of-bag data to form majority votes.
    • Prediction of test data is done by majority votes from predictions from the ensemble of trees.
    • R package: randomForest, with a function of the same name (a hedged sketch follows below).
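A hedged sketch of the algorithm above using the randomForest package; the dataset (the mlbench Pima data again) and the ntree/mtry values are illustrative choices, not taken from the slides.

    ## Hedged sketch: random forest on the assumed Pima data
    library(randomForest)
    library(mlbench)
    data(PimaIndiansDiabetes)
    pima <- PimaIndiansDiabetes

    set.seed(1)
    rf <- randomForest(diabetes ~ ., data = pima,
                       ntree = 500,          # number of trees to grow
                       mtry  = 3,            # variables tried at each split (<= 8 here)
                       importance = TRUE)

    print(rf)                  # OOB error rate formed from the out-of-bag votes
    round(importance(rf), 2)   # variable importance, as in the hospital example later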
  • 20.
    • Input: data (x_i, y_i), i = 1,…,n; weights w_i = 1/n.
    • Fit a tree (or any other learning method): h_1(x_i).
    • Calculate the weighted misclassification error E_1.
    • If E_1 > 0.5, stop and abort the loop.
    • Set b_1 = E_1 / (1 − E_1).
    • For i = 1,…,n: if h_1(x_i) = y_i then w_i = w_i · b_1, otherwise w_i is unchanged.
    • Normalize the w_i’s to add up to 1.
    • Go back to the fitting step and repeat until there is no change in prediction error.
    • R package: bagboost, with functions bagboost and adaboost (a generic sketch of the weight-update loop follows below).
    Boosting (AdaBoost)
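A hedged sketch of the re-weighting loop above, using rpart stumps as the weak learner; the bagboost/adaboost functions named on the slide belong to the authors' own package, so this generic stand-in only mirrors the listed steps, and the log(1/b_m) vote weights at the end are the usual AdaBoost choice rather than something stated on the slide.

    ## Generic sketch of the AdaBoost weight updates (not the authors' bagboost code)
    library(rpart)

    ada_sketch <- function(x, y, rounds = 20) {
      n <- nrow(x); w <- rep(1 / n, n)              # w_i = 1/n
      dat <- data.frame(x, y = factor(y))
      fits <- list(); vote <- numeric(0)
      for (m in seq_len(rounds)) {
        h <- rpart(y ~ ., data = dat, weights = w,
                   control = rpart.control(maxdepth = 1))   # weak learner (stump)
        pred <- predict(h, dat, type = "class")
        err  <- sum(w * (pred != dat$y))            # weighted misclassification E_m
        if (err >= 0.5 || err == 0) break           # abort, as on the slide
        b <- err / (1 - err)                        # b_m = E_m / (1 - E_m)
        w <- ifelse(pred == dat$y, w * b, w)        # down-weight correctly classified cases
        w <- w / sum(w)                             # normalize to add up to 1
        fits[[m]] <- h; vote[m] <- log(1 / b)       # vote weight for the final ensemble
      }
      list(fits = fits, vote = vote)
    }

    # e.g. ada_sketch(pima[, -9], pima$diabetes)    # Pima data from the earlier sketches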
  • 21. Boosting and random forest examples (hospital data)
    library(randomForest)
    xlearn = f.toarray(hospital[, -c(1:4, 10:11)])   # drop ID columns and the sales responses
    ylearn = 1 * (hospital$SALES12 > 50)
    i = sample(nrow(xlearn), 1000, rep = F)
    xtest  = xlearn[i, ]
    xlearn = xlearn[-i, ]
    ytest  = ylearn[i]
    ylearn = ylearn[-i]
    ## BOOSTING EXAMPLE
    u = bagboost(xlearn[1:100, ], ylearn[1:100], xtest, presel = 0, mfinal = 20)
    summarize(u, ytest)
    ## RANDOM FOREST EXAMPLE
    u = randomForest(xlearn[1:100, ], ylearn[1:100], xtest, ytest)
    round(importance(u), 2)
  • 22. Competing methods
    Bump Hunting: find subsets that optimize some criterion; subsets are more “robust”, but not all interesting subsets are found.
    Recursive Partition: find the partition that best approximates the response; for moderate/large datasets the partition tree may be too big.
    Data Mining Trees
  • 23. Paradigm for data mining: selection of interesting subsets (Bump Hunting, Recursive Partition, Data Mining Trees). [Diagram: a two-variable data space (Var 1, Var 2) partitioned into High Response, Low Response, and Other Data regions.]
  • 24. Naive thought: for the j-th descriptor variable x_j, an “interesting” subset { a < x_ji < b } is one such that p = Prob[ Z = 1 | a < x_ji < b ] is much larger than π = Prob[ Z = 1 ]. T = (p − π)/σ_p measures how interesting a subset is. Add a penalty term to prevent selection of subsets that are too small or too large (a small sketch of this score follows below). Data Mining Trees: ARF (Active Region Finder).
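A small sketch of this score for one candidate interval (a, b) of a predictor x with a binary response z; the slide does not spell out the denominator or the penalty term, so σ_p is assumed here to be the binomial standard error sqrt(π(1−π)/n_ab) and the size penalty is omitted.

    # Interestingness score T = (p - pi) / sigma_p for the subset {a < x < b}
    interest_T <- function(x, z, a, b) {
      inside <- x > a & x < b
      n_ab   <- sum(inside)
      if (n_ab == 0) return(NA_real_)
      p   <- mean(z[inside])                        # P[Z = 1 | a < x < b]
      pi0 <- mean(z)                                # overall P[Z = 1]
      (p - pi0) / sqrt(pi0 * (1 - pi0) / n_ab)      # penalty for tiny/huge subsets omitted
    }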
  • 25. ARF algorithm diagram
    - Create NodeList with a single node = FullData; set NodeType = FollowUp; set CurrentNode = 1.
    - Split CurrentNode into three buckets:
      • Center bucket: test whether the split is significant (T or F); if BucketSize > Min then NodeType = FollowUp, else NodeType = Terminal; add the bucket to NodeList.
      • Left bucket: if BucketSize > Min then NodeType = FollowUp, else NodeType = Terminal; if BucketSize > 0, add the node to NodeList.
      • Right bucket: same as the left bucket.
    - If NodeType = Terminal, set CurrentNode = CurrentNode + 1; if CurrentNode > LastNode, EXIT and print the report, otherwise repeat.
  • 26. Comparing CART & ARF. ARF captures a subset with small variance (but not the rest); CART needs both subsets to have small variance relative to the mean difference. ARF also captures interior subsets.
  • 27. Two examples: subsets that are hidden in the middle; point density is important.
  • 28.
    • Methodology Objective:
    • The Data Space is divided between
    • - High response subsets
    • - Low Response subsets
    • - Other
    • Categorical responses: subsets that have a high response in one of the categories, measured by T = (p − π)/σ_p.
    • Continuous responses: a high mean response, measured by an analogous standardized difference between the subset mean and the overall mean.
    • Statistical significance should be based on the entire tree building process.
    • Categorical Predictors
    • Data Visualization
    • PDF report.
    Methodology [Diagram: a two-variable data space (Var 1, Var 2) divided into High Response, Low Response, and Other Data regions.]
  • 29. Report
    Simple Tree or Tree Sketch: only statistically significant nodes.
    Full Tree: all nodes.
    Table of Numerical Outputs: detailed statistics for each node.
    List of Interesting Subsets: list of significant subsets.
    Conditional Scatter Plot (optional): data visualization.
  • 30. How about outliers?
    • For Regression trees
    • Popular belief: trees are not affected by outliers (they are robust).
    • Outlier detection:
    • Run the data mining tree allowing for small buckets.
    • For observation X_i in terminal node j, calculate the score Z_i = (y_i − ȳ_j)/s_j, i.e. the number of standard deviations the response lies from its node mean.
    • If Z_i > 3.5, then X_i is flagged as an outlier (a hedged sketch follows below).
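A hedged sketch of this outlier score, using a plain rpart regression tree as a stand-in for the data mining tree; fit$where gives each observation's terminal node, the 3.5 cut-off is the one quoted above, and the helper name flag_outliers is ours, not from the ARF package.

    ## Sketch: node-wise z-scores from an rpart regression tree
    library(rpart)

    flag_outliers <- function(formula, data, cutoff = 3.5) {
      fit  <- rpart(formula, data = data,
                    control = rpart.control(cp = 0.001, minbucket = 5))  # allow small buckets
      y    <- fit$y                        # response values kept by rpart
      node <- fit$where                    # terminal node of each observation
      z    <- abs(y - ave(y, node)) / ave(y, node, FUN = sd)  # std devs from the node mean
      which(z > cutoff)                    # indices flagged as outliers
    }

    # e.g. flag_outliers(log(1 + SALES12) ~ ., data = hosp)   # hospital data from slide 15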
  • 31. Tree with Outliers
  • 32. After Outlier removal
  • 33. Robustness issues. ISSUE: in regulatory environments outliers are rarely omitted. Our method is easily adaptable to robust splits: compute a robust version of the criterion by replacing the mean and standard deviation with suitable estimators of location and scale (a hedged sketch follows below).
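A minimal sketch of one way to robustify the continuous-response criterion, as the slide suggests, by swapping the mean for the median and the standard deviation for the MAD; the exact form of the original criterion is not shown, so the sqrt(n) scaling in the denominator is an assumption.

    # Robust analogue of the subset score: location = median, scale = MAD (assumed form)
    robust_T <- function(y_subset, y_all) {
      (median(y_subset) - median(y_all)) / (mad(y_all) / sqrt(length(y_subset)))
    }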
  • 34. Further work
    Binary/categorical response: how do we think about robustness of trees? One outlier might not make any difference; 5%, 10% or more outliers could make a difference.
    Alvir, Cabrera, Caridi and Nguyen (2006), Mining Clinical Trial Data. R package: www.rci.rutgers.edu/DM/ARF_1.0.zip