Speaker notes

• Each case is typically represented by 4 spots: 3 malignant lesions and 1 matched normal tissue. Each spot has a spot grade, sometimes assigned through H&E staining. The multiple-spots approach reduces the impact of possible missing spots and controls for the heterogeneity of the tumor. There are several "scores" per spot: Max (maximum intensity), Pos (percent of cells staining), PosMax (percent of cells staining with the maximum intensity), and spot grade (NL, 1, 2, ...); there are 4-10 spots of different grades per patient.
• One of the by-products of random forests is the intrinsic proximity. Since…
• Random forests not only work for supervised problems but can also be used for unsupervised problems, where the class outcome is not specified. The key idea was again formulated by Breiman. It goes like this…
• Breiman also proposed the two standard ways of generating synthetic observations. First, … Second, …
• Based on these ideas, we formulate the general random forest clustering framework as shown here.
• Here we show an MDS plot of all patients who have clear cell RCC, the most common histological subtype of RCC. RF clustering groups the patients into three clusters.
• We also tried hierarchical clustering using the Euclidean distance. The result is not as good as the one obtained with RF clustering: more patients are misclassified.
• Here we compare the Euclidean distance with the RF distance. We find that they are usually correlated. But since the RF distance…
• We can also use the clustering result to identify so-called "irregular" patients. Let us focus on the table first. Using RF clustering, we group all 366 RCC patients into three clusters. Clusters 1 and 2 contain mainly clear cell patients, and cluster 3 contains mainly non-clear cell patients. Clusters 1 and 2 share a similar survivorship function and do much worse, while the cluster 3 patients have a different survival profile and do much better. This is consistent with the fact that clear cell patients usually do worse than non-clear cell patients. We notice that there are 9 clear cell patients in cluster 3, the better-surviving cluster. When we plot the Kaplan-Meier estimates for these "irregular" clear cell patients, the other clear cell patients, and the non-clear cell patients, we find that the 9 patients behave very much like the non-clear cell patients. This is why we call them "irregular".
• Here we show the boxplots of the protein expression of the 8 tumor markers across the three clusters. This is one way of interpreting the clusters biologically.
Transcript of "Tree Based Methods for Analyzing Tissue Microarray Data"
1. Tree Based Methods for Analyzing Tissue Microarray Data
   Steve Horvath, Human Genetics and Biostatistics, University of California, Los Angeles

2. Acknowledgements
   • Horvath Lab: Yunda Huang, Xueli Liu Ph.D., Zeke Fang Ph.D., Tuyen Hoang
   • UCLA Tissue Microarray Core: David Seligson, Aarno Palotie
   • Clinicians: Hyung Kim, Arie Belldegrun

3. Contents
   • Statistical issues with tissue microarray (TMA) data
   • Random forest (RF) predictors
   • RF clustering
   • Application of RF clustering to TMA data
   • Supervised learning methods

4. Background: TMA data

5. Description of TMA data
   • TMAs are a high-throughput tool for validating newly identified biomarkers from genome-wide discovery studies.
   • The basic technique is summarized in Kononen et al. (1998).

6. Tissue Microarray (TMA) Technology (Kononen et al., Nature Medicine 1998)
   [Figure: donor block → array block → slide.]
   • Hundreds of tiny (typically 0.6 mm diameter) cylindrical tissue cores are densely and precisely arrayed into a single histologic paraffin block.
   • From this new array block, up to 300 serial 4-8 μm thick sections may be produced.
   • The sections are targets for fluorescence in situ hybridization (FISH) and for protein expression by immunohistochemical studies.

7. Non-normal and highly correlated

8. Several spots per pathology case, several "scores" per spot
   • Maximum intensity = Max (1-4)
   • Percent of cells staining = Pos (0-100)
   • Percent of cells staining with the maximum intensity = PosMax (0-100)
   • Spots have a spot grade (NL, 1, 2, ...), an indicator of informativeness
   • Each case is usually represented by 4 or more spots: 3 malignant lesions and 1 matched normal

9. Histogram of tumor marker expression scores: POS and MAX
   [Figure: histograms of percent of cells staining (POS) and maximum intensity (MAX) for P53, CA9, and EpCam.]

10. P53 and Ki67: Max versus Pos

11. Characteristics of TMA data
   • Non-normal, discrete, strongly correlated
   • Mixed variable types
   • Pooling (combining) spot measurements for each patient:
     - between 1 and 10 spots of different grades per patient
     - the current strategy pools tumor spots and forms the median, mean, minimum, or maximum (see the sketch below)
   • Message: tumor marker intensity is measured by up to 12 highly correlated staining scores → multicollinearity
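To make the pooling step concrete, here is a minimal sketch of the strategy described above, using pandas. The column names and spot values are hypothetical stand-ins, not the actual kidney data:

```python
import pandas as pd

# Hypothetical spot-level scores: several spots per case, each scored
# with Max (1-4) and Pos (0-100).
spots = pd.DataFrame({
    "case": [1, 1, 1, 1, 2, 2, 2],
    "max":  [3, 4, 2, 1, 1, 2, 1],
    "pos":  [80, 95, 40, 5, 10, 25, 0],
})

# Pool the spot measurements per patient, forming the median, mean,
# minimum, and maximum across spots, as described on the slide.
pooled = spots.groupby("case").agg(["median", "mean", "min", "max"])
print(pooled)
```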
12. Our main tool: random forest predictors
   • Unsupervised analysis of TMA data: RF clustering
   • Supervised analysis: RF-based pre-validation method

13. Background: random forest predictors (L. Breiman, 1999)

14. Random Forests (RFs)
   • RFs are a collection of tree predictors such that each tree depends on the values of an independently sampled random vector.

15. Classification and Regression Trees (CART)
   • By Leo Breiman (UC Berkeley), Jerry Friedman (Stanford University), Charles J. Stone (UC Berkeley), and Richard Olshen (Stanford University).

16. An example of CART
   • Goal: for patients admitted to the ER, predict who is at higher risk of heart attack.
   • Training data set:
     - number of subjects = 215
     - outcome variable = high/low risk
     - 19 noninvasive clinical and lab variables were used as predictors

17. CART construction (example tree)
   [Figure: decision tree splitting on "Is BP <= 91?", "Is age <= 62.5?", and "Is ST present?"; each node is labeled with its high/low risk proportions (e.g. high 12% / low 88%), and terminal nodes are classified as high or low risk.]

18. CART construction
   • BINARY RECURSIVE PARTITIONING (a minimal sketch follows this slide)
   • Binary: split a parent node into two child nodes
   • Recursive: each child node can be treated as a parent node
   • Partitioning: each split partitions the data set into mutually exclusive subsets
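A minimal sketch of binary recursive partitioning using scikit-learn's CART implementation. The three features and the simulated labels are illustrative stand-ins for the ER variables on the previous slides, not the original data:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
n = 215
# Simulated stand-ins for three of the 19 predictors: BP, age, ST changes.
X = np.column_stack([
    rng.normal(110, 25, n),       # systolic blood pressure
    rng.integers(30, 90, n),      # age in years
    rng.integers(0, 2, n),        # ST changes present (0/1)
])
y = rng.integers(0, 2, n)         # outcome: high (1) / low (0) risk

# Binary recursive partitioning: each internal node splits the parent
# into two children; splitting recurses until a stopping rule is met.
tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
print(export_text(tree, feature_names=["bp", "age", "st"]))
```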
19. RF construction …

20. Prediction by plurality voting
   • The forest consists of N trees.
   • Class prediction: each tree votes for a class; the predicted class for an observation x is the plurality vote, argmax_C Σ_k I[f_k(x, T) = C] (see the sketch below).
   • Regression random forest: the predicted value is the average of the tree predictions.
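A small sketch of plurality voting over tree predictions in pure NumPy; the vote matrix below is made up for illustration:

```python
import numpy as np

def plurality_vote(tree_votes):
    """tree_votes: (n_trees, n_samples) array of integer class votes.
    Returns, for each sample, the class receiving the most votes."""
    n_classes = tree_votes.max() + 1
    # Count votes per class in each column (one column per sample).
    counts = np.apply_along_axis(
        lambda v: np.bincount(v, minlength=n_classes), 0, tree_votes)
    return counts.argmax(axis=0)      # argmax_C sum_k I[f_k(x) = C]

# Three trees voting on four observations:
votes = np.array([[0, 1, 1, 0],
                  [0, 1, 0, 0],
                  [1, 1, 0, 0]])
print(plurality_vote(votes))          # -> [0 1 0 0]
```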
21. Clustering with random forest predictors

22. Intrinsic proximity measure
   • Terminal tree nodes contain few observations.
   • If case i and case j both land in the same terminal node, increase the proximity between i and j by 1.
   • At the end of the run, divide by 2 × the number of trees.
   • Dissimilarity = sqrt(1 − proximity) (see the sketch below)
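A minimal sketch of the proximity computation with scikit-learn. For simplicity it normalizes by the number of trees rather than the 2 × trees on the slide, and it skips the out-of-bag bookkeeping of Breiman's original implementation:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_dissimilarity(forest, X):
    """Proximity(i, j) = fraction of trees in which cases i and j land
    in the same terminal node; dissimilarity = sqrt(1 - proximity)."""
    leaves = forest.apply(X)                  # (n_samples, n_trees) leaf ids
    n_samples, n_trees = leaves.shape
    prox = np.zeros((n_samples, n_samples))
    for t in range(n_trees):
        # +1 wherever cases i and j share a terminal node in tree t.
        same_leaf = leaves[:, t][:, None] == leaves[:, t][None, :]
        prox += same_leaf
    prox /= n_trees
    return np.sqrt(1.0 - prox)
```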
23. Casting an unsupervised problem into a supervised RF problem
   • Key idea (Breiman 1999):
     - label the observed data as class 1
     - generate synthetic observations and label them as class 2
     - construct an RF predictor to distinguish class 1 from class 2
     - use the resulting dissimilarity measure in the unsupervised analysis

24. How to generate synthetic observations
   • Synthetic observations are simulated to contain no clusters, e.g. by randomly sampling from the product of the empirical marginal distributions of the input (see the sketch below).
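The two steps on slides 23-24 can be sketched together: simulate synthetic observations from the product of the empirical marginals, then train a forest to separate observed from synthetic data. The data here are illustrative:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def synthetic_from_marginals(X, rng):
    """Resample each column independently from its own empirical
    distribution, destroying any joint (cluster) structure."""
    X_syn = np.empty_like(X)
    for j in range(X.shape[1]):
        X_syn[:, j] = rng.choice(X[:, j], size=X.shape[0], replace=True)
    return X_syn

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))        # stand-in for observed marker scores
X_syn = synthetic_from_marginals(X, rng)

# Label observed data as class 1 and synthetic data as class 2, then
# train an RF to distinguish them (Breiman's unsupervised trick).
X_all = np.vstack([X, X_syn])
y_all = np.r_[np.ones(len(X)), 2 * np.ones(len(X_syn))]
forest = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_all, y_all)
```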
25. RF clustering
   • Compute a distance matrix from the RF: distance matrix = sqrt(1 − proximity matrix).
   • Compute the first 2-3 classical multidimensional scaling (MDS) coordinates based on the distance matrix.
   • Conduct a partitioning around medoids (PAM) clustering analysis:
     - input parameter = number of clusters k
     - use the Euclidean distance between the resulting scaling points
   A sketch of this pipeline follows.
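A sketch of the full pipeline under stated assumptions: classical (Torgerson) scaling is implemented by hand, and PAM is assumed to come from the scikit-learn-extra package's KMedoids; any PAM routine would do. The placeholder distance matrix stands in for the RF dissimilarity from the earlier sketch:

```python
import numpy as np
from sklearn_extra.cluster import KMedoids   # assumed PAM implementation

def classical_mds(D, k=2):
    """Classical (Torgerson) scaling of a distance matrix D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J              # double-center squared distances
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]         # k largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0))

# Placeholder pairwise distances; in the real pipeline this would be
# dist = rf_dissimilarity(forest, X) from the earlier sketch.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 3))
dist = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))

coords = classical_mds(dist, k=2)            # first 2 scaling coordinates
labels = KMedoids(n_clusters=3, method="pam").fit_predict(coords)
```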
26. Theoretical study of RF clustering
   • Ref: Using random forest proximity for unsupervised learning. BIOKDD-CBGI'03, 7th Joint Conference on Information Sciences, Cary, North Carolina.

27. Applying Random Forest Clustering to Tissue Microarray Data: Application to Kidney Cancer
   Tao Shi and Steve Horvath

28. Scientific question: can one discover cancer subtypes based on the protein expression patterns of tumor markers?

29. Why use RF clustering for TMA data?
   • No need to transform the often highly skewed features (the method is based on ranks of the features).
   • Natural way of weighting the tumor markers' contributions to the dissimilarity.
   • Elegant way to deal with missing covariates.
   • The intrinsic proximity matrix handles mixed variable types well.

30. Kidney multi-marker data
   • 366 patients with renal cell carcinoma (RCC) admitted to UCLA between 1989 and 2000.
   • Immunohistochemical measures of a total of 8 tumor markers were obtained from tissue microarrays constructed from these patients' tumor samples.

31. MDS plot of clear cell patients
   [Figure: MDS plot, with patients labeled and colored by their RF cluster.]

32. Interpreting the clusters in terms of survival

   Clustering label | Non-clear cell patients | Clear cell patients
   1                |  0                      |  92
   2                | 20                      | 215
   3                | 30                      |   9

33. Hierarchical clustering with Euclidean distance leads to less satisfactory results

   Clustering label | Non-clear cell patients | Clear cell patients
   1                |  9 (20)                 | 286 (307)
   2                | 41 (30)                 |  30 (9)

   (The numbers in parentheses give the corresponding RF clustering grouping, shown in red on the original slide.)

34. Euclidean vs. RF distance
   [Figure: scatterplot of the RF distance against the Euclidean distance.]

35. Molecular grouping vs. pathological grouping
   [Figure: Kaplan-Meier curves of survival against time to death (years). Molecular grouping: 327 patients in clusters 1 and 2 vs. 39 patients in cluster 3, p = 9.03e-05. Pathological grouping: 316 clear cell vs. 50 non-clear cell patients, p = 0.0229.]
   • Message: molecular grouping is superior to pathological grouping (a sketch of this kind of survival comparison follows).
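A hedged sketch of the survival comparison using the lifelines package. The time, event, and cluster arrays are randomly generated stand-ins for the patients' follow-up times, death indicators, and RF cluster labels:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Hypothetical data: follow-up (years), death indicator, RF cluster label.
rng = np.random.default_rng(0)
time = rng.exponential(5, size=366)
event = rng.integers(0, 2, size=366)
cluster = rng.choice([1, 2, 3], size=366)

a = cluster <= 2                      # clusters 1 and 2 pooled
b = cluster == 3

# Kaplan-Meier curves for the two molecular groups.
km = KaplanMeierFitter()
ax = km.fit(time[a], event[a], label="clusters 1+2").plot_survival_function()
km.fit(time[b], event[b], label="cluster 3").plot_survival_function(ax=ax)

# Log-rank test of the difference between the groups.
res = logrank_test(time[a], time[b],
                   event_observed_A=event[a], event_observed_B=event[b])
print(res.p_value)
```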
36. Identify "irregular" patients
   [Figure: Kaplan-Meier curves for 9 irregular clear cell patients, 307 regular clear cell patients, and 50 non-clear cell patients; p = 0.00522.]

   Clustering label | Non-clear cell patients | Clear cell patients
   1                |  0                      |  92
   2                | 20                      | 215
   3                | 30                      |   9

   • Message: molecular grouping can be used to refine the clear cell definition.

37. Detect novel cancer subtypes
   • Group clear cell grade 2 patients into two clusters with significantly different survival.
   [Figure: K-M curves of time to death (years); p value = 0.0125.]

38. Results: TMA clustering
   • Clusters reproduce well-known clinical subgroups, e.g. global expression differences between clear cell and non-clear cell patients.
   • RF clustering works better than clustering based on the Euclidean distance for TMA data.
   • RF clustering allows one to identify "outlying" tumor samples.
   • It can detect previously unknown subgroups.

39. Boxplots of tumor marker expression vs. cluster
   • Message: the clusters can be explained in terms of tumor expression values, i.e., in terms of biological pathways.

40. Conclusions
   • There is a need to develop tailor-made data mining methods for TMA data. Major differences:
     - highly non-normal data
     - the Euclidean distance metric seems to be sub-optimal for TMA data
   • Tree- and forest-based methods work well for kidney and prostate TMA data.