  • Cancer diagnosis is one of the most important emerging clinical applications of gene expression microarray technology.
  • Most real-life diagnostic tasks are not binary.
Transcript:

    1. A Comprehensive Evaluation of Multicategory Classification Methods for Microarray Gene Expression Cancer Diagnosis. Presented by: Renikko Alleyne
    2. Outline
       • Motivation
       • Major Concerns
       • Methods
         – SVMs
         – Non-SVMs
         – Ensemble Classification
       • Datasets
       • Experimental Design
       • Gene Selection
       • Performance Metrics
       • Overall Design
       • Results
       • Discussion & Limitations
       • Contributions
       • Conclusions
    3. Why? Clinical Applications of Gene Expression Microarray Technology
       • Gene Discovery
       • Disease Diagnosis (Cancer, Infectious Diseases)
       • Drug Discovery
       • Prediction of clinical outcomes in response to treatment
    4. GEMS (Gene Expression Model Selector)
       • Takes microarray data and creates powerful and reliable cancer diagnostic models
       • Equipped with the best classifier, gene selection, and cross-validation methods
       • Based on an evaluation of the major algorithms for multicategory classification, gene selection methods, ensemble classifier methods, and 2 cross-validation designs
       • Evaluated on 11 datasets spanning 74 diagnostic categories, 41 cancer types, and 12 normal tissue types
    5. Major Concerns
       • Prior studies conducted limited experiments in terms of the number of classifiers, gene selection algorithms, number of datasets, and types of cancer involved.
       • They cannot determine which classifier performs best.
       • The best combinations of classification and gene selection algorithms across most array-based cancer datasets are poorly understood.
       • Overfitting.
       • Underfitting.
    6. Goals for the Development of an Automated System that Creates High-Quality Diagnostic Models for Use in Clinical Applications
       • Investigate which classifier currently available for gene expression diagnosis performs best across many cancer types
       • Determine how classifiers interact with existing gene selection methods in datasets with varying sample sizes, numbers of genes, and cancer types
       • Determine whether diagnostic performance can be increased further using meta-learning in the form of ensemble classification
       • Determine how to parameterize the classifiers and gene selection procedures to avoid overfitting
    7. Why Use Support Vector Machines (SVMs)?
       • Achieve superior classification performance compared to other learning algorithms
       • Fairly insensitive to the curse of dimensionality
       • Efficient enough to handle very large-scale classification in both samples and variables
    8. How SVMs Work
       • Objects in the input space are mapped using a set of mathematical functions (kernels).
       • The mapped objects in the feature (transformed) space are linearly separable, so instead of drawing a complex curve, an optimal line (maximum-margin hyperplane) can be found to separate the two classes.
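The kernel mapping described on the slide above can be made concrete with the widely used Gaussian (RBF) kernel, which computes the inner product of two inputs in an implicit high-dimensional feature space without ever constructing that space. A minimal Python sketch (illustrative only; the study's experiments ran in Matlab):

```python
import math

def rbf_kernel(x, z, gamma=1.0):
    """Gaussian (RBF) kernel: similarity of x and z in an implicit
    feature space, so a linear separator there corresponds to a
    nonlinear boundary in the original input space."""
    squared_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-gamma * squared_dist)
```

Identical points score 1.0 and similarity decays toward 0 with distance, which is what lets a maximum-margin hyperplane in feature space bend around the data in input space.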
    9. SVM Classification Methods
       • SVMs: Binary SVMs and Multiclass SVMs
       • Multiclass SVMs: OVR, OVO, DAGSVM, WW, CS
    10. Binary SVMs
        • Main idea is to identify the maximum-margin hyperplane that separates the training instances.
        • Selects the hyperplane that maximizes the width of the gap between the two classes.
        • The hyperplane is specified by support vectors.
        • New instances are classified according to which side of the hyperplane they fall on.
    11. Multiclass SVMs 1: One-versus-rest (OVR)
        • Simplest MC-SVM
        • Constructs k binary SVM classifiers: each class (positive) vs. all other classes (negative)
        • Computationally expensive, because there are k quadratic programming (QP) optimization problems of size n to solve
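At prediction time, OVR combines its k binary classifiers by picking the class whose scorer returns the largest decision value. A minimal sketch; the per-class scorers here are hypothetical stand-ins for trained binary SVM decision functions:

```python
def ovr_predict(scorers, x):
    """One-versus-rest combination: `scorers` maps each class label to
    a binary decision function f_c(x), where a larger value means x
    looks more like class c. Predict the class with the largest value."""
    return max(scorers, key=lambda c: scorers[c](x))
```

For example, with toy linear scorers `{"ALL": lambda x: x - 2.0, "AML": lambda x: 2.0 - x}`, an input of 5.0 is assigned to "ALL".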
    12. Multiclass SVMs 2: One-versus-one (OVO)
        • Constructs binary SVM classifiers for all k(k-1)/2 pairs of classes
        • A decision function assigns an instance to the class that receives the largest number of votes (Max Wins strategy)
        • Computationally less expensive
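The Max Wins strategy on the slide above can be sketched directly: every pairwise model casts one vote, and the most-voted class wins. The pairwise classifiers here are hypothetical stand-ins for trained binary SVMs:

```python
from collections import Counter
from itertools import combinations

def ovo_predict(pairwise, classes, x):
    """One-versus-one with Max Wins: run the binary classifier for
    every pair of classes; each returns the label it favors for x,
    and the class collecting the most votes is predicted."""
    votes = Counter()
    for a, b in combinations(classes, 2):
        votes[pairwise[(a, b)](x)] += 1  # each pairwise model votes a or b
    return votes.most_common(1)[0][0]
```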
    13. Multiclass SVMs 3: DAGSVM
        • Constructs a decision tree in which each node is a binary SVM for a pair of classes (p, q)
        • k leaves: k classification decisions
        • Each non-leaf node (p, q) has two edges
          – Left edge: "not p" decision
          – Right edge: "not q" decision
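The DAG traversal above amounts to eliminating one candidate class per node until a single class remains, so only k-1 binary evaluations are needed. A minimal sketch with hypothetical pairwise classifiers standing in for trained binary SVMs:

```python
def dagsvm_predict(pairwise, classes, x):
    """DAG evaluation: keep a list of candidate classes; at each node
    compare the first and last candidates (p, q) with their binary SVM
    and follow the "not p" or "not q" edge by dropping the loser.
    Reaches a leaf after k-1 binary evaluations."""
    candidates = list(classes)
    while len(candidates) > 1:
        p, q = candidates[0], candidates[-1]
        winner = pairwise[(p, q)](x)
        candidates.remove(p if winner == q else q)
    return candidates[0]
```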
    14. Multiclass SVMs 4 & 5: Weston & Watkins (WW) and Crammer & Singer (CS)
        • Construct a single classifier by maximizing the margin between all the classes simultaneously
        • Both require the solution of a single QP problem of size (k-1)n, but the CS MC-SVM uses fewer slack variables in the constraints of the optimization problem, making it computationally less expensive
    15. Non-SVM Classification Methods
        • KNN, NN, PNN
    16. K-Nearest Neighbors (KNN)
        • For each case to be classified, locate the k closest members of the training dataset.
        • A Euclidean distance measure is used to calculate the distance between the training dataset members and the target case.
        • The weighted sum of the variable of interest is found for the k nearest neighbors.
        • Repeat this procedure for the other target set cases.
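The steps above can be sketched in a few lines. This sketch uses a plain majority vote among the k nearest neighbors for simplicity; the slide's variant weights the neighbors (e.g. by inverse distance):

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, query, k=3):
    """Classify `query` by a vote among the k training samples that
    are closest in Euclidean distance (unweighted variant)."""
    ranked = sorted(
        (math.dist(x, query), label) for x, label in zip(train_X, train_y)
    )
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]
```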
    17. Backpropagation Neural Networks (NN) & Probabilistic Neural Networks (PNN)
        • Backpropagation Neural Networks:
          – Feed-forward neural networks with signals propagated forward through the layers of units.
          – The unit connections have weights, which are adjusted when there is an error by the backpropagation learning algorithm.
        • Probabilistic Neural Networks:
          – Similar in design to NNs, except that the hidden layer is made up of a competitive layer and a pattern layer, and the unit connections do not have weights.
    18. Ensemble Classification Methods
        • In order to improve performance, the outputs of N base classifiers (Classifier 1 … Classifier N) are combined into an ensembled classifier.
        • Combination techniques: majority voting, decision trees, MC-SVM (OVR, OVO, DAGSVM)
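The simplest of the combination techniques listed above, majority voting, can be sketched as follows (illustrative only; the base classifiers here are arbitrary callables standing in for the trained models):

```python
from collections import Counter

def majority_vote(classifiers, x):
    """Ensemble by majority voting: each base classifier predicts a
    label for x, and the most frequent label is the ensemble output."""
    predictions = [clf(x) for clf in classifiers]
    return Counter(predictions).most_common(1)[0][0]
```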
    19. Datasets & Data Preparatory Steps
        • Nine multicategory cancer diagnosis datasets
        • Two binary cancer diagnosis datasets
        • All datasets were produced by oligonucleotide-based technology
        • Oligonucleotides (genes) with absent calls in all samples were excluded from the analysis to reduce noise
    20. Datasets
    21. Experimental Designs
        • Two experimental designs were used to obtain reliable performance estimates and avoid overfitting.
        • Data are split into mutually exclusive sets.
        • The outer loop estimates performance by training on all splits but one, which is used for testing.
        • The inner loop determines the best parameters of the classifier.
    22. Experimental Designs
        • Design I uses stratified 10-fold cross-validation in both loops, while Design II uses 10-fold cross-validation in its inner loop and leave-one-out cross-validation in its outer loop.
        • Building the final diagnostic model involves:
          – Finding the best parameters for classification using a single loop of cross-validation
          – Building the classifier on all data using the previously found best parameters
          – Estimating a conservative bound on the classifier's accuracy using either design
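The nested cross-validation of Design I can be sketched as follows. The folds here are plain round-robin index splits; the study stratifies them so each fold preserves the proportions of the diagnostic categories:

```python
def nested_cv_splits(n_samples, outer_k=10, inner_k=10):
    """Sketch of Design I's nested cross-validation: a 10-fold outer
    loop estimates performance, and for each outer training set a
    10-fold inner loop is used to select classifier parameters.
    Yields (outer_test_indices, inner_folds_over_outer_train)."""
    indices = list(range(n_samples))
    outer_folds = [indices[i::outer_k] for i in range(outer_k)]
    for test_fold in outer_folds:
        held_out = set(test_fold)
        outer_train = [i for i in indices if i not in held_out]
        inner_folds = [outer_train[i::inner_k] for i in range(inner_k)]
        yield test_fold, inner_folds
```

Each outer test fold is never seen by the inner parameter-selection loop, which is what keeps the outer performance estimate unbiased.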
    23. Gene Selection Methods
        • Ratio of between-categories to within-category sum of squares (BW)
        • Signal-to-noise scores (S2N): S2N-OVR and S2N-OVO
        • Kruskal-Wallis non-parametric one-way ANOVA (KW)
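As an illustration of the S2N family, the score for one gene in an OVR-style split (one category vs. the rest) is the difference of the group means divided by the sum of the group standard deviations. A sketch, using the sample standard deviation; some implementations use the population version, and exact details may differ from the study's:

```python
from statistics import mean, stdev

def s2n_score(expression, labels, target):
    """Signal-to-noise score of one gene for `target` vs. the rest:
    (mean_in - mean_out) / (std_in + std_out). Genes with large
    absolute scores discriminate the target category well."""
    inside = [v for v, l in zip(expression, labels) if l == target]
    outside = [v for v, l in zip(expression, labels) if l != target]
    return (mean(inside) - mean(outside)) / (stdev(inside) + stdev(outside))
```

Ranking all genes by absolute score and keeping the top ones is the selection step; S2N-OVO instead scores each pair of categories.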
    24. Performance Metrics
        • Accuracy
          – Easy to interpret
          – Simplifies statistical testing
          – Sensitive to prior class probabilities
          – Does not describe the actual difficulty of the decision problem for unbalanced distributions
        • Relative classifier information (RCI)
          – Corrects for differences in:
            • Prior probabilities of the diagnostic categories
            • Number of categories
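The caveat that accuracy is sensitive to prior class probabilities is easy to demonstrate: a classifier that always predicts the majority category scores high on an unbalanced dataset while carrying no diagnostic information. An illustrative sketch:

```python
def accuracy(y_true, y_pred):
    """Fraction of correctly predicted labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# 9 healthy samples vs. 1 tumor sample: always predicting "healthy"
# reaches 90% accuracy while ignoring the data entirely.
truth = ["healthy"] * 9 + ["tumor"]
trivial_pred = ["healthy"] * 10
majority_baseline = accuracy(truth, trivial_pred)
```

This distortion is what RCI corrects for by accounting for the category priors and the number of categories.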
    25. Overall Research Design
        • Stage 1: Conducted a factorial design involving datasets & classifiers without gene selection
        • Stage 2: Conducted a factorial design with gene selection, using the datasets for which the full gene sets yielded poor performance
        • 2.6 million diagnostic models were generated
        • One model was selected for each combination of algorithm and dataset
    26. Statistical Comparison Among Classifiers
        • To test that differences between the best method and the other methods are non-random
        • Null hypothesis H0: classification algorithm X is as good as Y
        • Obtain the permutation distribution of the performance difference Δ(X, Y) by repeatedly rearranging the outcomes of X and Y at random
        • Compute the p-value of Δ(X, Y) being greater than or equal to the observed difference over 10,000 permutations
        • If p < 0.05, reject H0: algorithm X is not as good as Y in terms of classification accuracy
        • If p ≥ 0.05, accept H0: algorithm X is as good as Y in terms of classification accuracy
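The permutation procedure above can be sketched as follows. This is an illustrative implementation over paired per-sample outcomes (1 = correct, 0 = error), not the study's code:

```python
import random

def permutation_p_value(outcomes_x, outcomes_y, n_perm=10000, seed=0):
    """Permutation test for H0: "algorithm X is as good as Y".
    Each permutation randomly swaps the two algorithms' outcomes per
    sample; the p-value is the fraction of permuted differences at
    least as large as the observed difference."""
    rng = random.Random(seed)
    observed = abs(sum(outcomes_x) - sum(outcomes_y))
    at_least_as_large = 0
    for _ in range(n_perm):
        diff = 0
        for ox, oy in zip(outcomes_x, outcomes_y):
            if rng.random() < 0.5:
                ox, oy = oy, ox  # swap which algorithm got this outcome
            diff += ox - oy
        at_least_as_large += abs(diff) >= observed
    return at_least_as_large / n_perm
```

A p-value below 0.05 rejects H0, i.e. the observed gap between X and Y is unlikely to be random.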
    27. Performance Results (Accuracies) without Gene Selection Using Design I
    28. Performance Results (RCI) without Gene Selection Using Design I
    29. Total Time of Classification Experiments without Gene Selection for All 11 Datasets and Two Experimental Designs
        • Executed in a Matlab R13 environment on 8 dual-CPU workstations connected in a cluster
        • Fastest MC-SVMs: WW & CS
        • Fastest overall algorithm: KNN
        • Slowest MC-SVM: OVR
        • Slowest overall algorithms: NN and PNN
    30. Performance Results (Accuracies) with Gene Selection Using Design I
        • Gene selection improved performance: the 4 gene selection methods were applied to the 4 most challenging datasets
    31. Performance Results (RCI) with Gene Selection Using Design I
        • Gene selection improved performance: the 4 gene selection methods were applied to the 4 most challenging datasets
    32. Discussion & Limitations
        • Limitations:
          – Use of only the two performance metrics
          – Choice of the KNN, PNN, and NN classifiers
        • Future research:
          – Improve existing gene selection procedures by selecting the optimal number of genes by cross-validation
          – Apply multivariate Markov blanket and local neighborhood algorithms
          – Extend comparisons with more MC-SVMs as they become available
          – Update the GEMS system to make it more user-friendly
    33. Contributions of the Study
        • Conducted the most comprehensive systematic evaluation to date of multicategory diagnosis algorithms applied to the majority of multicategory cancer-related human gene expression datasets.
        • Created the GEMS system, which automates the experimental procedures in the study in order to:
          – Develop optimal classification models for the domain of cancer diagnosis with microarray gene expression data
          – Estimate their performance in future patients
    34. Conclusions
        • MC-SVMs are the best family of algorithms for these types of data and medical tasks; they outperform non-SVM machine learning techniques.
        • Among the MC-SVM methods, OVR, CS, and WW are the best with respect to classification performance.
        • Gene selection can improve the performance of MC-SVM and non-SVM methods.
        • Ensemble classification does not further improve the classification performance of the best MC-SVM methods.