An introduction to machine learning and probabilistic graphical models

  1. An introduction to machine learning and probabilistic graphical models
     Kevin Murphy, MIT AI Lab
     Presented at Intel's workshop on "Machine learning for the life sciences", Berkeley, CA, 3 November 2003
  2. Overview
     • Supervised learning
     • Unsupervised learning
     • Graphical models
     • Learning relational models
     Thanks to Nir Friedman, Stuart Russell, Leslie Kaelbling and various web sources for letting me use many of their slides.
  3. Supervised learning
     Learn to approximate a function F(x1, x2, x3) -> t from a training set of (x, t) pairs.
       Color | Shape  | Size  | Output
       Red   | Arrow  | Small | N
       Blue  | Star   | Small | Y
       Blue  | Square | Small | Y
       Blue  | Torus  | Big   | Y
     [Figure: novel test objects to be labeled yes / no.]
  4. Supervised learning
     Training data -> Learner -> Hypothesis; the hypothesis is then applied to testing data to make predictions.
     [Figure: an abbreviated training table over (X1, X2, X3, T), as on the previous slide, fed to the learner; a testing table with the same inputs but unknown outputs (?) to be predicted.]
  5. Key issue: generalization
     Can't just memorize the training set (overfitting).
     [Figure: new test objects whose labels (yes / no) are unknown and must be predicted.]
  6. Hypothesis spaces
     • Decision trees
     • Neural networks
     • K-nearest neighbors
     • Naïve Bayes classifier
     • Support vector machines (SVMs)
     • Boosted decision stumps
     • …
  7. Perceptron (neural net with no hidden layers)
     [Figure: a linear decision boundary separating linearly separable data.]
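     Not part of the original deck: a minimal NumPy sketch of the classic perceptron learning rule on linearly separable data. The toy dataset, learning rate, and epoch count are invented for illustration.

     ```python
     import numpy as np

     def perceptron(X, y, epochs=100, lr=1.0):
         """Train a perceptron: y in {-1, +1}, X has shape (n_samples, n_features)."""
         w = np.zeros(X.shape[1])
         b = 0.0
         for _ in range(epochs):
             errors = 0
             for xi, yi in zip(X, y):
                 if yi * (xi @ w + b) <= 0:   # misclassified point
                     w += lr * yi * xi        # nudge the hyperplane toward it
                     b += lr * yi
                     errors += 1
             if errors == 0:                  # converged (data is separable)
                 break
         return w, b

     # Toy linearly separable data (invented)
     X = np.array([[2.0, 1.0], [1.5, 2.0], [-1.0, -1.5], [-2.0, -0.5]])
     y = np.array([+1, +1, -1, -1])
     w, b = perceptron(X, y)
     print(np.sign(X @ w + b))   # should reproduce y
     ```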
  8. Which separating hyperplane?
  9. The linear separator with the largest margin is the best one to pick.
     [Figure: the margin between the separating hyperplane and the nearest data points.]
  10. What if the data is not linearly separable?
  11. Kernel trick
      The kernel implicitly maps the data from 2D (x1, x2) to 3D (z1, z2, z3), making the problem linearly separable.
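      Not on the original slide: the 2D-to-3D mapping can be made explicit. For the polynomial kernel k(x, x') = (x · x')², the feature map φ(x1, x2) = (x1², √2·x1·x2, x2²) satisfies φ(x) · φ(x') = k(x, x'), so inner products in the 3D space can be computed without ever constructing φ. A small check:

      ```python
      import numpy as np

      def phi(x):
          """Explicit feature map for the degree-2 polynomial kernel (no bias term)."""
          x1, x2 = x
          return np.array([x1**2, np.sqrt(2) * x1 * x2, x2**2])

      def k(x, z):
          """Same quantity computed directly in the original 2D space."""
          return float(np.dot(x, z)) ** 2

      x = np.array([1.0, 2.0])
      z = np.array([3.0, -1.0])
      print(np.dot(phi(x), phi(z)), k(x, z))   # both equal 1.0
      ```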
  12. Support Vector Machines (SVMs)
      Two key ideas:
      • Large margins
      • Kernel trick
  13. Boosting
      Simple classifiers (weak learners) can have their performance boosted by taking weighted combinations. Boosting maximizes the margin.
  14. Supervised learning success stories
      • Face detection
      • Steering an autonomous car across the US
      • Detecting credit card fraud
      • Medical diagnosis
      • …
  15. Unsupervised learning
      What if there are no output labels?
  16. K-means clustering
      • Guess the number of clusters, K
      • Guess initial cluster centers, μ1, μ2
      • Assign data points xi to the nearest cluster center
      • Re-compute the cluster centers based on the assignments
      • Iterate
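      Not on the original slide: a minimal NumPy sketch of the steps above (Lloyd's algorithm). The synthetic data, K, and iteration cap are invented for illustration.

      ```python
      import numpy as np

      def kmeans(X, K, iters=100, seed=0):
          """Alternate nearest-center assignment and center re-estimation."""
          rng = np.random.default_rng(seed)
          centers = X[rng.choice(len(X), K, replace=False)]   # guess initial centers
          for _ in range(iters):
              # Assign each point to its nearest center
              d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
              labels = d.argmin(axis=1)
              # Recompute each center as the mean of its assigned points
              new_centers = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                                      else centers[k] for k in range(K)])
              if np.allclose(new_centers, centers):
                  break
              centers = new_centers
          return centers, labels

      X = np.vstack([np.random.randn(50, 2) + [0, 0],
                     np.random.randn(50, 2) + [5, 5]])
      centers, labels = kmeans(X, K=2)
      print(centers)
      ```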
  17. AutoClass (Cheeseman et al., 1986)
      • EM algorithm for mixtures of Gaussians
      • "Soft" version of K-means
      • Uses a Bayesian criterion to select K
      • Discovered new types of stars from spectral data
      • Discovered new classes of proteins and introns from DNA/protein sequence databases
  18. Hierarchical clustering
  19. Principal Component Analysis (PCA)
      PCA seeks a projection that best represents the data in a least-squares sense. It reduces the dimensionality of the feature space by restricting attention to those directions along which the scatter of the cloud is greatest.
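      Not on the original slide: a hedged NumPy sketch of PCA via the SVD of the centered data matrix; the top right-singular vectors are the directions of greatest scatter. The synthetic data is invented for illustration.

      ```python
      import numpy as np

      def pca(X, n_components):
          """Project X onto its directions of greatest variance (via SVD)."""
          Xc = X - X.mean(axis=0)                     # center the cloud
          U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
          components = Vt[:n_components]              # top principal directions
          explained_var = (S[:n_components] ** 2) / (len(X) - 1)
          return Xc @ components.T, components, explained_var

      rng = np.random.default_rng(0)
      # Points scattered mostly along the direction (3, 1), plus a little noise
      X = rng.normal(size=(200, 1)) @ np.array([[3.0, 1.0]]) + 0.1 * rng.normal(size=(200, 2))
      Z, comps, var = pca(X, n_components=1)
      print(comps, var)   # first direction is approximately (3, 1) normalized
      ```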
  20. Discovering nonlinear manifolds
  21. Combining supervised and unsupervised learning
  22. Discovering rules (data mining)
      Find the most frequent patterns (association rules), e.g.:
      • Num in household = 1 ^ num children = 0 => language = English
      • Language = English ^ Income < $40k ^ Married = false ^ num children = 0 => education in {college, grad school}
        Sex | Married | Age | Occup.  | Income | Educ.
        M   | S       | 22  | Student | $10k   | MA
        F   | S       | 24  | Student | $20k   | PhD
        M   | M       | 30  | Doctor  | $80k   | MD
        F   | M       | 60  | Retired | $30k   | HS
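      Not on the original slide: a small sketch of how the support and confidence of a rule like the second one above would be computed; the toy transactions and attribute strings are invented for illustration.

      ```python
      # Each "transaction" is the set of attribute=value facts for one record (invented data).
      transactions = [
          {"num_children=0", "language=English", "married=False", "income<40k"},
          {"num_children=0", "language=English", "married=True"},
          {"num_children=2", "language=Spanish", "married=True", "income<40k"},
          {"num_children=0", "language=English", "married=False", "income<40k"},
      ]

      def support(itemset):
          """Fraction of transactions containing every item in the itemset."""
          return sum(itemset <= t for t in transactions) / len(transactions)

      def confidence(lhs, rhs):
          """P(rhs | lhs) estimated from the transactions."""
          return support(lhs | rhs) / support(lhs)

      lhs = {"num_children=0", "married=False", "income<40k"}
      rhs = {"language=English"}
      print(support(lhs | rhs), confidence(lhs, rhs))   # 0.5 and 1.0
      ```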
  23. Unsupervised learning: summary
      • Clustering
      • Hierarchical clustering
      • Linear dimensionality reduction (PCA)
      • Non-linear dimensionality reduction
      • Learning rules
  24. Discovering networks
      From data visualization to causal discovery.
  25. Networks in biology
      • Most processes in the cell are controlled by networks of interacting molecules:
        • Metabolic networks
        • Signal transduction networks
        • Regulatory networks
      • Networks can be modeled at multiple levels of detail/realism (in decreasing order of detail):
        • Molecular level
        • Concentration level
        • Qualitative level
  26. Molecular level: lysis-lysogeny circuit in lambda phage
      Arkin et al. (1998), Genetics 149(4):1633-48
      • 5 genes, 67 parameters based on 50 years of research
      • Stochastic simulation required a supercomputer
  27. Concentration level: metabolic pathways
      Usually modeled with differential equations.
      [Figure: a small network of genes g1-g5 with weighted interactions such as w12, w23, w55.]
  28. Qualitative level: Boolean networks
  29. Probabilistic graphical models
      • Supports graph-based modeling at various levels of detail
      • Models can be learned from noisy, partial data
      • Can model "inherently" stochastic phenomena, e.g., molecular-level fluctuations…
      • But can also model deterministic, causal processes
      "The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful. Therefore the true logic for this world is the calculus of probabilities." -- James Clerk Maxwell
      "Probability theory is nothing but common sense reduced to calculation." -- Pierre Simon Laplace
  30. Graphical models: outline
      • What are graphical models?
      • Inference
      • Structure learning
  31. Simple probabilistic model: linear regression
      Y = α + βX + noise
      [Figure: the deterministic (functional) relationship between X and Y, i.e., the regression line.]
  32. Simple probabilistic model: linear regression
      Y = α + βX + noise
      "Learning" = estimating the parameters α, β, σ from (x, y) pairs. They can be estimated by least squares:
      β̂ = Σi (xi - x̄)(yi - ȳ) / Σi (xi - x̄)², and α̂ = ȳ - β̂x̄, where x̄, ȳ are the empirical means;
      σ̂² is the residual variance, (1/n) Σi (yi - α̂ - β̂xi)².
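      Not on the original slide: a minimal NumPy sketch of those least-squares estimates on synthetic data (true α = 2, β = 0.5, invented for illustration).

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      x = rng.uniform(0, 10, size=100)
      y = 2.0 + 0.5 * x + rng.normal(scale=0.3, size=100)   # true alpha=2, beta=0.5

      xbar, ybar = x.mean(), y.mean()
      beta = np.sum((x - xbar) * (y - ybar)) / np.sum((x - xbar) ** 2)
      alpha = ybar - beta * xbar
      resid = y - (alpha + beta * x)
      sigma2 = np.mean(resid ** 2)          # residual variance

      print(alpha, beta, sigma2)            # close to 2.0, 0.5, 0.09
      ```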
  33. Piecewise linear regression
      Latent "switch" variable: a hidden process at work.
  34. Probabilistic graphical model for piecewise linear regression
      • A hidden variable Q chooses which set of parameters to use for predicting the output Y.
      • The value of Q depends on the value of the input X.
      • This is an example of a "mixture of experts".
      Learning is harder because Q is hidden, so we don't know which data points to assign to each line; this can be solved with EM (cf. K-means).
      [Figure: a graphical model over the input X, the switch Q, and the output Y.]
  35. Classes of graphical models
      • Probabilistic models
        • Graphical models
          • Directed: Bayes nets, DBNs
          • Undirected: MRFs
  36. Bayesian Networks
      • Qualitative part: a directed acyclic graph (DAG); nodes are random variables, edges are direct influences (Burglary -> Alarm <- Earthquake, Earthquake -> Radio, Alarm -> Call).
      • Quantitative part: a set of conditional probability distributions, e.g., the family of Alarm, P(A | E, B):
          E   B  | P(a)  P(¬a)
          e   b  | 0.9   0.1
          e   ¬b | 0.2   0.8
          ¬e  b  | 0.9   0.1
          ¬e  ¬b | 0.01  0.99
      • Together they define a unique distribution in factored form: a compact representation of probability distributions via conditional independence.
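      Not on the original slide: a small sketch of the factored form it describes, P(E, B, A, R, C) = P(E) P(B) P(A | E, B) P(R | E) P(C | A). Only P(A | E, B) follows the table above; the priors and the Radio/Call CPTs are invented numbers for illustration.

      ```python
      from itertools import product

      # Factored joint of the alarm network (only P(A|E,B) is from the slide's table).
      P_E = {True: 0.01, False: 0.99}
      P_B = {True: 0.02, False: 0.98}
      P_A_given_EB = {(True, True): 0.9, (True, False): 0.2,
                      (False, True): 0.9, (False, False): 0.01}
      P_R_given_E = {True: 0.8, False: 0.001}   # P(Radio = True | E)  (invented)
      P_C_given_A = {True: 0.7, False: 0.05}    # P(Call  = True | A)  (invented)

      def cond(p_true, value):
          """P(X = value) when P(X = True | parents) = p_true."""
          return p_true if value else 1.0 - p_true

      def joint(e, b, a, r, c):
          return (P_E[e] * P_B[b]
                  * cond(P_A_given_EB[(e, b)], a)
                  * cond(P_R_given_E[e], r)
                  * cond(P_C_given_A[a], c))

      # Sanity check: the factored joint sums to 1 over all 2^5 assignments.
      print(sum(joint(*v) for v in product([True, False], repeat=5)))
      ```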
  37. Example: the "ICU Alarm" network
      • Domain: monitoring intensive-care patients
      • 37 variables
      • 509 parameters … instead of 2^54
      [Figure: the ICU Alarm network over nodes such as PCWP, CO, HRBP, HREKG, HRSAT, HR, CATECHOL, SAO2, EXPCO2, ARTCO2, VENTALV, VENTLUNG, VENTTUBE, DISCONNECT, MINVOLSET, VENTMACH, KINKEDTUBE, INTUBATION, PULMEMBOLUS, PAP, SHUNT, ANAPHYLAXIS, MINVOL, PVSAT, FIO2, PRESS, INSUFFANESTH, TPR, LVFAILURE, ERRLOWOUTPUT, STROKEVOLUME, LVEDVOLUME, HYPOVOLEMIA, CVP, BP.]
  38. Success stories for graphical models
      • Multiple sequence alignment
      • Forensic analysis
      • Medical and fault diagnosis
      • Speech recognition
      • Visual tracking
      • Channel coding at the Shannon limit
      • Genetic pedigree analysis
      • …
  39. Graphical models: outline
      • What are graphical models? ✓
      • Inference
      • Structure learning
  40. Probabilistic Inference
      • Posterior probabilities: the probability of any event given any evidence, P(X | E)
      [Figure: the alarm network, with Radio and Call observed as evidence.]
  41. Viterbi decoding
      Compute the most probable explanation (MPE) of the observed data.
      [Figure: a hidden Markov model (HMM) with hidden states X1, X2, X3 and observations Y1, Y2, Y3, e.g., decoding the spoken word "Tomato".]
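      Not on the original slide: a minimal NumPy sketch of the Viterbi algorithm for a small HMM; the two-state model and observation sequence are invented for illustration.

      ```python
      import numpy as np

      def viterbi(obs, pi, A, B):
          """Most probable hidden state sequence for an HMM.

          pi[i]   : P(X1 = i)
          A[i, j] : P(X_{t+1} = j | X_t = i)
          B[i, o] : P(Y_t = o | X_t = i)
          """
          T, N = len(obs), len(pi)
          delta = np.zeros((T, N))            # best log-prob of any path ending in state i at time t
          back = np.zeros((T, N), dtype=int)  # argmax back-pointers
          delta[0] = np.log(pi) + np.log(B[:, obs[0]])
          for t in range(1, T):
              scores = delta[t - 1][:, None] + np.log(A)   # (previous state, next state)
              back[t] = scores.argmax(axis=0)
              delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
          # Backtrack from the best final state
          path = [int(delta[-1].argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      # Invented 2-state HMM with 2 observation symbols
      pi = np.array([0.6, 0.4])
      A = np.array([[0.7, 0.3], [0.4, 0.6]])
      B = np.array([[0.9, 0.1], [0.2, 0.8]])
      print(viterbi([0, 0, 1, 1], pi, A, B))
      ```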
  42. Inference: computational issues
      Easy -> hard: chains, trees, grids, dense loopy graphs.
      [Figure: example graphs of each type, including the ICU Alarm network.]
  43. Inference: computational issues
      Easy -> hard: chains, trees, grids, dense loopy graphs.
      There are many different inference algorithms, both exact and approximate.
  44. Bayesian inference
      • Bayesian probability treats parameters as random variables
      • Learning/parameter estimation is replaced by probabilistic inference: compute P(θ | D)
      • Example: Bayesian linear regression; the parameters are θ = (α, β, σ)
      [Figure: a model in which θ is a parent of every Yi (with inputs X1 … Xn); the parameters are tied (shared) across repetitions of the data.]
  45. Bayesian inference
      • + Elegant: no distinction between parameters and other hidden variables
      • + Can use priors to learn from small data sets (cf. one-shot learning by humans)
      • - Math can get hairy
      • - Often computationally intractable
  46. Graphical models: outline
      • What are graphical models? ✓
      • Inference ✓
      • Structure learning
  47. Why struggle for accurate structure?
      • Adding an arc: increases the number of parameters to be estimated; wrong assumptions about the domain structure.
      • Missing an arc: cannot be compensated for by fitting parameters; wrong assumptions about the domain structure.
      [Figure: the true network over Earthquake, Alarm Set, Sound, Burglary, compared with versions that add an arc or miss an arc.]
  48. Score-based learning
      • Define a scoring function that evaluates how well a structure matches the data, e.g., observations of E, B, A: <Y,N,N>, <Y,Y,Y>, <N,N,Y>, <N,Y,Y>, …, <N,Y,Y>
      • Search for a structure that maximizes the score
      [Figure: three candidate structures over E, B, A.]
  49. Learning trees
      • Can find the optimal tree structure in O(n² log n) time: just find the max-weight spanning tree
      • If some of the variables are hidden, the problem becomes hard again, but EM can be used to fit mixtures of trees
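      Not on the original slide: a hedged sketch of the Chow-Liu idea behind this, under the assumption that edge weights are the empirical pairwise mutual informations and the max-weight spanning tree is found with Kruskal's algorithm plus a union-find. The binary data is invented for illustration.

      ```python
      import numpy as np
      from itertools import combinations

      def mutual_information(x, y):
          """Empirical mutual information between two discrete columns."""
          mi = 0.0
          for a in np.unique(x):
              for b in np.unique(y):
                  pxy = np.mean((x == a) & (y == b))
                  px, py = np.mean(x == a), np.mean(y == b)
                  if pxy > 0:
                      mi += pxy * np.log(pxy / (px * py))
          return mi

      def chow_liu_tree(data):
          """Max-weight spanning tree over pairwise mutual information (Kruskal + union-find)."""
          n_vars = data.shape[1]
          edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                          for i, j in combinations(range(n_vars), 2)), reverse=True)
          parent = list(range(n_vars))
          def find(i):
              while parent[i] != i:
                  parent[i] = parent[parent[i]]
                  i = parent[i]
              return i
          tree = []
          for w, i, j in edges:
              ri, rj = find(i), find(j)
              if ri != rj:              # keeps the selected edges a forest
                  parent[ri] = rj
                  tree.append((i, j, w))
          return tree

      rng = np.random.default_rng(0)
      a = rng.integers(0, 2, 500)
      b = (a ^ (rng.random(500) < 0.1)).astype(int)   # strongly coupled to a
      c = rng.integers(0, 2, 500)                      # independent
      print(chow_liu_tree(np.column_stack([a, b, c])))
      ```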
  50. Heuristic Search
      • Learning an arbitrary graph structure is NP-hard, so it is common to resort to heuristic search
      • Define a search space:
        • search states are possible structures
        • operators make small changes to the structure
      • Traverse the space looking for high-scoring structures
      • Search techniques: greedy hill-climbing, best-first search, simulated annealing, …
  51. Local Search Operations
      Typical operations on the current structure (here over nodes S, C, E, D):
      • Add C -> D
      • Delete C -> E
      • Reverse C -> E
      The change in score is local, e.g., adding C -> D gives Δscore = S({C, E} -> D) - S({E} -> D).
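      Not on the original slide: a hedged sketch of greedy hill-climbing over these operators. The scoring function here is a toy stand-in (it rewards edges of a fixed "true" skeleton); a real implementation would use a decomposable score such as BIC.

      ```python
      from itertools import permutations

      def is_acyclic(nodes, edges):
          """Kahn-style check: repeatedly remove nodes with no incoming edges."""
          edges, remaining = set(edges), set(nodes)
          while remaining:
              sources = [n for n in remaining if not any(e[1] == n for e in edges)]
              if not sources:
                  return False
              for n in sources:
                  remaining.discard(n)
                  edges = {e for e in edges if e[0] != n}
          return True

      def neighbors(nodes, edges):
          """All structures reachable by one add / delete / reverse operation."""
          edges = set(edges)
          for a, b in permutations(nodes, 2):
              if (a, b) not in edges and (b, a) not in edges:
                  yield edges | {(a, b)}                    # add a -> b
          for e in edges:
              yield edges - {e}                             # delete e
              yield (edges - {e}) | {(e[1], e[0])}          # reverse e

      def hill_climb(nodes, score, edges=frozenset()):
          """Move to the best-scoring acyclic neighbor until no move improves the score."""
          current, best = set(edges), score(edges)
          while True:
              cands = [c for c in neighbors(nodes, current) if is_acyclic(nodes, c)]
              top = max(cands, key=score)
              if score(top) <= best:
                  return current, best
              current, best = set(top), score(top)

      # Toy score (invented): reward edges on a fixed "true" skeleton, penalize others.
      TRUE = {("C", "E"), ("E", "D")}
      score = lambda edges: sum(+1 if e in TRUE else -1 for e in edges)
      print(hill_climb(["S", "C", "E", "D"], score))
      ```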
  52. Problems with local search
      It is easy to get stuck in local optima.
      [Figure: the score landscape S(G | D), with the search ("you") stuck at a local optimum far from the "truth".]
  53. Problems with local search II
      Picking a single best model can be misleading.
      [Figure: the posterior P(G | D) over structures, illustrated on a network over E, R, B, A, C.]
  54. Problems with local search II
      Picking a single best model can be misleading:
      • A small sample size means many high-scoring models
      • An answer based on one model is often useless
      • We want features that are common to many models
      [Figure: several distinct high-posterior structures over E, R, B, A, C.]
  55. Bayesian Approach to Structure Learning
      • Work with the posterior distribution over structures, P(G | D)
      • Estimate the probability of features of G, e.g., an edge X -> Y or a path X -> … -> Y:
        P(f | D) = Σ_G f(G) P(G | D), where f(G) is the indicator function for the feature and P(G | D) is the Bayesian score for G.
  56. Bayesian approach: computational issues
      • The posterior distribution over structures involves a sum over a super-exponential number of graphs. How can it be computed?
      • MCMC over networks
      • MCMC over node orderings (Rao-Blackwellisation)
  57. Structure learning: other issues
      • Discovering latent variables
      • Learning causal models
      • Learning from interventional data
      • Active learning
  58. Discovering latent variables
      [Figure: (a) a model with 17 parameters; (b) an alternative with 59 parameters.]
      There are some techniques for automatically detecting the possible presence of latent variables.
  59. Learning causal models
      • So far, we have only assumed that X -> Y -> Z means that Z is independent of X given Y.
      • However, we often want to interpret directed arrows causally.
      • This is uncontroversial for the arrow of time.
      • But can we infer causality from static observational data?
  60. Learning causal models
      • We can infer causality from static observational data if we have at least four measured variables and certain "tetrad" conditions hold.
      • See the books by Pearl and by Spirtes et al.
      • However, we can only learn up to Markov equivalence, no matter how much data we have.
      [Figure: Markov-equivalent structures over X, Y, Z.]
  61. Learning from interventional data
      • The only way to distinguish between Markov-equivalent networks is to perform interventions, e.g., gene knockouts.
      • We need to (slightly) modify our learning algorithms: cut the arcs coming into nodes which were set by intervention.
      • Example (smoking -> yellow fingers): P(smoker | observe(yellow)) >> prior, but P(smoker | do(paint yellow)) = prior.
  62. Active learning
      • Which experiments (interventions) should we perform to learn structure as efficiently as possible?
      • This problem can be modeled using decision theory.
      • Exact solutions are wildly computationally intractable.
      • Can we come up with good approximate decision-making techniques?
      • Can we implement hardware to automatically perform the experiments? ("AB: Automated Biologist")
  63. Learning from relational data
      Can we learn concepts from a set of relations between objects, instead of (or in addition to) just their attributes?
  64. Learning from relational data: approaches
      • Probabilistic relational models (PRMs): reify a relationship (arc) between nodes (objects) by making it into a node (a hypergraph)
      • Inductive Logic Programming (ILP):
        • Top-down, e.g., FOIL (a generalization of C4.5)
        • Bottom-up, e.g., PROGOL (inverse deduction)
  65. ILP for learning protein folding: input
      Positive and negative examples, each described by about 100 conjuncts encoding its structure, e.g.:
      TotalLength(D2mhr, 118) ^ NumberHelices(D2mhr, 6) ^ …
  66. ILP for learning protein folding: results
      • PROGOL learned a rule to predict whether a protein will form a "four-helical up-and-down bundle".
      • In English: "The protein P folds if it contains a long helix h1 at a secondary-structure position between 1 and 3, and h1 is next to a second helix."
  67. ILP: Pros and Cons
      • + Can discover new predicates (concepts) automatically
      • + Can learn relational models from relational (or flat) data
      • - Computationally intractable
      • - Poor handling of noise
  68. The future of machine learning for bioinformatics?
      [Figure: an all-knowing "Oracle".]
  69. The future of machine learning for bioinformatics
      "Computer-assisted pathway refinement": a learner combines prior knowledge, replicated experiments, and the biological literature to generate hypotheses; experiment design then probes the real world, and the results feed back into the learner.
  70. The end
  71. Decision trees
      [Figure: a decision tree that asks blue?, big?, oval? and outputs yes / no at the leaves.]
  72. Decision trees
      [Figure: the same blue? / big? / oval? tree.]
      • + Handles mixed variables
      • + Handles missing data
      • + Efficient for large data sets
      • + Handles irrelevant attributes
      • + Easy to understand
      • - Predictive power
  73. Feedforward neural network
      [Figure: input layer -> hidden layer -> output; a weight on each arc and a sigmoid function at each node.]
  74. Feedforward neural network
      • - Handles mixed variables
      • - Handles missing data
      • - Efficient for large data sets
      • - Handles irrelevant attributes
      • - Easy to understand
      • + Predictive power
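      Not part of the original deck: a minimal NumPy sketch of the architecture described on slide 73 (a weight on each arc, a sigmoid at each node), showing a single forward pass. The layer sizes and random weights are invented for illustration; training is not shown.

      ```python
      import numpy as np

      def sigmoid(z):
          return 1.0 / (1.0 + np.exp(-z))

      def forward(x, W1, b1, W2, b2):
          """One forward pass: input -> sigmoid hidden layer -> sigmoid output."""
          h = sigmoid(W1 @ x + b1)       # hidden-layer activations
          return sigmoid(W2 @ h + b2)    # output

      rng = np.random.default_rng(0)
      W1, b1 = rng.normal(size=(3, 2)), np.zeros(3)   # 2 inputs -> 3 hidden units
      W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)   # 3 hidden units -> 1 output
      print(forward(np.array([0.5, -1.0]), W1, b1, W2, b2))
      ```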
  75. Nearest Neighbor
      • Remember all your data
      • When someone asks a question:
        • find the nearest old data point
        • return the answer associated with it
  76. Nearest Neighbor
      [Figure: a query point "?" among labeled training points.]
      • - Handles mixed variables
      • - Handles missing data
      • - Efficient for large data sets
      • - Handles irrelevant attributes
      • - Easy to understand
      • + Predictive power
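      Not part of the original deck: a minimal NumPy sketch of the 1-nearest-neighbor procedure from slide 75. The toy points and labels are invented for illustration.

      ```python
      import numpy as np

      def nearest_neighbor_predict(X_train, y_train, x_query):
          """Return the label of the training point closest to the query."""
          dists = np.linalg.norm(X_train - x_query, axis=1)
          return y_train[int(dists.argmin())]

      X_train = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.0], [5.2, 4.9]])
      y_train = np.array(["no", "no", "yes", "yes"])
      print(nearest_neighbor_predict(X_train, y_train, np.array([4.8, 5.1])))  # "yes"
      ```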
  77. Support Vector Machines (SVMs)
      Two key ideas:
      • Large margins are good
      • Kernel trick
  78. SVM: mathematical details
      • Training data: vectors xi in R^l with labels yi in {+1, -1}
      • Separating hyperplane: w · x + b = 0
      • Inequalities: yi (w · xi + b) ≥ 1 for all i
      • Margin: 2 / ||w||
      • Support vectors: the points with yi (w · xi + b) = 1, i.e., those lying on the margin
      • Support vector expansion: w = Σi αi yi xi, with αi nonzero only for support vectors
      • Decision: f(x) = sign(w · x + b) = sign(Σi αi yi (xi · x) + b)
  79. Kernel trick: replace all inner products xi · xj with a kernel function K(xi, xj).
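      Not on the original slide: a hedged sketch of what that substitution looks like in code. The kernels below are standard (linear, polynomial, RBF); the `decision` helper just evaluates the kernelized form of the decision function from slide 78 and assumes α, y, b have already been obtained by solving the SVM training problem (not shown). The example data is invented.

      ```python
      import numpy as np

      def linear_kernel(X, Z):
          return X @ Z.T

      def poly_kernel(X, Z, degree=2):
          return (X @ Z.T + 1.0) ** degree

      def rbf_kernel(X, Z, gamma=0.5):
          # ||x - z||^2 = ||x||^2 + ||z||^2 - 2 x.z, computed for all pairs at once
          sq = (X**2).sum(1)[:, None] + (Z**2).sum(1)[None, :] - 2 * X @ Z.T
          return np.exp(-gamma * sq)

      def decision(K_query, alpha, y, b):
          """Kernelized decision: sign( sum_i alpha_i y_i K(x_i, x) + b ).

          K_query has shape (n_train, n_query); alpha, y, b come from SVM training.
          """
          return np.sign(K_query.T @ (alpha * y) + b)

      X = np.array([[0.0, 0.0], [1.0, 1.0], [3.0, 3.0], [4.0, 4.0]])
      print(rbf_kernel(X, X).round(3))   # Gram matrix; 1s on the diagonal
      ```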
  80. SVMs: summary
      • - Handles mixed variables
      • - Handles missing data
      • - Efficient for large data sets
      • - Handles irrelevant attributes
      • - Easy to understand
      • + Predictive power
      General lessons from SVM success:
      • The kernel trick can be used to make many linear methods non-linear, e.g., kernel PCA, kernelized mutual information
      • Large margin classifiers are good
  81. Boosting: summary
      • Can boost any weak learner
      • Most commonly: boosted decision "stumps"
      • + Handles mixed variables
      • + Handles missing data
      • + Efficient for large data sets
      • + Handles irrelevant attributes
      • - Easy to understand
      • + Predictive power
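      Not part of the original deck: a minimal sketch of AdaBoost with decision stumps, one common way to realize "boosted decision stumps"; the tiny 1D dataset and number of rounds are invented for illustration.

      ```python
      import numpy as np

      def fit_stump(X, y, w):
          """Best one-feature threshold classifier under sample weights w (y in {-1,+1})."""
          best = None
          for j in range(X.shape[1]):
              for thresh in np.unique(X[:, j]):
                  for sign in (+1, -1):
                      pred = sign * np.where(X[:, j] > thresh, 1, -1)
                      err = np.sum(w[pred != y])
                      if best is None or err < best[0]:
                          best = (err, j, thresh, sign)
          return best

      def adaboost(X, y, rounds=10):
          n = len(y)
          w = np.full(n, 1.0 / n)
          ensemble = []
          for _ in range(rounds):
              err, j, t, s = fit_stump(X, y, w)
              err = max(err, 1e-12)
              alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak learner
              pred = s * np.where(X[:, j] > t, 1, -1)
              w *= np.exp(-alpha * y * pred)          # up-weight the mistakes
              w /= w.sum()
              ensemble.append((alpha, j, t, s))
          return ensemble

      def predict(ensemble, X):
          score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
          return np.sign(score)

      X = np.array([[0.0], [1.0], [2.0], [3.0], [4.0], [5.0]])
      y = np.array([-1, -1, -1, +1, +1, +1])
      model = adaboost(X, y, rounds=5)
      print(predict(model, X))   # should match y
      ```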
  82. Supervised learning: summary
      • Learn a mapping F from inputs to outputs using a training set of (x, t) pairs
      • F can be drawn from different hypothesis spaces, e.g., decision trees, linear separators, linear in high dimensions, mixtures of linear models
      • Algorithms offer a variety of tradeoffs
      • Many good books, e.g.:
        • "The Elements of Statistical Learning", Hastie, Tibshirani, Friedman, 2001
        • "Pattern Classification", Duda, Hart, Stork, 2001
  83. Inference
      • Posterior probabilities: the probability of any event given any evidence
      • Most likely explanation: the scenario that explains the evidence
      • Rational decision making: maximize expected utility; value of information
      • Effect of intervention
      [Figure: the alarm network, with Radio and Call observed as evidence.]
  84. Assumption needed to make learning work
      • We need to assume that "future futures will resemble past futures" (B. Russell)
      • Unlearnable hypothesis: "All emeralds are grue", where "grue" means green if observed before time t, blue afterwards.
  85. Structure learning success stories: gene regulation network (Friedman et al.)
      • Yeast data [Hughes et al. 2000]
      • 600 genes
      • 300 experiments
  86. Structure learning success stories II: phylogenetic tree reconstruction (Friedman et al.)
      • Input: biological sequences, e.g.
        • Human CGTTGC…
        • Chimp CCTAGG…
        • Orang CGAACG…
        • …
      • Output: a phylogeny
      • Uses structural EM, with max-spanning-tree in the inner loop
      [Figure: a phylogenetic tree spanning 10 billion years, with the observed sequences at the leaves.]
  87. Instances of graphical models
      • Probabilistic models
        • Graphical models
          • Directed: Bayes nets (e.g., naïve Bayes classifier, mixtures of experts) and DBNs (e.g., hidden Markov models (HMMs), the Kalman filter model)
          • Undirected: MRFs (e.g., the Ising model)
  88. ML enabling technologies
      • Faster computers
      • More data:
        • The web
        • Parallel corpora (machine translation)
        • Multiple sequenced genomes
        • Gene expression arrays
      • New ideas:
        • Kernel trick
        • Large margins
        • Boosting
        • Graphical models
        • …
