Part 2: Unsupervised Learning (Machine Learning Techniques for Computer Vision)

  1. Part 2: Unsupervised Learning. Machine Learning Techniques for Computer Vision, ECCV 2004, Prague. Christopher M. Bishop, Microsoft Research Cambridge.
  2. Overview of Part 2
     - Mixture models
     - EM
     - Variational inference
     - Bayesian model complexity
     - Continuous latent variables
  3. The Gaussian Distribution
     - Multivariate Gaussian, parameterized by its mean and covariance
     - Maximum likelihood estimates of the mean and covariance
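The equations on this slide are not captured in the transcript; the standard forms they refer to (the multivariate Gaussian density and the maximum likelihood estimates of its mean and covariance) are:

```latex
\mathcal{N}(\mathbf{x}\mid\boldsymbol{\mu},\boldsymbol{\Sigma})
  = \frac{1}{(2\pi)^{D/2}\,|\boldsymbol{\Sigma}|^{1/2}}
    \exp\left\{ -\tfrac{1}{2}\,(\mathbf{x}-\boldsymbol{\mu})^{\mathsf T}
    \boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu}) \right\}

\boldsymbol{\mu}_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N}\mathbf{x}_n,
\qquad
\boldsymbol{\Sigma}_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N}
   (\mathbf{x}_n-\boldsymbol{\mu}_{\mathrm{ML}})(\mathbf{x}_n-\boldsymbol{\mu}_{\mathrm{ML}})^{\mathsf T}
```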
  4. Gaussian Mixtures
     - Linear superposition of Gaussians
     - Normalization and positivity require that the mixing coefficients lie between 0 and 1 and sum to one (see below)
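In the usual notation (again, the slide's own equations are missing from the transcript), the mixture density and the constraints on the mixing coefficients are:

```latex
p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k\,\mathcal{N}(\mathbf{x}\mid\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k),
\qquad 0 \le \pi_k \le 1,
\qquad \sum_{k=1}^{K} \pi_k = 1
```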
  5. Example: Mixture of 3 Gaussians
  6. Maximum Likelihood for the GMM
     - Log likelihood function (see below)
     - The sum over components appears inside the log, so there is no closed-form ML solution
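The log likelihood referred to has the standard form below; the sum over k sits inside the logarithm, which is why no closed-form maximum likelihood solution exists:

```latex
\ln p(\mathbf{X}\mid\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Sigma})
  = \sum_{n=1}^{N} \ln \left\{ \sum_{k=1}^{K} \pi_k\,
    \mathcal{N}(\mathbf{x}_n\mid\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k) \right\}
```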
  7. EM Algorithm – Informal Derivation
  8. EM Algorithm – Informal Derivation
     - M-step equations (see the reconstruction below)
  9. EM Algorithm – Informal Derivation
     - E-step equation (see the reconstruction below)
  10. EM Algorithm – Informal Derivation
     - The mixing coefficients can be interpreted as prior probabilities
     - The corresponding posterior probabilities are called responsibilities
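The E-step and M-step equations referred to on these slides are not preserved in the transcript; in the standard GMM notation they read:

```latex
\text{E step:}\quad
\gamma(z_{nk}) = \frac{\pi_k\,\mathcal{N}(\mathbf{x}_n\mid\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)}
                      {\sum_{j=1}^{K}\pi_j\,\mathcal{N}(\mathbf{x}_n\mid\boldsymbol{\mu}_j,\boldsymbol{\Sigma}_j)}

\text{M step:}\quad
N_k = \sum_{n=1}^{N}\gamma(z_{nk}),\qquad
\boldsymbol{\mu}_k = \frac{1}{N_k}\sum_{n=1}^{N}\gamma(z_{nk})\,\mathbf{x}_n,\qquad
\boldsymbol{\Sigma}_k = \frac{1}{N_k}\sum_{n=1}^{N}\gamma(z_{nk})
   (\mathbf{x}_n-\boldsymbol{\mu}_k)(\mathbf{x}_n-\boldsymbol{\mu}_k)^{\mathsf T},\qquad
\pi_k = \frac{N_k}{N}
```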
  11. Old Faithful Data Set (scatter plot: duration of eruption in minutes vs. time between eruptions in minutes)
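A minimal NumPy sketch of the EM updates above, written for 2-D data such as the Old Faithful set; the function name em_gmm and the initialization scheme are illustrative choices, not the tutorial's own code:

```python
import numpy as np
from scipy.stats import multivariate_normal

def em_gmm(X, K, n_iter=100, seed=0):
    """Fit a K-component Gaussian mixture to X (N x D) by EM.
    Minimal sketch: no convergence test and no safeguard against the
    collapsing-component singularities discussed later in the tutorial."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    # Initialise: means drawn from the data, shared covariance, uniform weights
    mu = X[rng.choice(N, K, replace=False)]
    Sigma = np.stack([np.cov(X.T) for _ in range(K)])
    pi = np.full(K, 1.0 / K)

    for _ in range(n_iter):
        # E step: responsibilities gamma(z_nk)
        dens = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                         for k in range(K)], axis=1)          # N x K
        gamma = dens / dens.sum(axis=1, keepdims=True)

        # M step: re-estimate means, covariances and mixing coefficients
        Nk = gamma.sum(axis=0)                                 # effective counts
        mu = (gamma.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            Sigma[k] = (gamma[:, k, None] * diff).T @ diff / Nk[k]
        pi = Nk / N

    return pi, mu, Sigma, gamma
```

For the Old Faithful example, X would be the N x 2 array of (eruption duration, waiting time) pairs, typically standardized before fitting.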
  12. Latent Variable View of EM
     - To sample from a Gaussian mixture: first pick one of the components with probability given by its mixing coefficient, then draw a sample from that component
     - Repeat these two steps for each new data point
  13. Latent Variable View of EM
     - Goal: given a data set, find the maximum likelihood parameters
     - If we knew which component generated each point (the "colours"), maximum likelihood would simply involve fitting each component to the corresponding cluster
     - Problem: the colours are latent (hidden) variables
  14. Incomplete and Complete Data (figure contrasting a complete data set, with component labels, and an incomplete one, without)
  15. Latent Variable Viewpoint
  16. Latent Variable Viewpoint
     - Binary latent variables describe which component generated each data point
     - Conditional distribution of the observed variable given the latent variables
     - Prior distribution of the latent variables
     - Marginalizing over the latent variables recovers the Gaussian mixture distribution (see below)
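With a 1-of-K binary latent vector z, the distributions listed above and the resulting marginal are, in standard notation:

```latex
p(z_k = 1) = \pi_k, \qquad
p(\mathbf{x}\mid z_k = 1) = \mathcal{N}(\mathbf{x}\mid\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k), \qquad
p(\mathbf{x}) = \sum_{\mathbf{z}} p(\mathbf{z})\,p(\mathbf{x}\mid\mathbf{z})
             = \sum_{k=1}^{K} \pi_k\,\mathcal{N}(\mathbf{x}\mid\boldsymbol{\mu}_k,\boldsymbol{\Sigma}_k)
```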
  17. Graphical Representation of GMM
  18. Latent Variable View of EM
     - If we knew the values of the latent variables, maximizing the complete-data log likelihood would have a trivial closed-form solution: fit each component to its corresponding set of data points
     - We don't know the values of the latent variables, but for given parameter values we can compute their expected values
  19. Posterior Probabilities (colour coded)
  20. Over-fitting in Gaussian Mixture Models
     - The likelihood function has infinities when a component 'collapses' onto a single data point and its covariance shrinks to zero
     - Maximum likelihood also cannot determine the number K of components
  21. Cross Validation
     - Model complexity can be selected using an independent validation data set
     - If data is scarce, use cross-validation: partition the data into S subsets, train on S - 1 subsets, test on the remainder, then repeat and average (see the sketch below)
     - Disadvantages: computationally expensive, and can only determine one or two complexity parameters
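A short sketch of S-fold cross-validation for choosing the number of components K; it uses scikit-learn's GaussianMixture purely for illustration, since the tutorial does not prescribe any particular implementation:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def cv_select_K(X, K_values, S=5, seed=0):
    """Return the held-out log likelihood (per data point) for each candidate K,
    averaged over S folds."""
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(X)), S)
    scores = {}
    for K in K_values:
        fold_scores = []
        for s in range(S):
            test_idx = folds[s]
            train_idx = np.concatenate([folds[j] for j in range(S) if j != s])
            gmm = GaussianMixture(n_components=K, random_state=seed).fit(X[train_idx])
            fold_scores.append(gmm.score(X[test_idx]))   # mean log likelihood on the held-out fold
        scores[K] = float(np.mean(fold_scores))
    return scores

# best_K = max(scores, key=scores.get) picks the K with the highest held-out likelihood
```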
  22. Bayesian Mixture of Gaussians
     - Parameters and latent variables appear on an equal footing
     - Conjugate priors
  23. Data Set Size
     - Problem 1: learn a function from 100 (slightly) noisy examples; the data set is computationally small but statistically large
     - Problem 2: learn to recognize 1,000 everyday objects from 5,000,000 natural images; the data set is computationally large but statistically small
     - Bayesian inference is computationally more demanding than ML or MAP (but see the discussion of Gaussian mixtures later), with significant benefit for statistically small data sets
  24. Variational Inference
     - Exact Bayesian inference is intractable
     - Markov chain Monte Carlo: computationally expensive, with issues of convergence
     - Variational inference: a broadly applicable deterministic approximation; collect all latent variables and parameters into Z, approximate the true posterior by a simpler distribution q, and minimize the Kullback-Leibler divergence
  25. General View of Variational Inference
     - For an arbitrary distribution q, the log marginal likelihood decomposes into a lower bound plus a Kullback-Leibler divergence (see below)
     - Maximizing the bound over an unrestricted q would recover the true posterior, which is intractable by definition
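The decomposition referred to here, with Z collecting all latent variables and parameters, is:

```latex
\ln p(\mathbf{X}) = \mathcal{L}(q) + \mathrm{KL}(q \,\|\, p), \qquad
\mathcal{L}(q) = \int q(\mathbf{Z}) \ln \frac{p(\mathbf{X},\mathbf{Z})}{q(\mathbf{Z})}\,\mathrm{d}\mathbf{Z}, \qquad
\mathrm{KL}(q \,\|\, p) = -\int q(\mathbf{Z}) \ln \frac{p(\mathbf{Z}\mid\mathbf{X})}{q(\mathbf{Z})}\,\mathrm{d}\mathbf{Z}
```

Since the Kullback-Leibler divergence is non-negative, L(q) is a lower bound on ln p(X), with equality exactly when q equals the true posterior.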
  26. Variational Lower Bound
  27. Factorized Approximation
     - Goal: choose a family of q distributions that is sufficiently flexible to give a good approximation yet sufficiently simple to remain tractable
     - Here we consider factorized distributions; no further assumptions are required!
     - The optimal solution for one factor, keeping the remainder fixed, has a closed form (see below); the solutions are coupled, so initialize and then update cyclically
     - Message-passing view (Winn and Bishop, 2004)
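The factorization and the optimal single-factor solution take the standard form:

```latex
q(\mathbf{Z}) = \prod_i q_i(\mathbf{Z}_i), \qquad
\ln q_j^{\star}(\mathbf{Z}_j) = \mathbb{E}_{i \neq j}\!\left[ \ln p(\mathbf{X},\mathbf{Z}) \right] + \text{const}
```

The expectation is taken with respect to all factors other than q_j, which is what makes the solutions coupled and motivates the cyclic updates.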
  28. Lower Bound
     - The bound can also be evaluated explicitly
     - Useful for verifying the mathematics and the code
     - Also useful for model comparison
  29. Illustration: Univariate Gaussian
     - Likelihood function
     - Conjugate prior
     - Factorized variational distribution (see the sketch below)
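A minimal sketch of the coupled factorized updates for this example, assuming the standard conjugate Gaussian-Gamma prior over the mean and precision; the hyperparameter defaults and the function name are placeholders, not the tutorial's own choices:

```python
import numpy as np

def vb_gaussian(x, mu0=0.0, lam0=1.0, a0=1e-3, b0=1e-3, n_iter=50):
    """Factorized variational inference q(mu, tau) = q(mu) q(tau) for a
    univariate Gaussian with unknown mean mu and precision tau."""
    x = np.asarray(x, dtype=float)
    N, xbar = len(x), x.mean()

    E_tau = a0 / b0                      # initial guess for E[tau]
    for _ in range(n_iter):
        # Update q(mu) = N(mu | mu_N, 1 / lam_N)
        mu_N = (lam0 * mu0 + N * xbar) / (lam0 + N)
        lam_N = (lam0 + N) * E_tau

        # Update q(tau) = Gamma(tau | a_N, b_N), using expectations under q(mu)
        a_N = a0 + (N + 1) / 2.0
        E_sq = np.sum((x - mu_N) ** 2 + 1.0 / lam_N)        # E_mu[sum_n (x_n - mu)^2]
        E_sq += lam0 * ((mu_N - mu0) ** 2 + 1.0 / lam_N)    # E_mu[lam0 (mu - mu0)^2]
        b_N = b0 + 0.5 * E_sq
        E_tau = a_N / b_N

    return mu_N, lam_N, a_N, b_N
```

Each pass updates q(mu) given the current expectation of tau and then q(tau) given the moments of q(mu); iterating from a rough initial configuration to convergence is exactly what the next sequence of figures illustrates.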
  30. Initial Configuration / After Updating / Converged Solution (sequence of figures showing the factorized approximation being refined over successive updates)
  31. Variational Mixture of Gaussians
     - Assume a factorized posterior distribution (see below)
     - No other approximations are needed!
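In the usual Bayesian GMM setup, with a Dirichlet prior over the mixing coefficients and Gaussian-Wishart priors over the component means and precisions, the assumed factorization is typically written as

```latex
q(\mathbf{Z},\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Lambda})
  = q(\mathbf{Z})\; q(\boldsymbol{\pi},\boldsymbol{\mu},\boldsymbol{\Lambda})
```

with a further factorization of the parameter term over the mixing coefficients and the individual components following automatically from the optimal-factor equation above.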
  32. Variational Equations for GMM
  33. Lower Bound for GMM
  34. VIBES (Bishop, Spiegelhalter and Winn, 2002): a general software framework for variational inference in graphical models
  35. ML Limit
     - If the factors over the parameters are instead restricted to point estimates (delta functions), we recover the maximum likelihood EM algorithm
  36. Bound vs. K for Old Faithful Data
  37. Bayesian Model Complexity
  38. Sparse Bayes for Gaussian Mixture (Corduneanu and Bishop, 2001)
     - Start with a large value of K
     - Treat the mixing coefficients as parameters and maximize the marginal likelihood
     - This prunes out excess components
  39. Summary: Variational Gaussian Mixtures
     - A simple modification of maximum likelihood EM code
     - Small computational overhead compared to EM
     - No singularities
     - Automatic model order selection
  40. Continuous Latent Variables
     - Conventional PCA: data covariance matrix and its eigenvector decomposition (see the sketch below)
     - Minimizes the sum-of-squares projection error
     - Not a probabilistic model, and gives no guidance on how to choose the latent dimensionality L
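A short NumPy sketch of conventional PCA as described on this slide: form the data covariance matrix, take its eigenvector decomposition, and project onto the L leading eigenvectors (the function name pca is an illustrative choice):

```python
import numpy as np

def pca(X, L):
    """Conventional PCA: project N x D data X onto its L principal components."""
    mu = X.mean(axis=0)
    Xc = X - mu                                # centre the data
    S = Xc.T @ Xc / len(X)                     # data covariance matrix (D x D)
    eigvals, eigvecs = np.linalg.eigh(S)       # eigendecomposition (ascending order)
    order = np.argsort(eigvals)[::-1]          # sort eigenvalues in descending order
    W = eigvecs[:, order[:L]]                  # top-L eigenvectors (D x L)
    Z = Xc @ W                                 # projected data (N x L)
    return Z, W, eigvals[order]
```

The choice of L is left entirely to the user, which is the limitation the slide points out and the motivation for the probabilistic formulation that follows.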
  41. Probabilistic PCA (Tipping and Bishop, 1998)
     - An L-dimensional continuous latent space mapped to the D-dimensional data space
     - An isotropic noise model gives PCA; a diagonal noise covariance gives factor analysis
  42. Probabilistic PCA
     - Marginal distribution (see below)
     - Advantages: exact ML solution; computationally efficient EM algorithm; captures the dominant correlations with few parameters; extends to mixtures of PPCA and to Bayesian PCA; serves as a building block for more complex models
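In the standard formulation, with an L-dimensional latent vector z, a D x L matrix W and isotropic noise variance sigma^2:

```latex
p(\mathbf{z}) = \mathcal{N}(\mathbf{z}\mid\mathbf{0},\mathbf{I}), \qquad
p(\mathbf{x}\mid\mathbf{z}) = \mathcal{N}(\mathbf{x}\mid\mathbf{W}\mathbf{z}+\boldsymbol{\mu},\,\sigma^2\mathbf{I}), \qquad
p(\mathbf{x}) = \mathcal{N}(\mathbf{x}\mid\boldsymbol{\mu},\,\mathbf{W}\mathbf{W}^{\mathsf T}+\sigma^2\mathbf{I})
```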
  43. EM for PCA (a sequence of slides illustrating the algorithm step by step)
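The slides show the algorithm graphically; a minimal NumPy sketch of the EM updates for probabilistic PCA, in the standard Tipping and Bishop form (function and variable names are illustrative):

```python
import numpy as np

def em_ppca(X, L, n_iter=100, seed=0):
    """EM for probabilistic PCA: X is N x D data, L is the latent dimensionality."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    mu = X.mean(axis=0)
    Xc = X - mu
    W = rng.standard_normal((D, L))
    sigma2 = 1.0

    for _ in range(n_iter):
        # E step: posterior moments of the latent variables
        M = W.T @ W + sigma2 * np.eye(L)              # L x L
        Minv = np.linalg.inv(M)
        Ez = Xc @ W @ Minv                            # N x L, rows are E[z_n]
        Ezz = N * sigma2 * Minv + Ez.T @ Ez           # sum_n E[z_n z_n^T]

        # M step: re-estimate W and the noise variance sigma^2
        W_new = (Xc.T @ Ez) @ np.linalg.inv(Ezz)      # D x L
        sigma2 = (np.sum(Xc ** 2)
                  - 2.0 * np.sum(Ez * (Xc @ W_new))
                  + np.trace(Ezz @ W_new.T @ W_new)) / (N * D)
        W = W_new

    return W, sigma2, mu
```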
  44. Bayesian PCA (Bishop, 1998)
     - Gaussian prior over the columns of W
     - Automatic relevance determination (ARD) switches off unneeded latent dimensions (figure comparing the ML PCA and Bayesian PCA solutions)
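The ARD construction places an independent zero-mean Gaussian prior on each column w_i of W, governed by its own precision hyperparameter; reconstructed in the form usually used for Bayesian PCA:

```latex
p(\mathbf{W}\mid\boldsymbol{\alpha})
  = \prod_{i=1}^{L} \left( \frac{\alpha_i}{2\pi} \right)^{D/2}
    \exp\left\{ -\tfrac{1}{2}\,\alpha_i\,\mathbf{w}_i^{\mathsf T}\mathbf{w}_i \right\}
```

Maximizing with respect to the alpha_i drives some of them to large values, shrinking the corresponding columns of W towards zero and thereby determining the effective latent dimensionality automatically.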
  45. Non-linear Manifolds
     - Example: images of a rigid object
  46. Bayesian Mixture of BPCA Models
  47. Flexible Sprites (Jojic and Frey, 2001)
     - Automatic decomposition of a video sequence into a background model, an ordered set of masks (one per object per frame), and a foreground model (one per object per frame)
  48. Transformed Component Analysis
     - Generative model, now including transformations (translations)
     - Extends to L layers
     - Inference is intractable, so a variational framework is used
  49. Bayesian Constellation Model (Li, Fergus and Perona, 2003)
     - Object recognition from small training sets
     - Variational treatment of a fully Bayesian model
  50. Bayesian Constellation Model
  51. Summary of Part 2
     - Discrete and continuous latent variables; the EM algorithm
     - Build complex models from simple components, represented graphically and incorporating prior knowledge
     - Variational inference and Bayesian model comparison
