Transcript of "Linear Regression"

1. Linear regression. Machine Learning; Mon Apr 21, 2008
2. Motivation
3. Motivation. A new predictor value is observed; what is our prediction for the target?
4. Motivation. Problem: we want a general way of obtaining a distribution p(x, t) fitted to observed data. If we don't try to interpret the distribution, then any distribution with non-zero value at the data points will do. We will use theory from last week to construct generic approaches to learning distributions from data.
5. Motivation (cont.). In this lecture: linear (normal/Gaussian) models.
6. Linear Gaussian Models. In a linear Gaussian model, we model p(x, t) as a conditional Gaussian distribution whose x-dependent mean depends linearly on a set of weights w.
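A minimal sketch of the form such a model takes; the slides appear to follow Bishop's PRML (they cite Section 2.3.3), so the symbols below are assumed rather than copied from the slide:

```latex
\[
p(t \mid \mathbf{x}, \mathbf{w}, \beta)
  = \mathcal{N}\bigl(t \mid y(\mathbf{x}, \mathbf{w}),\, \beta^{-1}\bigr),
\qquad
y(\mathbf{x}, \mathbf{w}) = \mathbf{w}^{\mathsf{T}} \mathbf{x}
\]
```

Here β = 1/σ² is the noise precision; later slides replace the raw input x with a vector of basis functions φ(x).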
7. Example
8. Example
9. General linear in input ...or adding a pseudo input x_0 = 1.
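As a worked version of the pseudo-input trick (symbols assumed), the bias weight w_0 is absorbed into the sum by fixing x_0 = 1:

```latex
\[
y(\mathbf{x}, \mathbf{w})
  = w_0 + \sum_{i=1}^{D} w_i x_i
  = \sum_{i=0}^{D} w_i x_i
  = \mathbf{w}^{\mathsf{T}} \mathbf{x},
\qquad x_0 = 1 .
\]
```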
10. Non-linear in the input (but still linear in the weights).
11. Non-linear in the input (cont.). But remember that we do not know the "true" underlying function...
12. Non-linear in the input (cont.). ...nor the noise around the function...
13. General linear model. Basis functions, sometimes called "features".
14. Examples of basis functions: polynomials, Gaussians, sigmoids.
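The slide shows these three families only as plots. As an illustrative sketch (the function names, centres and width parameter below are my own, not the slides'), design matrices for each family can be built like this:

```python
import numpy as np

# Illustrative basis-function "feature maps" for a 1-D input x.

def polynomial_features(x, degree):
    """Phi[n, j] = x_n ** j for j = 0..degree (j = 0 is the pseudo input)."""
    return np.vander(x, degree + 1, increasing=True)

def gaussian_features(x, centres, s):
    """Phi[n, j] = exp(-(x_n - mu_j)**2 / (2 s**2)), plus a constant column."""
    phi = np.exp(-((x[:, None] - centres[None, :]) ** 2) / (2.0 * s ** 2))
    return np.column_stack([np.ones_like(x), phi])

def sigmoid_features(x, centres, s):
    """Phi[n, j] = logistic((x_n - mu_j) / s), plus a constant column."""
    phi = 1.0 / (1.0 + np.exp(-(x[:, None] - centres[None, :]) / s))
    return np.column_stack([np.ones_like(x), phi])

x = np.linspace(-1.0, 1.0, 5)
print(polynomial_features(x, 3).shape)                         # (5, 4)
print(gaussian_features(x, np.linspace(-1, 1, 9), 0.2).shape)  # (5, 10)
print(sigmoid_features(x, np.linspace(-1, 1, 9), 0.1).shape)   # (5, 10)
```

Each function returns an N x M design matrix Φ whose columns are the basis functions evaluated at the inputs, with a constant column playing the role of the pseudo input.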
15. Estimating parameters. Observed data and its log likelihood:
16. Estimating parameters (cont.). Maximizing the log likelihood wrt w means minimizing E, the error function:
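The formulas referred to on slides 15-16 are not in the transcript; for Gaussian noise they take the standard form (notation assumed):

```latex
\[
\ln p(\mathbf{t} \mid \mathbf{w}, \beta)
  = \frac{N}{2}\ln\beta - \frac{N}{2}\ln(2\pi) - \beta\,E(\mathbf{w}),
\qquad
E(\mathbf{w})
  = \frac{1}{2}\sum_{n=1}^{N}\bigl(t_n - \mathbf{w}^{\mathsf{T}}\boldsymbol{\phi}(\mathbf{x}_n)\bigr)^{2},
\]
```

so the w that maximizes the log likelihood is exactly the w that minimizes the sum-of-squares error E(w).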
17. Estimating parameters
18. Estimating parameters
19. Estimating parameters. Notice: this is not just pure mathematics but an actual algorithm for estimating (learning) the parameters!
20. Estimating parameters (cont.). Implementation shown in C with GSL and CBLAS.
21. Estimating parameters (cont.). Implementation shown in Octave/Matlab.
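The C (GSL/CBLAS) and Octave/Matlab listings on slides 20-21 are not reproduced in the transcript. A NumPy sketch of the computation they presumably implement, the closed-form maximum-likelihood solution w_ML = (Φ^T Φ)^{-1} Φ^T t (the toy data and function names below are my own):

```python
import numpy as np

def fit_ml(Phi, t):
    """Maximum-likelihood weights for a general linear model.

    Solves the least-squares problem min_w ||t - Phi w||^2, i.e. the normal
    equations Phi^T Phi w = Phi^T t, via lstsq (pseudo-inverse), which is
    numerically safer than forming the inverse of Phi^T Phi explicitly.
    """
    w, *_ = np.linalg.lstsq(Phi, t, rcond=None)
    return w

# Toy usage: noisy samples from a cubic, fitted with polynomial features
# 1, x, x^2, x^3.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 50)
t = 0.5 - 2.0 * x + x ** 3 + 0.1 * rng.standard_normal(x.size)
Phi = np.vander(x, 4, increasing=True)

w_ml = fit_ml(Phi, t)
beta_ml = x.size / np.sum((t - Phi @ w_ml) ** 2)   # ML estimate of 1/sigma^2

print("w_ML    =", w_ml)
print("beta_ML =", beta_ml)
```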
22. Geometrical interpretation. Geometrically, y is the projection of t onto the space spanned by the features:
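In symbols (assumed), the vector of fitted values is

```latex
\[
\mathbf{y} = \boldsymbol{\Phi}\,\mathbf{w}_{\mathrm{ML}}
  = \boldsymbol{\Phi}\bigl(\boldsymbol{\Phi}^{\mathsf{T}}\boldsymbol{\Phi}\bigr)^{-1}
    \boldsymbol{\Phi}^{\mathsf{T}}\,\mathbf{t},
\]
```

where Φ(Φ^T Φ)^{-1} Φ^T is the orthogonal projection matrix onto the column space of the design matrix, i.e. onto the space spanned by the features.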
23. Bayesian linear regression. For the Bayesian approach we need a prior over the parameters w and β = 1/σ². The conjugate prior for a Gaussian is a Gaussian, and the resulting posterior parameters are functions of the observed values.
24. Bayesian linear regression (cont.). The proof is not exactly like before, but similar, and uses the linearity results for Gaussians from Section 2.3.3.
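The posterior shown on these slides is not in the transcript. For the simpler case where the noise precision β is held fixed and the prior over w is Gaussian, p(w) = N(w | m_0, S_0), the standard conjugate-update result (the "functions of observed values" mentioned on slide 23) is:

```latex
\[
p(\mathbf{w} \mid \mathbf{t}) = \mathcal{N}(\mathbf{w} \mid \mathbf{m}_N, \mathbf{S}_N),
\qquad
\mathbf{S}_N^{-1} = \mathbf{S}_0^{-1} + \beta\,\boldsymbol{\Phi}^{\mathsf{T}}\boldsymbol{\Phi},
\qquad
\mathbf{m}_N = \mathbf{S}_N\bigl(\mathbf{S}_0^{-1}\mathbf{m}_0 + \beta\,\boldsymbol{\Phi}^{\mathsf{T}}\mathbf{t}\bigr).
\]
```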
25. Example
26. Bayesian linear regression. The predictive distribution for future observations is also Gaussian (again a result from Section 2.3.3):
27. Bayesian linear regression (cont.)
28. Bayesian linear regression (cont.). Both the mean and the variance of this distribution depend on y!
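In the same assumed notation, the Gaussian predictive distribution for a new input x is

```latex
\[
p(t \mid \mathbf{x}, \mathbf{t})
  = \mathcal{N}\bigl(t \mid \mathbf{m}_N^{\mathsf{T}}\boldsymbol{\phi}(\mathbf{x}),\; \sigma_N^{2}(\mathbf{x})\bigr),
\qquad
\sigma_N^{2}(\mathbf{x})
  = \frac{1}{\beta} + \boldsymbol{\phi}(\mathbf{x})^{\mathsf{T}}\mathbf{S}_N\,\boldsymbol{\phi}(\mathbf{x}),
\]
```

with both the mean and the variance varying with the point at which we predict: the 1/β term is the irreducible observation noise, and the second term is the remaining uncertainty about w.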
29. Example
30. Over-fitting. Problem: over-fitting is always a problem when we fit generic models to data. With nested models, the maximum-likelihood parameters will never prefer a simpler model over a more complex one...
31. Maximum likelihood problems
32. Bayesian model selection. We can take a more Bayesian approach and select a model based on the posterior model probabilities:
33. Bayesian model selection (cont.). The normalizing factor is the same for all models:
34. Bayesian model selection (cont.). The prior captures our preferences among the models.
35. Bayesian model selection (cont.). The likelihood captures the data's preferences among the models.
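In symbols (model index and notation assumed), the quantity being compared across slides 32-35 is

```latex
\[
p(\mathcal{M}_i \mid \mathcal{D})
  = \frac{p(\mathcal{D} \mid \mathcal{M}_i)\, p(\mathcal{M}_i)}{p(\mathcal{D})}
  \;\propto\; p(\mathcal{D} \mid \mathcal{M}_i)\, p(\mathcal{M}_i),
\qquad
p(\mathcal{D}) = \sum_j p(\mathcal{D} \mid \mathcal{M}_j)\, p(\mathcal{M}_j),
\]
```

with p(M_i) the prior over models and p(D | M_i) the (marginal) likelihood of model M_i.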
36. The marginal likelihood. The likelihood of the model is the integral over all of the model's parameters:
37. The marginal likelihood (cont.). ...which is also the normalizing factor for the posterior over the parameters:
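Written out in the same assumed notation:

```latex
\[
p(\mathcal{D} \mid \mathcal{M}_i)
  = \int p(\mathcal{D} \mid \mathbf{w}, \mathcal{M}_i)\, p(\mathbf{w} \mid \mathcal{M}_i)\, d\mathbf{w},
\qquad
p(\mathbf{w} \mid \mathcal{D}, \mathcal{M}_i)
  = \frac{p(\mathcal{D} \mid \mathbf{w}, \mathcal{M}_i)\, p(\mathbf{w} \mid \mathcal{M}_i)}
         {p(\mathcal{D} \mid \mathcal{M}_i)} .
\]
```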
38. Implicit over-fitting penalty. Assume this is the shape of the prior and of the posterior (figure: prior and posterior over a parameter w).
39. Implicit over-fitting penalty (cont.). By proportionality, the posterior has the shape of p(D | w) p(w).
40. Implicit over-fitting penalty (cont.). The integral is then approximately "width" times "height".
41. Implicit over-fitting penalty (cont.). This term is increasingly negative as the posterior becomes "pointy" compared to the prior. Close fitting to the data is implicitly penalized, and the marginal likelihood is a trade-off between maximizing the posterior and minimizing this penalty.
42. Implicit over-fitting penalty (cont.). The penalty also increases with the number of parameters M.
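A sketch of the approximation behind slides 38-42 (the widths Δw_prior and Δw_posterior are the crude "box" widths assumed in the figures): if the posterior is sharply peaked at w_MAP and the prior is roughly flat over a width Δw_prior, then

```latex
\[
p(\mathcal{D}) = \int p(\mathcal{D} \mid w)\, p(w)\, dw
  \;\simeq\; p(\mathcal{D} \mid w_{\mathrm{MAP}})\,
     \frac{\Delta w_{\mathrm{posterior}}}{\Delta w_{\mathrm{prior}}}
\]
% and, for a model with M parameters treated the same way,
\[
\ln p(\mathcal{D}) \;\simeq\; \ln p(\mathcal{D} \mid \mathbf{w}_{\mathrm{MAP}})
  + M \ln \frac{\Delta w_{\mathrm{posterior}}}{\Delta w_{\mathrm{prior}}} .
\]
```

The log-ratio term is negative because the posterior is narrower than the prior, and its magnitude grows both with how sharply the data pin down the parameters and with the number of parameters M.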
43. On average we prefer the true model. This doesn't mean we always prefer the simplest model! One can show that the average, over data sets generated from the first model, of the log ratio of the two models' likelihoods is non-negative, and zero only when the two models agree; i.e. on average the right model is the preferred model (see the sketch below).
44. On average we prefer the true model (cont.). The log ratio is negative when we prefer the second model and positive when we prefer the first.
45. On average we prefer the true model (cont.). So on average, we will not prefer the second model when the first is true...
46. On average we prefer the true model (cont.). Close fitting to the data is implicitly penalized, and the marginal likelihood is a trade-off between maximizing the posterior and minimizing this penalty.
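The result alluded to on slides 43-46 is, in its usual form, the non-negativity of a Kullback-Leibler divergence: averaging the log ratio of the two marginal likelihoods over data sets drawn from the first model,

```latex
\[
\int p(\mathcal{D} \mid \mathcal{M}_1)\,
     \ln \frac{p(\mathcal{D} \mid \mathcal{M}_1)}{p(\mathcal{D} \mid \mathcal{M}_2)}\; d\mathcal{D}
  \;\ge\; 0,
\]
```

with equality only when the two models assign the same marginal likelihood to every data set. So when the first model is the true one, the comparison favours it on average, even though an individual data set may favour the second model.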
47. On average we prefer the true model
48. Summary.
- Linear Gaussians as generic densities
- ML or Bayesian estimation for training
- Over-fitting is an inherent problem in ML estimation
- Bayesian methods avoid the maximization-caused over-fitting problem (but are still vulnerable to model mis-specification)