02 cv mil_intro_to_probability

  1. Computer vision: models, learning and inference. Chapter 2: Introduction to probability. Please send errata to s.prince@cs.ucl.ac.uk. (Slides ©2011 Simon J.D. Prince.)
  2. Random variables
     • A random variable x denotes a quantity that is uncertain.
     • It may be the result of an experiment (flipping a coin) or a real-world measurement (measuring temperature).
     • If we observe several instances of x, we get different values.
     • Some values occur more often than others; this information is captured by a probability distribution.
  3. Discrete Random Variables
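     The body of this slide is a figure. For reference, a discrete random variable takes one of a discrete set of values, and its distribution must satisfy

     Pr(x) \ge 0, \qquad \sum_x Pr(x) = 1.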
  4. Continuous Random Variable
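     Again the slide body is a figure. A continuous random variable is described by a probability density function Pr(x), with

     Pr(x) \ge 0, \qquad \int Pr(x) \, dx = 1.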
  5. Joint Probability
     • Consider two random variables x and y.
     • If we observe multiple paired instances, then some combinations of outcomes are more likely than others.
     • This is captured in the joint probability distribution, written as Pr(x, y).
     • We can read Pr(x, y) as “the probability of x and y”.
  6. Joint Probability
  7–10. Marginalization
     We can recover the probability distribution of any variable in a joint distribution by integrating (or summing) over the other variables. This works in higher dimensions as well: marginalizing leaves the joint distribution of whatever variables remain.
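     The equations on these slides were images in the original; the standard marginalization relations are

     Pr(x) = \int Pr(x, y) \, dy \quad \text{(continuous)}, \qquad Pr(x) = \sum_y Pr(x, y) \quad \text{(discrete)},

     and in higher dimensions, for example,

     Pr(x, y) = \int \int Pr(w, x, y, z) \, dw \, dz.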
  11. Conditional Probability
     • The conditional probability of x given that y = y_1 is the relative propensity of the variable x to take different outcomes given that y is fixed to the value y_1.
     • It is written as Pr(x | y = y_1).
  12. Conditional Probability
     • Conditional probability can be extracted from the joint probability.
     • Extract the appropriate slice and normalize.
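     In symbols (the slide's equation was an image):

     Pr(x \mid y = y_1) = \frac{Pr(x, y = y_1)}{\int Pr(x, y = y_1) \, dx} = \frac{Pr(x, y = y_1)}{Pr(y = y_1)}.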
  13. Conditional Probability
     • More usually written in compact form.
     • Can be re-arranged to express the joint distribution (see below).
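     The compact form and its re-arrangement:

     Pr(x \mid y) = \frac{Pr(x, y)}{Pr(y)}, \qquad Pr(x, y) = Pr(x \mid y) \, Pr(y) = Pr(y \mid x) \, Pr(x).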
  14. Conditional Probability
     • This idea can be extended to more than two variables.
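     For example, a joint distribution over several variables can be repeatedly factorized (the chain rule):

     Pr(w, x, y, z) = Pr(w \mid x, y, z) \, Pr(x \mid y, z) \, Pr(y \mid z) \, Pr(z).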
  15. Bayes’ Rule
     From before, combining, and re-arranging (the equation steps are restored below).
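     The three steps, with the equations restored from the slide images:

     From before:  Pr(x, y) = Pr(x \mid y) \, Pr(y)  and  Pr(x, y) = Pr(y \mid x) \, Pr(x).
     Combining:    Pr(y \mid x) \, Pr(x) = Pr(x \mid y) \, Pr(y).
     Re-arranging: Pr(y \mid x) = \frac{Pr(x \mid y) \, Pr(y)}{Pr(x)}.

     This last relation is Bayes’ rule.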
  16. Bayes’ Rule Terminology
     • Likelihood – the propensity for observing a certain value of x given a certain value of y
     • Prior – what we know about y before seeing x
     • Posterior – what we know about y after seeing x
     • Evidence – a constant that ensures the left-hand side is a valid distribution
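     As a minimal numeric illustration (not from the slides), the following Python sketch applies marginalization, conditioning, and Bayes’ rule to a small hypothetical discrete joint distribution; the table values are made up:

       import numpy as np

       # Hypothetical joint distribution Pr(x, y): rows index the 3 states
       # of x, columns index the 2 states of y. Entries sum to 1.
       joint = np.array([[0.10, 0.20],
                         [0.25, 0.05],
                         [0.15, 0.25]])

       # Marginalization: sum over the other variable.
       pr_x = joint.sum(axis=1)   # Pr(x)
       pr_y = joint.sum(axis=0)   # Pr(y)

       # Conditional probability: take a slice of the joint and normalize.
       pr_x_given_y = joint / pr_y   # column j holds Pr(x | y = j)

       # Bayes' rule: Pr(y | x) = Pr(x | y) Pr(y) / Pr(x).
       posterior = pr_x_given_y * pr_y / pr_x[:, None]   # row i holds Pr(y | x = i)

       # Sanity check: this matches normalizing the rows of the joint directly.
       assert np.allclose(posterior, joint / pr_x[:, None])
       print(posterior)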
  17–18. Independence
     • If two variables x and y are independent, then variable x tells us nothing about variable y (and vice versa).
  19. Independence
     • When variables are independent, the joint distribution factorizes into a product of the marginals.
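     In symbols:

     Pr(x, y) = Pr(x) \, Pr(y), \qquad Pr(x \mid y) = Pr(x), \qquad Pr(y \mid x) = Pr(y).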
  20. Expectation
     Expectation tells us the expected or average value of some function f[x], taking into account the distribution of x.
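     The definition (the slide's equation was an image):

     E[f(x)] = \sum_x f(x) \, Pr(x) \quad \text{(discrete)}, \qquad E[f(x)] = \int f(x) \, Pr(x) \, dx \quad \text{(continuous)}.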
  21. Expectation
     The same definition extends to two dimensions.
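     In symbols:

     E[f(x, y)] = \int \int f(x, y) \, Pr(x, y) \, dx \, dy.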
  22. Expectation: Common Cases
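     The slide's table of common cases was an image; typical entries (each a particular choice of f) include:

     f(x) = x                          →  the mean, \mu_x
     f(x) = x^k                        →  the k-th moment about zero
     f(x) = (x − \mu_x)^2              →  the variance
     f(x, y) = (x − \mu_x)(y − \mu_y)  →  the covariance of x and y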
  23. Expectation: Rules
     Rule 1: The expected value of a constant is the constant.
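     In symbols: E[k] = k for any constant k.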
  24. Expectation: Rules
     Rule 2: The expected value of a constant times a function is the constant times the expected value of the function.
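     In symbols: E[k \, f(x)] = k \, E[f(x)].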
  25. Expectation: Rules
     Rule 3: The expectation of a sum of functions is the sum of the expectations of the functions.
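     In symbols: E[f(x) + g(x)] = E[f(x)] + E[g(x)].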
  26. Expectation: Rules
     Rule 4: The expectation of a product of functions in variables x and y is the product of the expectations of the functions, if x and y are independent.
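     In symbols: E[f(x) \, g(y)] = E[f(x)] \, E[g(y)] when x and y are independent.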
  27. Conclusions
     • The rules of probability are compact and simple.
     • The concepts of marginalization, joint and conditional probability, Bayes’ rule, and expectation underpin all of the models in this book.
     • One remaining concept, conditional expectation, is discussed later.
