
# Introduction to Probability

Presentation for reading session of Computer Vision: Models, Learning, and Inference


### Introduction to Probability

1. Chapter 2: Introduction to Probability (Computer Vision: Models, Learning and Inference). Lukas Tencer.
2. Random variables: a random variable x denotes a quantity that is uncertain. It may be the result of an experiment (flipping a coin) or of a real-world measurement (measuring temperature). If we observe several instances of x we get different values, and some values occur more often than others; this information is captured by a probability distribution.
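To make this concrete, here is a minimal Python sketch (not from the slides; it assumes NumPy, and the coin bias is invented) showing how relative frequencies of repeated observations approximate a probability distribution:

```python
# A minimal sketch: repeated observations of an uncertain quantity,
# with relative frequencies approximating Pr(x).
# Assumes NumPy; the bias p=[0.7, 0.3] is made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
samples = rng.choice(["heads", "tails"], size=1000, p=[0.7, 0.3])

values, counts = np.unique(samples, return_counts=True)
for v, c in zip(values, counts):
    print(f"Pr(x={v}) ~ {c / len(samples):.3f}")
```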
3. Discrete random variables: when the outcomes are ordered, the distribution is visualized as a histogram; when they are unordered, as a Hinton diagram.
4. Continuous random variables are described by a probability density function (PDF), which integrates to 1: $\int \Pr(x)\,dx = 1$.
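As a quick sanity check of this property, the following sketch (assuming NumPy; a standard Gaussian density is chosen here purely for illustration) integrates a PDF numerically:

```python
# A rough numerical check that a PDF integrates to 1, using a standard
# Gaussian density on a fine grid (simple Riemann sum). Assumes NumPy.
import numpy as np

x = np.linspace(-10.0, 10.0, 100_001)
pdf = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)
dx = x[1] - x[0]
print(pdf.sum() * dx)  # prints a value very close to 1.0
```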
5. Joint probability: consider two random variables x and y. If we observe multiple paired instances, some combinations of outcomes are more likely than others. This is captured in the joint probability distribution, written as Pr(x,y) and read as "the probability of x and y".
6. Joint probability (figure slide: example joint distributions).
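Since the slide's figure is not reproduced here, a small invented joint table in Python (assuming NumPy) can stand in for it; the entries of Pr(x, y) sum to 1:

```python
# A toy joint distribution Pr(x, y) over two discrete variables, stored
# as a table whose entries sum to 1. The numbers are invented.
import numpy as np

joint = np.array([[0.10, 0.20],   # rows index x (3 outcomes)
                  [0.30, 0.15],   # columns index y (2 outcomes)
                  [0.15, 0.10]])
assert np.isclose(joint.sum(), 1.0)
print("Pr(x=1, y=0) =", joint[1, 0])
```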
7. Marginalization: we can recover the probability distribution of any variable in a joint distribution by integrating (or summing) over the other variables: $\Pr(x) = \int \Pr(x,y)\,dy$, and likewise $\Pr(y) = \int \Pr(x,y)\,dx$.
8. Marginalization works in higher dimensions as well; it leaves a joint distribution over whatever variables are left, e.g. $\Pr(x,y) = \int \Pr(x,y,z)\,dz$.
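Reusing the invented joint table from the previous sketch, marginalization for discrete variables is just a sum over one axis (sketch assuming NumPy):

```python
# Marginalizing the toy joint table: summing over y recovers Pr(x),
# summing over x recovers Pr(y). Assumes NumPy; numbers are invented.
import numpy as np

joint = np.array([[0.10, 0.20],
                  [0.30, 0.15],
                  [0.15, 0.10]])
pr_x = joint.sum(axis=1)   # Pr(x) = sum over y of Pr(x, y)
pr_y = joint.sum(axis=0)   # Pr(y) = sum over x of Pr(x, y)
print(pr_x)  # [0.30 0.45 0.25]
print(pr_y)  # [0.55 0.45]
```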
9. Conditional probability: the conditional probability of x given that y = y1 is the relative propensity of the variable x to take different outcomes given that y is fixed to be equal to y1. It is written as Pr(x | y = y1).
10. Conditional probability can be extracted from joint probability: extract the appropriate slice and normalize it (because the slice does not sum to 1): $\Pr(x \mid y = y_1) = \Pr(x, y_1) / \Pr(y = y_1)$.
11. More usually written in compact form as $\Pr(x \mid y) = \Pr(x,y) / \Pr(y)$, which can be rearranged to give $\Pr(x,y) = \Pr(x \mid y)\,\Pr(y)$.
12. This idea can be extended to more than two variables, e.g. $\Pr(w,x,y,z) = \Pr(w \mid x,y,z)\,\Pr(x \mid y,z)\,\Pr(y \mid z)\,\Pr(z)$.
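Returning to the two-variable case, extracting a conditional from the invented joint table above is a slice plus a normalization (sketch assuming NumPy):

```python
# Extracting a conditional from the joint: take the slice for y = y1
# and renormalize it so it sums to 1. Same invented table as above.
import numpy as np

joint = np.array([[0.10, 0.20],
                  [0.30, 0.15],
                  [0.15, 0.10]])
y1 = 0
pr_x_given_y1 = joint[:, y1] / joint[:, y1].sum()  # Pr(x | y = y1)
print(pr_x_given_y1, pr_x_given_y1.sum())          # sums to 1
```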
13. Bayes' rule. From before: $\Pr(x,y) = \Pr(x \mid y)\,\Pr(y)$ and $\Pr(x,y) = \Pr(y \mid x)\,\Pr(x)$. Combining: $\Pr(y \mid x)\,\Pr(x) = \Pr(x \mid y)\,\Pr(y)$. Rearranging: $\Pr(y \mid x) = \Pr(x \mid y)\,\Pr(y) / \Pr(x)$.
14. Bayes' rule terminology: the likelihood $\Pr(x \mid y)$ is the propensity for observing a certain value of x given a certain value of y; the prior $\Pr(y)$ is what we know about y before seeing x; the posterior $\Pr(y \mid x)$ is what we know about y after seeing x; the evidence $\Pr(x)$ is a constant that ensures the left-hand side is a valid distribution.
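A sketch of Bayes' rule on discrete tables (NumPy assumed; the prior and likelihood values are made up) shows all four terms at work:

```python
# Bayes' rule on discrete tables: posterior = likelihood * prior / evidence.
# All numbers are invented; each column of the likelihood sums to 1.
import numpy as np

prior = np.array([0.5, 0.5])          # Pr(y) over two states of y
likelihood = np.array([[0.8, 0.1],    # Pr(x | y): rows index x,
                       [0.2, 0.9]])   # columns index y
x_obs = 0                             # the value of x we observed

evidence = likelihood[x_obs] @ prior             # Pr(x) = sum_y Pr(x|y) Pr(y)
posterior = likelihood[x_obs] * prior / evidence # Pr(y | x)
print(posterior)  # [0.889 0.111]
```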
15. Independence: if two variables x and y are independent, then variable x tells us nothing about variable y (and vice versa): $\Pr(x \mid y) = \Pr(x)$ and $\Pr(y \mid x) = \Pr(y)$.
16. When variables are independent, the joint factorizes into a product of the marginals: $\Pr(x,y) = \Pr(x \mid y)\,\Pr(y) = \Pr(x)\,\Pr(y)$.
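This factorization can be checked directly: a joint built as the outer product of two marginals is independent by construction (sketch assuming NumPy; the marginals are taken from the earlier invented table):

```python
# When a joint is built as the outer product of its marginals, it
# factorizes exactly: Pr(x, y) = Pr(x) Pr(y). Assumes NumPy.
import numpy as np

pr_x = np.array([0.30, 0.45, 0.25])
pr_y = np.array([0.55, 0.45])
joint = np.outer(pr_x, pr_y)                 # independent by construction
assert np.allclose(joint.sum(axis=1), pr_x)  # marginals are recovered
assert np.allclose(joint.sum(axis=0), pr_y)
```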
17. Expectation tells us the expected or average value of some function f[x], taking into account the distribution of x. Definition: $\mathrm{E}[f[x]] = \sum_x f[x]\,\Pr(x)$ for discrete x, or $\mathrm{E}[f[x]] = \int f[x]\,\Pr(x)\,dx$ for continuous x.
18. Definition in two dimensions: $\mathrm{E}[f[x,y]] = \iint f[x,y]\,\Pr(x,y)\,dx\,dy$.
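Both definitions reduce to weighted averages, which the following sketch (NumPy assumed; the distribution and f are invented) checks against a Monte Carlo estimate:

```python
# Expectation as a weighted average: the discrete definition
# sum_x f[x] Pr(x), checked against a Monte Carlo sample average.
import numpy as np

pr_x = np.array([0.2, 0.5, 0.3])      # Pr(x) over x in {0, 1, 2}
f = np.array([1.0, 4.0, 9.0])         # f[x] = (x + 1)**2
exact = (f * pr_x).sum()              # E[f[x]] by definition

rng = np.random.default_rng(0)
samples = rng.choice(3, size=100_000, p=pr_x)
print(exact, f[samples].mean())       # the two values agree closely
```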
19. Expectation, common cases: the mean $\mu = \mathrm{E}[x]$, the k-th moment about zero $\mathrm{E}[x^k]$, the k-th central moment $\mathrm{E}[(x-\mu)^k]$, the variance $\mathrm{E}[(x-\mu)^2]$, the skew $\mathrm{E}[(x-\mu)^3]$, the kurtosis $\mathrm{E}[(x-\mu)^4]$, and the covariance of x and y, $\mathrm{E}[(x-\mu_x)(y-\mu_y)]$.
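A few of these cases estimated from samples (sketch assuming NumPy; the Gaussian parameters are invented, and the skew is shown here in its normalized form):

```python
# Common special cases computed from samples: mean E[x], variance
# E[(x - mu)^2], and normalized skew E[(x - mu)^3] / var**1.5.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.5, size=100_000)

mu = x.mean()                               # estimates E[x] = 2.0
var = ((x - mu) ** 2).mean()                # estimates E[(x - mu)^2] = 2.25
skew = ((x - mu) ** 3).mean() / var ** 1.5  # ~ 0 for a Gaussian
print(mu, var, skew)
```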
20. Expectation rule 1: the expected value of a constant is the constant: $\mathrm{E}[k] = k$.
21. Expectation rule 2: the expected value of a constant times a function is the constant times the expected value of the function: $\mathrm{E}[k\,f[x]] = k\,\mathrm{E}[f[x]]$.
22. Expectation rule 3: the expectation of a sum of functions is the sum of the expectations of the functions: $\mathrm{E}[f[x] + g[x]] = \mathrm{E}[f[x]] + \mathrm{E}[g[x]]$.
23. Expectation rule 4: the expectation of a product of functions in variables x and y is the product of the expectations of the functions if x and y are independent: $\mathrm{E}[f[x]\,g[y]] = \mathrm{E}[f[x]]\,\mathrm{E}[g[y]]$.
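Rules 2 to 4 can be verified numerically on large independent samples (sketch assuming NumPy; rules 2 and 3 hold exactly for sample means, rule 4 approximately):

```python
# Numerically checking expectation rules 2-4; x and y are drawn
# independently, as rule 4 requires. Assumes NumPy.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
y = rng.normal(size=1_000_000)        # independent of x
k = 3.0

print(np.mean(k * x**2), k * np.mean(x**2))           # rule 2: equal
print(np.mean(x**2 + x), np.mean(x**2) + np.mean(x))  # rule 3: equal
print(np.mean(x * y), np.mean(x) * np.mean(y))        # rule 4: both near 0
```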
24. Conclusions: the rules of probability are compact and simple. The concepts of marginalization, joint and conditional probability, Bayes' rule, and expectation underpin all of the models in this book. One remaining concept, conditional expectation, is discussed later.
25. Thank you for your attention. Based on: Computer vision: models, learning and inference. ©2011 Simon J.D. Prince. http://www.computervisionmodels.com/
26. Additional: calculating the k-th moment, $\mathrm{E}[x^k] = \int x^k\,\Pr(x)\,dx$.
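A sample-based sketch of this calculation (assuming NumPy; the definitions follow the expectation formulas above, and the helper names are hypothetical):

```python
# Sample-based estimates of the k-th moment E[x^k] and the k-th central
# moment E[(x - mu)^k], as referenced on this final slide.
import numpy as np

def kth_moment(x, k):
    """Estimate E[x^k] from samples (hypothetical helper)."""
    return np.mean(x ** k)

def kth_central_moment(x, k):
    """Estimate E[(x - E[x])^k] from samples (hypothetical helper)."""
    return np.mean((x - x.mean()) ** k)

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)           # standard Gaussian samples
print(kth_moment(x, 2), kth_central_moment(x, 4))  # ~1 and ~3
```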