
# Introduction to Probability

## Published Dec 03, 2012


Presentation for a reading session of Computer Vision: Models, Learning, and Inference.



## Introduction to Probability: Presentation Transcript

• Slide 1: Chapter 2: Introduction to Probability. Computer Vision: Models, Learning and Inference. Lukas Tencer.
• Slide 2, Random variables: A random variable x denotes a quantity that is uncertain. It may be the result of an experiment (flipping a coin) or a real-world measurement (measuring temperature). If we observe several instances of x we get different values. Some values occur more often than others, and this information is captured by a probability distribution. Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
• Slide 3, Discrete random variables: Ordered values are visualized as a histogram; unordered values as a Hinton diagram.
• Slide 4, Continuous random variables: A continuous random variable is described by the probability density function (PDF) of a distribution, which integrates to 1.
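The defining property of a PDF, that it integrates to 1, can be checked numerically. The sketch below is my own illustration (the slide does not name a specific distribution); it approximates the integral of a standard normal density with a Riemann sum:

```python
import numpy as np

# Sketch: numerically check that a PDF integrates to 1, using a
# standard normal density as the example (an assumed distribution,
# not one taken from the slide).
x = np.linspace(-10.0, 10.0, 100_001)
pdf = np.exp(-0.5 * x**2) / np.sqrt(2.0 * np.pi)

dx = x[1] - x[0]
area = np.sum(pdf) * dx  # Riemann-sum approximation of the integral
print(area)              # approximately 1.0
```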
• Slide 5, Joint probability: Consider two random variables x and y. If we observe multiple paired instances, some combinations of outcomes are more likely than others. This is captured in the joint probability distribution, written as Pr(x,y) and read as "the probability of x and y".
• Slide 6, Joint probability (figure illustrating a joint distribution).
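A discrete joint distribution like the one pictured on this slide can be represented as a 2-D table whose entries are non-negative and sum to 1. The numbers below are illustrative, not taken from the figure:

```python
import numpy as np

# Illustrative joint distribution Pr(x, y) over x in {0,1,2} and
# y in {0,1,2}; rows index x, columns index y.  The entries are
# made-up numbers chosen only so that they sum to 1.
joint = np.array([
    [0.10, 0.05, 0.05],
    [0.20, 0.10, 0.10],
    [0.10, 0.20, 0.10],
])

# Every valid joint distribution is non-negative and sums to 1.
assert np.all(joint >= 0.0)
assert np.isclose(joint.sum(), 1.0)

# Pr(x=1, y=2): the probability of one particular paired outcome.
print(joint[1, 2])
```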
• Slide 7, Marginalization: We can recover the probability distribution of any variable in a joint distribution by integrating (or summing) over the other variables.
• Slide 10, Marginalization: Marginalization works in higher dimensions as well; summing (or integrating) over some variables leaves the joint distribution over whichever variables remain.
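For a discrete joint distribution stored as a 2-D array, marginalization is just a sum along one axis. A minimal sketch with made-up numbers:

```python
import numpy as np

# Made-up joint distribution Pr(x, y); rows index x, columns index y.
joint = np.array([
    [0.10, 0.05, 0.05],
    [0.20, 0.10, 0.10],
    [0.10, 0.20, 0.10],
])

# Marginalize: sum over the variable we want to remove.
pr_x = joint.sum(axis=1)  # sum over y, leaving Pr(x)
pr_y = joint.sum(axis=0)  # sum over x, leaving Pr(y)

# Each marginal is itself a valid distribution.
assert np.isclose(pr_x.sum(), 1.0)
assert np.isclose(pr_y.sum(), 1.0)
```

In higher dimensions the same idea applies: summing over any subset of axes leaves the joint distribution of the remaining variables.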
• Slide 11, Conditional probability: The conditional probability of x given that y=y1 is the relative propensity of variable x to take different outcomes, given that y is fixed at y1. It is written as Pr(x|y=y1).
• Slide 12, Conditional probability: Conditional probability can be extracted from joint probability. Extract the appropriate slice of the joint distribution and normalize it (because the slice does not sum to 1).
• Slide 13, Conditional probability: More usually written in the compact form Pr(x|y) = Pr(x,y) / Pr(y), which can be rearranged to give Pr(x,y) = Pr(x|y) Pr(y).
• Slide 14, Conditional probability: This idea can be extended to more than two variables, e.g. Pr(w,x,y,z) = Pr(w|x,y,z) Pr(x|y,z) Pr(y|z) Pr(z).
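The "slice and normalize" recipe from slide 12 is short to write down in code. A sketch with made-up numbers:

```python
import numpy as np

# Made-up joint distribution Pr(x, y); rows index x, columns index y.
joint = np.array([
    [0.10, 0.05, 0.05],
    [0.20, 0.10, 0.10],
    [0.10, 0.20, 0.10],
])

# Conditional Pr(x | y=1): slice out the y=1 column, then normalize,
# because the slice on its own does not sum to 1.
slice_y1 = joint[:, 1]                    # [0.05, 0.10, 0.20]
pr_x_given_y1 = slice_y1 / slice_y1.sum()

assert np.isclose(pr_x_given_y1.sum(), 1.0)
print(pr_x_given_y1)
```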
• Slide 15, Bayes' rule: From before, Pr(x,y) = Pr(x|y) Pr(y) and Pr(x,y) = Pr(y|x) Pr(x). Combining: Pr(y|x) Pr(x) = Pr(x|y) Pr(y). Rearranging gives Bayes' rule: Pr(y|x) = Pr(x|y) Pr(y) / Pr(x).
• Slide 16, Bayes' rule terminology: Likelihood, the propensity for observing a certain value of x given a certain value of y. Prior, what we know about y before seeing x. Posterior, what we know about y after seeing x. Evidence, a constant to ensure that the left-hand side is a valid distribution. Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
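Bayes' rule and the terminology above can be exercised numerically. The binary state/observation setup and the numbers below are my own hypothetical example, not from the slides:

```python
# Sketch of Bayes' rule Pr(y|x) = Pr(x|y) Pr(y) / Pr(x), with
# hypothetical numbers: y is a binary hidden state and x a binary
# observation.
prior_y = 0.01          # Pr(y=1): prior, before seeing x
likelihood = 0.95       # Pr(x=1 | y=1): likelihood
false_positive = 0.05   # Pr(x=1 | y=0)

# Evidence Pr(x=1): the normalizing constant, via marginalization.
evidence = likelihood * prior_y + false_positive * (1.0 - prior_y)

# Posterior Pr(y=1 | x=1): what we know about y after seeing x.
posterior = likelihood * prior_y / evidence
print(posterior)
```

Note how a large likelihood still gives a modest posterior when the prior is small; the evidence term rescales the product into a valid distribution.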
• Slide 17, Independence: If two variables x and y are independent, then knowing the value of x tells us nothing about the value of y (and vice versa).
• Slide 19, Independence: When variables are independent, the joint distribution factorizes into a product of the marginals: Pr(x,y) = Pr(x) Pr(y).
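The factorization Pr(x,y) = Pr(x) Pr(y) corresponds to an outer product of the marginals. A sketch with illustrative marginals:

```python
import numpy as np

# Illustrative marginal distributions for independent x and y.
pr_x = np.array([0.2, 0.5, 0.3])
pr_y = np.array([0.6, 0.4])

# Independence: the joint is the outer product of the marginals,
# Pr(x, y) = Pr(x) Pr(y).
joint = np.outer(pr_x, pr_y)

# Marginalizing the joint recovers each marginal exactly.
assert np.allclose(joint.sum(axis=1), pr_x)
assert np.allclose(joint.sum(axis=0), pr_y)
assert np.isclose(joint.sum(), 1.0)
```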
• Slide 20, Expectation: Expectation tells us the expected or average value of some function f[x], taking into account the distribution of x. Definition: E[f[x]] = Σx f[x] Pr(x) for discrete x (an integral replaces the sum for continuous x).
• Slide 21, Expectation: Definition in two dimensions: E[f[x,y]] = Σx Σy f[x,y] Pr(x,y).
• Slide 22, Expectation, common cases: Common choices of f include f[x] = x, giving the mean, and f[x] = (x - μ)², giving the variance; higher moments such as skew and kurtosis follow the same pattern.
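The definition E[f[x]] = Σx f[x] Pr(x) and the common cases can be computed directly for a discrete variable. The distribution below is made-up for illustration:

```python
import numpy as np

# Made-up discrete distribution: outcomes x with probabilities Pr(x).
x = np.array([0.0, 1.0, 2.0, 3.0])
pr = np.array([0.1, 0.4, 0.3, 0.2])
assert np.isclose(pr.sum(), 1.0)

def expect(f):
    """E[f[x]]: the probability-weighted average of f[x]."""
    return np.sum(f(x) * pr)

mean = expect(lambda v: v)               # f[x] = x        -> mean
var = expect(lambda v: (v - mean) ** 2)  # f[x]=(x-mean)^2 -> variance
print(mean, var)
```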
• Slide 23, Expectation rules, rule 1: The expected value of a constant is the constant: E[k] = k.
• Slide 24, Expectation rules, rule 2: The expected value of a constant times a function is the constant times the expected value of the function: E[k f[x]] = k E[f[x]].
• Slide 25, Expectation rules, rule 3: The expectation of a sum of functions is the sum of the expectations of the functions: E[f[x] + g[x]] = E[f[x]] + E[g[x]].
• Slide 26, Expectation rules, rule 4: The expectation of a product of functions in variables x and y is the product of the expectations of the functions, if x and y are independent: E[f[x] g[y]] = E[f[x]] E[g[y]].
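The four rules can be verified numerically for small discrete distributions. The distributions and the functions f, g below are my own illustrative choices:

```python
import numpy as np

# Two independent discrete variables (made-up distributions).
x_vals = np.array([0.0, 1.0, 2.0]); pr_x = np.array([0.2, 0.5, 0.3])
y_vals = np.array([1.0, 4.0]);      pr_y = np.array([0.6, 0.4])
joint = np.outer(pr_x, pr_y)  # independence: Pr(x,y) = Pr(x) Pr(y)

def e_x(f):  # E[f[x]]
    return np.sum(f(x_vals) * pr_x)

def e_y(g):  # E[g[y]]
    return np.sum(g(y_vals) * pr_y)

k = 3.0
f = lambda v: v ** 2
g = lambda v: v + 1.0

# Rule 1: E[k] = k.
assert np.isclose(np.sum(k * pr_x), k)
# Rule 2: E[k f[x]] = k E[f[x]].
assert np.isclose(np.sum(k * f(x_vals) * pr_x), k * e_x(f))
# Rule 3: E[f[x] + g[x]] = E[f[x]] + E[g[x]].
assert np.isclose(np.sum((f(x_vals) + g(x_vals)) * pr_x), e_x(f) + e_x(g))
# Rule 4 (x, y independent): E[f[x] g[y]] = E[f[x]] E[g[y]].
e_fg = np.sum(f(x_vals)[:, None] * g(y_vals)[None, :] * joint)
assert np.isclose(e_fg, e_x(f) * e_y(g))
```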
• Slide 27, Conclusions: The rules of probability are compact and simple. The concepts of marginalization, joint and conditional probability, Bayes' rule, and expectation underpin all of the models in this book. One remaining concept, conditional expectation, is discussed later. Computer vision: models, learning and inference. ©2011 Simon J.D. Prince
• Slide 28: Thank you for your attention. Based on: Computer vision: models, learning and inference. ©2011 Simon J.D. Prince. http://www.computervisionmodels.com/
• Slide 29, Additional: Calculating the k-th moment.
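The transcript names the k-th moment but the formula itself was lost with the slide's figure. As a sketch: for a discrete variable, the k-th raw moment is E[x^k] and the k-th central moment is E[(x - μ)^k], both direct applications of the expectation definition above. The distribution is made-up for illustration:

```python
import numpy as np

# Made-up discrete distribution.
x = np.array([0.0, 1.0, 2.0, 3.0])
pr = np.array([0.1, 0.4, 0.3, 0.2])

def raw_moment(k):
    """k-th raw moment E[x^k]."""
    return np.sum(x ** k * pr)

def central_moment(k):
    """k-th central moment E[(x - mean)^k]."""
    mean = raw_moment(1)
    return np.sum((x - mean) ** k * pr)

print(raw_moment(1))      # the first raw moment is the mean
print(central_moment(2))  # the second central moment is the variance
```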