This document provides an overview of key probability concepts:
1) It defines probability as the number of times an event occurs divided by the total number of trials. Sample space and events of interest are also introduced.
2) Independence and conditional probability are covered. Bayes' theorem is explained as a way to convert between prior and posterior probabilities.
3) Random variables are defined, along with the cumulative distribution function (CDF) and probability density function (PDF) that describe their distributions. Common distributions such as the uniform and Gaussian are described.
4) Statistical concepts such as expectation, variance, and their estimation from samples are summarized. Noise models for images and optical flow are briefly outlined.
2. Intuitive Development
• Intuitively, the probability of an event a could be defined as:
  P(a) = lim (n → ∞) N(a)/n
  where N(a) is the number of times event a happens in n trials.
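The frequency definition above can be checked numerically; a minimal sketch, assuming a fair six-sided die as the experiment (an illustrative choice, not from the slides):

```python
import random

# Estimate P(a) as N(a)/n, where the event a is "the die shows 6".
random.seed(0)

n = 100_000
count = sum(1 for _ in range(n) if random.randint(1, 6) == 6)  # N(a)
p_hat = count / n  # relative-frequency estimate of P(a)

print(p_hat)  # approaches 1/6 ≈ 0.1667 as n grows
```

Increasing `n` drives the estimate toward the true probability, which is exactly the limit in the definition.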
3. More Formal:
• Ω is the Sample Space:
– Contains all possible outcomes of an experiment
• ω ∈ Ω is a single outcome
• A ⊆ Ω is a set of outcomes of interest
4. Independence
• The probability of independent events A, B
and C is given by:
P(ABC) = P(A)P(B)P(C)
• A and B are independent if knowing that A has happened says nothing about B happening: P(B|A) = P(B), or equivalently P(AB) = P(A)P(B).
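Independence can be verified exactly by enumerating a small sample space; a sketch using two fair dice (an illustrative experiment, not from the slides):

```python
from fractions import Fraction
from itertools import product

# Sample space: all 36 outcomes of rolling two fair dice.
omega = list(product(range(1, 7), repeat=2))

def prob(event):
    """Exact probability of an event (a predicate on outcomes)."""
    return Fraction(sum(1 for w in omega if event(w)), len(omega))

A = lambda w: w[0] % 2 == 0   # first die is even
B = lambda w: w[1] > 4        # second die shows 5 or 6
AB = lambda w: A(w) and B(w)  # both events occur

print(prob(A), prob(B), prob(AB))  # 1/2 1/3 1/6
assert prob(AB) == prob(A) * prob(B)  # P(AB) = P(A)P(B): independent
```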
8. Random Variables
• A (scalar) random variable X is a function
that maps the outcome of a random event
into real scalar values
[Diagram: an outcome ω ∈ Ω is mapped to a real value X(ω)]
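A toy sketch of the mapping, where the experiment is a single coin flip (the outcome names and values are illustrative):

```python
# A random variable maps outcomes of a random event to real values.
omega = ["heads", "tails"]  # sample space Ω

def X(w):
    """X: Ω → ℝ, here encoding heads as 1.0 and tails as 0.0."""
    return 1.0 if w == "heads" else 0.0

print([X(w) for w in omega])  # [1.0, 0.0]
```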
9. Random Variables Distributions
• Cumulative Distribution Function (CDF): F_X(x) = P(X ≤ x)
• Probability Density Function (PDF): p_X(x) = dF_X(x)/dx
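The PDF-as-derivative-of-the-CDF relation can be checked numerically; a sketch using the standard Gaussian (the choice of distribution is illustrative):

```python
import math

def gauss_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gauss_pdf(x):
    """Standard normal PDF."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

# Central finite difference of the CDF should match the PDF.
x, h = 0.7, 1e-5
deriv = (gauss_cdf(x + h) - gauss_cdf(x - h)) / (2 * h)
print(abs(deriv - gauss_pdf(x)))  # ≈ 0: the PDF is dF/dx
```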
19. Measuring Noise
• Noise Amount: SNR = σ_s / σ_n
• Noise Estimation:
– Given a sequence of images I_0, I_1, …, I_{N−1} of a static scene:
  Ī(i,j) = (1/N) Σ_{k=0}^{N−1} I_k(i,j)
  σ(i,j) = sqrt( (1/(N−1)) Σ_{k=0}^{N−1} (I_k(i,j) − Ī(i,j))² )
  σ_n = (1/(R·C)) Σ_{i=0}^{R−1} Σ_{j=0}^{C−1} σ(i,j)   (average over the R×C pixels)
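A sketch of this noise-estimation recipe: average N frames of a static scene, take the per-pixel standard deviation, then average over the R×C pixels. The synthetic scene and the σ value are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
R, C, N, sigma_true = 32, 32, 200, 2.0

scene = rng.uniform(50, 200, size=(R, C))                      # static scene
frames = scene + rng.normal(0.0, sigma_true, size=(N, R, C))   # I_0 … I_{N−1}

mean_img = frames.mean(axis=0)            # Ī(i, j)
sigma_map = frames.std(axis=0, ddof=1)    # σ(i, j), using the 1/(N−1) form
sigma_n = sigma_map.mean()                # average over the R×C pixels

print(sigma_n)  # close to the true σ of 2.0
```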
σ
20. Good estimators
• Data values z are random variables
• A parameter θ describes the distribution
• We have an estimator ϕ(z) of the unknown parameter θ
• If E(ϕ(z) − θ) = 0, or equivalently E(ϕ(z)) = E(θ), the estimator ϕ(z) is unbiased
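A Monte Carlo illustration of (un)biasedness, comparing the 1/n and 1/(n−1) sample-variance estimators of θ = σ² (the distribution and sample sizes are illustrative choices, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials, sigma2 = 5, 200_000, 1.0

# Many independent samples z of size n from N(0, 1), so the true σ² is 1.
samples = rng.normal(0.0, 1.0, size=(trials, n))
var_biased = samples.var(axis=1, ddof=0)    # divides by n
var_unbiased = samples.var(axis=1, ddof=1)  # divides by n − 1

print(var_biased.mean())    # ≈ (n−1)/n · σ² = 0.8 → E(ϕ(z)) ≠ θ, biased
print(var_unbiased.mean())  # ≈ σ² = 1.0          → E(ϕ(z)) = θ, unbiased
```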
24. Least Squares (LS)
• Bias: a larger variance in δA, an ill-conditioned A, and u oriented close to the eigenvector of the smallest eigenvalue all increase the bias
• The result is generally an underestimation
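A sketch of the underestimation effect: when the data matrix A itself is noisy (δA), plain least squares shrinks the estimate toward zero. The 1-D regression setup and noise levels below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, true_slope = 100_000, 2.0

x_clean = rng.normal(0.0, 1.0, n)
y = true_slope * x_clean                     # exact model, no output noise
x_noisy = x_clean + rng.normal(0.0, 1.0, n)  # δA: noise in the regressor

# LS slope from the noisy regressor; expected value is attenuated by
# σx² / (σx² + σδ²) = 0.5, so roughly 1.0 instead of 2.0.
slope_hat = (x_noisy @ y) / (x_noisy @ x_noisy)
print(slope_hat)  # noticeably below 2.0: generally underestimation
```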
25. Estimation of optical flow
• (a) Local information determines the component of flow perpendicular to edges
• (b) The optical flow estimated as the best intersection of the flow constraints is biased
26. Optical flow
• One patch gives a system:
– Each pixel i of the patch contributes one constraint:
  I_xi u + I_yi v = −I_ti
– Stacked over the n pixels of the patch:
  [I_x1 I_y1; I_x2 I_y2; …; I_xn I_yn] [u; v] = −[I_t1; I_t2; …; I_tn]
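A Lucas–Kanade-style sketch of solving this stacked system in the least-squares sense. The gradients below are synthetic, generated to be exactly consistent with a known flow, so the setup is illustrative rather than real image data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 49                                    # e.g. a 7×7 patch
true_flow = np.array([0.5, -0.25])        # known (u, v)

Ix = rng.normal(0.0, 1.0, n)              # spatial gradients
Iy = rng.normal(0.0, 1.0, n)
It = -(Ix * true_flow[0] + Iy * true_flow[1])  # consistent temporal gradient

A = np.column_stack([Ix, Iy])             # n×2 matrix of gradients
b = -It                                   # right-hand side
flow, *_ = np.linalg.lstsq(A, b, rcond=None)
print(flow)  # recovers [0.5, -0.25] exactly, since there is no noise here
```

Adding noise to `Ix`, `Iy`, and `It` reproduces the biased, generally underestimated flow discussed above.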
27. Noise model
• additive, identically, independently distributed,
symmetric noise:
  I_xi = I′_xi + N_xi
  I_yi = I′_yi + N_yi
  I_ti = I′_ti + N_ti
  with E(N_xi N_xi) = E(N_yi N_yi) = σ_s² and E(N_ti N_ti) = σ_t²
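A sketch of this additive i.i.d. noise model, checking the stated second moments empirically (Gaussian noise and the σ values are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n, sigma_s, sigma_t = 500_000, 0.3, 0.5

Nx = rng.normal(0.0, sigma_s, n)   # noise on I_x
Ny = rng.normal(0.0, sigma_s, n)   # noise on I_y
Nt = rng.normal(0.0, sigma_t, n)   # noise on I_t

print((Nx * Nx).mean(), (Ny * Ny).mean())  # ≈ σ_s² = 0.09
print((Nt * Nt).mean())                    # ≈ σ_t² = 0.25
```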