This document provides an overview of probability theory concepts related to random variables. It defines random variables and their probability mass functions and cumulative distribution functions. It describes different types of random variables including discrete, continuous, Bernoulli, binomial, geometric, Poisson, uniform, exponential, gamma, and normal random variables. It also covers concepts of joint and marginal distributions as well as independent and conditional random variables. The document uses mathematical notation to formally define these concepts.
2. Random Variables
Definition 1. A random variable is a mapping X : S → R that associates
a unique numerical value X(ω) to each outcome ω.
Letting X denote the random variable defined as the sum of two fair dice, then

P(X = 2) = P({(1, 1)}) = 1/36,
P(X = 3) = P({(1, 2), (2, 1)}) = 2/36,
P(X = 4) = P({(1, 3), (2, 2), (3, 1)}) = 3/36.
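These probabilities can be verified by brute-force enumeration of the 36 equally likely outcomes; a minimal Python sketch (the helper `p_sum` is illustrative, not from the slides):

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of rolling two fair dice
outcomes = list(product(range(1, 7), repeat=2))

def p_sum(s):
    """P(X = s), where X is the sum of the two dice."""
    favorable = sum(1 for a, b in outcomes if a + b == s)
    return Fraction(favorable, len(outcomes))

print(p_sum(2), p_sum(3), p_sum(4))
```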
– Typeset by FoilTEX – 1
3. Distribution Functions and Probability Functions
Definition 2. The cumulative distribution function (CDF) F_X : R → [0, 1] of a r.v. X is defined by

F_X(x) = P(X ≤ x).
Example 1. Flip a fair coin twice and let X be the number of heads. Then
P (X = 0) = P (X = 2) = 1/4 and P (X = 1) = 1/2. The distribution
function is
F_X(x) = 0 if x < 0; 1/4 if 0 ≤ x < 1; 3/4 if 1 ≤ x < 2; 1 if x ≥ 2.
5. Discrete Random Variables
Definition 3. X is discrete if it takes countably many values {x_1, x_2, . . .}. We define the probability mass function (PMF), or probability function, of the r.v. X by

f_X(x) = P(X = x).

Thus f_X(x) ≥ 0 for all x ∈ R and Σ_{i=1}^∞ f_X(x_i) = 1. The CDF of X is related to f_X by

F_X(x) = P(X ≤ x) = Σ_{x_i ≤ x} f_X(x_i).
6. The Bernoulli Random Variable
Suppose that a trial (or an experiment) whose outcome can be classified as either a "success" or a "failure" is performed. If we let X equal 1 if the outcome is a success and 0 if it is a failure, then the probability mass function of X is given by

p(0) = P(X = 0) = 1 − p (1)
p(1) = P(X = 1) = p (2)

where p, 0 ≤ p ≤ 1, is the probability that the trial is a "success".
7. The Binomial Random Variable
• Suppose that n independent trials, each of which results in a "success" with probability p and in a "failure" with probability 1 − p, are performed.
• If X represents the number of successes that occur in the n trials, then X is said to be a binomial random variable with parameters (n, p).
• The probability mass function of a binomial random variable is given by

p(i) = (n choose i) p^i (1 − p)^(n−i), i = 0, 1, . . . , n.
Example 2. Four fair coins are flipped. If the outcomes are assumed
independent, what is the probability that two heads and two tails are
obtained?
Example 3. It is known that all items produced by a certain machine will
be defective with probability 0.1, independently of each other. What is the
probability that in a sample of three items, at most one will be defective?
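Both examples reduce to evaluating the binomial pmf; a short Python sketch (the helper `binom_pmf` is illustrative):

```python
from math import comb

def binom_pmf(i, n, p):
    """P(X = i) for X ~ Binomial(n, p): C(n, i) p^i (1-p)^(n-i)."""
    return comb(n, i) * p**i * (1 - p)**(n - i)

# Example 2: two heads (and two tails) in four fair coin flips
p_two_heads = binom_pmf(2, 4, 0.5)        # 6/16 = 0.375

# Example 3: at most one defective among three items, p = 0.1
p_at_most_one = binom_pmf(0, 3, 0.1) + binom_pmf(1, 3, 0.1)  # 0.972

print(p_two_heads, p_at_most_one)
```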
9. The Geometric Random Variable
• Suppose that independent trials, each having a probability p of being a
success, are performed until a success occurs.
• Let X be the number of trials required until the first success; then X is
said to be a geometric random variable with parameter p.
• Its probability mass function is given by

p(n) = P(X = n) = (1 − p)^(n−1) p, n = 1, 2, . . .
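A quick numeric check of this pmf; a sketch with the arbitrary choice p = 0.5:

```python
def geom_pmf(n, p):
    """P(X = n) = (1-p)^(n-1) * p for X ~ Geometric(p)."""
    return (1 - p)**(n - 1) * p

p = 0.5
# P(first success within 3 trials) = 1/2 + 1/4 + 1/8 = 7/8
p_le_3 = sum(geom_pmf(n, p) for n in range(1, 4))
# The pmf sums to 1 (the partial sum converges geometrically)
total = sum(geom_pmf(n, p) for n in range(1, 200))
print(p_le_3, total)
```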
10. The Poisson Random Variable
A random variable X, taking on one of the values 0, 1, 2, . . ., is said to be a Poisson random variable with parameter λ if, for some λ > 0,

p(i) = P(X = i) = e^(−λ) λ^i / i!, i = 0, 1, . . .
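The Poisson distribution also arises as the limit of Binomial(n, λ/n) as n → ∞; a Python sketch checking normalization and this approximation (λ = 2 and n = 10,000 are arbitrary choices):

```python
from math import comb, exp, factorial

def poisson_pmf(i, lam):
    """P(X = i) = e^(-λ) λ^i / i! for X ~ Poisson(λ)."""
    return exp(-lam) * lam**i / factorial(i)

lam = 2.0
# The pmf sums to 1 (the tail beyond i = 100 is negligible for λ = 2)
total = sum(poisson_pmf(i, lam) for i in range(100))

# Poisson(λ) approximates Binomial(n, λ/n) when n is large
n = 10_000
binom_approx = comb(n, 3) * (lam / n)**3 * (1 - lam / n)**(n - 3)
print(total, poisson_pmf(3, lam), binom_approx)
```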
11. Continuous Random Variables
Definition 4. A r.v. X is continuous if there exists a function f_X such that f_X(x) ≥ 0 for all x, ∫_{−∞}^{∞} f_X(x) dx = 1 and, for every a ≤ b,

P(a < X < b) = ∫_a^b f_X(x) dx.

The function f_X is called the probability density function (PDF). We have that

F_X(x) = ∫_{−∞}^x f_X(t) dt

and f_X(x) = F_X′(x) at all points x at which F_X is differentiable.
• If X is continuous then P(X = x) = 0 for all x.
• f(x) is different from P(X = x) in the continuous case.
• A PDF can be bigger than 1 (unlike a mass function). For example, let

f(x) = 5 if x ∈ [0, 1/5], and 0 otherwise;

then f(x) ≥ 0 and ∫ f(x) dx = 1, so this is a well-defined PDF even though f(x) = 5 in some places.
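A midpoint-rule sanity check that this density still integrates to 1; a minimal sketch:

```python
def f(x):
    """Density equal to 5 on [0, 1/5] and 0 elsewhere."""
    return 5.0 if 0 <= x <= 0.2 else 0.0

# Midpoint-rule approximation of the integral of f over [0, 1]
n = 100_000
dx = 1.0 / n
integral = sum(f((k + 0.5) * dx) * dx for k in range(n))
print(integral)  # close to 1 even though f exceeds 1 on [0, 1/5]
```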
Lemma 1. Let F be the CDF for a r.v. X. Then:
1. P(X = x) = F(x) − F(x−), where F(x−) = lim_{y↑x} F(y),
2. P (x < X ≤ y) = F (y) − F (x),
3. P (X > x) = 1 − F (x),
4. If X is continuous then
P (a < X < b) = P (a ≤ X < b) = P (a < X ≤ b) = P (a ≤ X ≤ b)
14. The Uniform Random Variable
A random variable is said to be uniformly distributed over the interval (0, 1) if its probability density function is given by

f(x) = 1 if 0 ≤ x ≤ 1, and 0 otherwise.

In the general case of the interval (α, β),

f(x) = 1/(β − α) if α ≤ x ≤ β, and 0 otherwise.
Example 4. Calculate the cumulative distribution function of a random
variable uniformly distributed over (α, β).
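A worked solution, integrating the general density above:

```latex
F(x) = \int_{-\infty}^{x} f(t)\,dt =
\begin{cases}
0, & x \le \alpha \\
\dfrac{x - \alpha}{\beta - \alpha}, & \alpha < x < \beta \\
1, & x \ge \beta
\end{cases}
```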
16. Exponential Random Variables
A continuous random variable whose probability density function is given, for some λ > 0, by

f(x) = λe^(−λx) if x ≥ 0, and 0 if x < 0,
is said to be an exponential random variable with parameter λ.
17. Gamma Random Variables
A continuous random variable whose density is given by

f(x) = λe^(−λx) (λx)^(α−1) / Γ(α) if x ≥ 0, and 0 if x < 0,

for some λ > 0, α > 0, is said to be a gamma random variable with parameters (α, λ). The quantity Γ(α) is called the gamma function and is defined by

Γ(α) = ∫_0^∞ e^(−x) x^(α−1) dx.
18. Normal Random Variables
X is a normal random variable with parameters (µ, σ²) if the density of X is given by

f(x) = (1/(√(2π) σ)) e^(−(x−µ)²/(2σ²)), −∞ < x < ∞. (3)
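Two standard facts about this density, checked by midpoint-rule integration; a sketch for the standard normal (µ = 0, σ = 1):

```python
from math import exp, pi, sqrt

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Normal density (1 / (sqrt(2π) σ)) exp(-(x-µ)² / (2σ²))."""
    return exp(-(x - mu)**2 / (2 * sigma**2)) / (sqrt(2 * pi) * sigma)

def integrate(a, b, n=100_000):
    """Midpoint-rule integral of the standard normal density over [a, b]."""
    dx = (b - a) / n
    return sum(normal_pdf(a + (k + 0.5) * dx) * dx for k in range(n))

total = integrate(-8, 8)      # ≈ 1 (the tails beyond ±8 are negligible)
one_sigma = integrate(-1, 1)  # ≈ 0.6827: about 68% of the mass within one σ
print(total, one_sigma)
```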
19. Remarks
• Read X ∼ F as "X has distribution F".
• X is a r.v.; x denotes a particular value of the r.v.; n and p (e.g. for the Binomial distribution) are parameters, that is, fixed real numbers. Parameters are usually unknown and must be estimated from data.
• In practice, we think of a r.v. as a random number, but formally it is a mapping defined on some sample space.
20. Jointly Distributed Random Variables
Given a pair of discrete r.vs X and Y , define the joint mass function by
f (x, y) = P (X = x, Y = y).
Definition 5. In the continuous case, we call a function f(x, y) a PDF for the r.vs (X, Y) if

1. f(x, y) ≥ 0 for all (x, y),
2. ∫_{−∞}^{∞} ∫_{−∞}^{∞} f(x, y) dx dy = 1 and, for any set A ⊂ R × R, P((X, Y) ∈ A) = ∫∫_A f(x, y) dx dy.

In the discrete or continuous case we define the joint CDF as F_{X,Y}(x, y) = P(X ≤ x, Y ≤ y).
Example 5. At a party N men throw their hats into the center of a
room. The hats are mixed up and each man randomly selects one. Find
the expected number of men that select their own hats.
Example 6. Suppose there are 25 different types of coupons and suppose
that each time one obtains a coupon, it is equally likely to be any one of
the 25 types. Compute the expected number of different types that are
contained in a set of 10 coupons.
22. Marginal Distributions
Definition 6. If (X, Y) have a joint distribution with mass function f_{X,Y}, then the marginal mass function for X is defined by

f_X(x) = P(X = x) = Σ_y P(X = x, Y = y) = Σ_y f(x, y)

and the marginal mass function for Y is defined by

f_Y(y) = P(Y = y) = Σ_x P(X = x, Y = y) = Σ_x f(x, y).
Example 7. Calculate the marginal distributions for X and Y from the table below.
          Y = 0   Y = 1   f_X(x)
X = 0     1/10    2/10    3/10
X = 1     3/10    4/10    7/10
f_Y(y)    4/10    6/10
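The marginal row and column of the table can be recomputed directly from Definition 6; a minimal Python sketch:

```python
from fractions import Fraction as F

# Joint mass function from Example 7: joint[(x, y)] = P(X = x, Y = y)
joint = {(0, 0): F(1, 10), (0, 1): F(2, 10),
         (1, 0): F(3, 10), (1, 1): F(4, 10)}

# Marginals: sum the joint mass over the other variable
f_X = {x: sum(p for (a, b), p in joint.items() if a == x) for x in (0, 1)}
f_Y = {y: sum(p for (a, b), p in joint.items() if b == y) for y in (0, 1)}
print(f_X, f_Y)
```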
Definition 7. For continuous r.vs, the marginal densities are

f_X(x) = ∫ f(x, y) dy  and  f_Y(y) = ∫ f(x, y) dx.

The corresponding marginal distribution functions are denoted by F_X and F_Y.
Example 8. Suppose that
f(x, y) = x + y if 0 ≤ x ≤ 1, 0 ≤ y ≤ 1, and 0 otherwise.

Then

f_Y(y) = ∫_0^1 (x + y) dx = ∫_0^1 x dx + ∫_0^1 y dx = 1/2 + y.
25. Independent Random Variables
Definition 8. Two r.vs X and Y are said to be independent if, for every
A and B,
P (X ∈ A, Y ∈ B) = P (X ∈ A)P (Y ∈ B)
Theorem 1. Let X and Y have joint pdf f_{X,Y}. Then X and Y are independent if and only if f_{X,Y}(x, y) = f_X(x) f_Y(y) for all x, y.
Example 9. Suppose that X and Y are independent and both have the
same density
f(x) = 2x if 0 ≤ x ≤ 1, and 0 otherwise.

Let us find P(X + Y ≤ 1).
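By independence the joint density is f(x)f(y) = 4xy, and the inner integral over y has the closed form ∫_0^{1−x} 2y dy = (1 − x)², so P(X + Y ≤ 1) = ∫_0^1 2x(1 − x)² dx = 1/6. A midpoint-rule sketch of the remaining one-dimensional integral:

```python
# P(X + Y <= 1) = ∫_0^1 2x (1 - x)^2 dx, evaluated by the midpoint rule
n = 100_000
dx = 1.0 / n
p = 0.0
for k in range(n):
    x = (k + 0.5) * dx
    p += 2 * x * (1 - x)**2 * dx
print(p)  # ≈ 1/6
```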
Theorem 2. Suppose that the range of X and Y is a rectangle (possibly
infinite). If f (x, y) = g(x)h(y) for some functions g and h (not necessarily
probability density functions) then X and Y are independent.
Example 10. Let X and Y have density
f(x, y) = 2e^(−(x+2y)) if x > 0 and y > 0, and 0 otherwise.
The range of X and Y is the rectangle (0, ∞) × (0, ∞). We can write
f(x, y) = g(x)h(y), where g(x) = 2e^(−x) and h(y) = e^(−2y). Thus, X and Y
are independent.
28. Conditional Distributions
• One of the most useful concepts in probability theory
• We are often interested in calculating probabilities when some partial
information is available
• When calculating a desired probability or expectation, it is often useful to first "condition" on some appropriate r.v.
Definition 9. The conditional probability mass function is

f_{X|Y}(x|y) = P(X = x | Y = y) = P(X = x, Y = y) / P(Y = y) = f_{X,Y}(x, y) / f_Y(y)

if f_Y(y) > 0.
Definition 10. For continuous r.vs, the conditional probability density function is

f_{X|Y}(x|y) = f_{X,Y}(x, y) / f_Y(y)

assuming that f_Y(y) > 0. Then,

P(X ∈ A | Y = y) = ∫_A f_{X|Y}(x|y) dx.
Example 11. Suppose that X ∼ Unif(0, 1). After obtaining a value of X we generate Y | X = x ∼ Unif(x, 1). What is the marginal distribution of Y?
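Integrating out x gives f_Y(y) = ∫_0^y 1/(1 − x) dx = −log(1 − y) for 0 < y < 1, and hence F_Y(y) = y + (1 − y) log(1 − y). A Monte Carlo sketch checking this answer (seed and sample size are arbitrary):

```python
import random
from math import log

random.seed(0)
N = 200_000

# Simulate the two-stage experiment: X ~ Unif(0,1), then Y | X = x ~ Unif(x, 1)
ys = []
for _ in range(N):
    x = random.random()
    ys.append(x + (1 - x) * random.random())

# Compare the empirical CDF at y = 0.5 with F_Y(y) = y + (1 - y) log(1 - y)
y0 = 0.5
empirical = sum(1 for y in ys if y <= y0) / N
analytic = y0 + (1 - y0) * log(1 - y0)
print(empirical, analytic)
```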
30. Multivariate Distributions and IID Samples
• We call X = (X_1, . . . , X_n), where X_1, . . . , X_n are r.vs, a random vector. If X_1, . . . , X_n are independent and each has the same marginal distribution with density f, we say that X_1, . . . , X_n are IID (independent and identically distributed).
• Much of statistical theory and practice begins with IID observations.
31. Transformations of Random Variables
• Suppose that X is a r.v and let Y = r(X) be a function of X, e.g. Y = X² or Y = e^X. How do we compute the PDF and CDF of Y?
• In the discrete case,

f_Y(y) = P(Y = y) = P(r(X) = y) = P({x : r(x) = y}) = P(X ∈ r^(−1)(y))
• In the continuous case
1. For each y, find the set A_y = {x : r(x) ≤ y}
2. Find the CDF:

F_Y(y) = P(Y ≤ y) = P(r(X) ≤ y) (4)
       = P({x : r(x) ≤ y}) = ∫_{A_y} f_X(x) dx (5)

3. The PDF is f_Y(y) = F_Y′(y).

Example 12. Let f_X(x) = e^(−x) for x > 0. Then F_X(x) = ∫_0^x f_X(s) ds = 1 − e^(−x). Let Y = r(X) = log X. Then A_y = {x : x ≤ e^y} and

F_Y(y) = P(Y ≤ y) = P(log X ≤ y) = P(X ≤ e^y) = F_X(e^y) = 1 − e^(−e^y).
Therefore, f_Y(y) = e^y e^(−e^y) for y ∈ R.
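As a sanity check, this density integrates to 1 and reproduces F_Y(y) = 1 − e^(−e^y); a numeric midpoint-rule sketch:

```python
from math import exp

def f_Y(y):
    """Density of Y = log X when X has density e^(-x) on (0, ∞)."""
    return exp(y) * exp(-exp(y))

# Midpoint-rule integration over [-20, 5]; the mass outside is negligible
n = 100_000
a, b = -20.0, 5.0
dy = (b - a) / n
total = 0.0
F0 = 0.0  # numeric approximation of F_Y(0)
for k in range(n):
    y = a + (k + 0.5) * dy
    total += f_Y(y) * dy
    if y <= 0.0:
        F0 += f_Y(y) * dy
print(total, F0)  # total ≈ 1, F0 ≈ 1 - e^(-1)
```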