Conditional Probability Mass Function
• The probability distribution of a discrete random variable can be characterized by its probability mass function (PMF).
• When that distribution is updated to incorporate new information, the result is a conditional probability distribution, which can be characterized by a conditional probability mass function.
• A conditional PMF is especially appropriate when the experiment is a compound one, in which the second part of the experiment depends upon the outcome of the first part.
• It has the usual properties of a PMF: its values lie between 0 and 1 and sum to one.
Conditional Probability Mass Function
• We recall that a conditional probability P[A|B] is the probability of an event A, given that we know that some other event B has occurred.
• Except when the two events are independent of each other, the knowledge that B has occurred will change the probability P[A]. In other words, P[A|B] is our new probability in light of the additional knowledge.
• In many practical situations, two random mechanisms are at work and are described by events A and B.
• To compute probabilities for a complex experiment it is usually convenient to use a conditioning argument to
simplify the reasoning.
• For example, say we choose one of two coins and toss it 4 times. We might inquire as to the probability of
observing 2 or more heads. However, this probability will depend upon which coin was chosen, as for example in
the situation where one coin is fair and the other coin is weighted.
• It is therefore convenient to define conditional probability mass functions, PX[k|coin 1 chosen] and PX[k|coin 2 chosen], since once we know which coin is chosen, we can easily specify the PMF.
• In particular, for this example each conditional PMF is binomial, with success probability p depending on which coin is chosen and with k denoting the number of heads. Once the conditional PMFs are known, the law of total probability gives the probability of observing k heads for this experiment as the PMF:
PX[k] = PX[k|coin 1 chosen] P[coin 1 chosen] + PX[k|coin 2 chosen] P[coin 2 chosen]
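As a minimal sketch in Python (the specific numbers are assumed purely for illustration: coin 1 fair, coin 2 landing heads with probability 0.8, each coin equally likely to be chosen), the mixture PMF and the probability of 2 or more heads can be computed as:

from math import comb

def binomial_pmf(k, n, p):
    # P[k heads in n independent tosses, each landing heads with probability p]
    return comb(n, k) * p**k * (1 - p)**(n - k)

n = 4
p_head = {1: 0.5, 2: 0.8}   # assumed head probability for each coin
prior = {1: 0.5, 2: 0.5}    # assumed P[coin i chosen]

# Law of total probability: PX[k] = sum over i of PX[k | coin i] * P[coin i]
pmf = {k: sum(binomial_pmf(k, n, p_head[i]) * prior[i] for i in (1, 2))
       for k in range(n + 1)}
p_two_or_more = sum(pmf[k] for k in range(2, n + 1))
print(pmf, p_two_or_more)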
Conditional Probability Mass Function cont.
• The PMF that is required depends directly on the conditional PMFs (of which there are two).
• The use of conditional PMFs greatly simplifies our task in that given the event, i.e., the coin chosen, the PMF of
the number of heads observed readily follows. Also, in many problems, including this one, it is actually the
conditional PMFs that are specified in the description of the experimental procedure.
• It makes sense, therefore, to define a conditional PMF and study its properties.
• For the most part, the definitions and properties will mirror those of the conditional probability P[A|B], where A and B are events defined on SX,Y.
• Just as we used conditional probabilities to evaluate the likelihood of one event given another, we develop here the concept of conditional distributions (conditional probability mass functions and, for continuous random variables, conditional probability density functions) to evaluate the behavior of one random variable given knowledge of another.
• Conditional probability P[A|B] is a number that expresses our new knowledge about the occurrence of event A,
when we learn that another event B occurs.
• For the conditional probability mass function, we consider event A to be the observation of a particular value of a random variable: A = {X = x}.
• The conditioning event B contains information about X but not the precise value of X.
• For example, learning that X ≤ 33 is the occurrence of an event B that describes some property of X. The occurrence of the conditioning event B changes the probabilities of the events {X = x}.
Conditional Probability Mass Function cont.
We can find the conditional probabilities P [A|B] = P [X = x|B] for all real numbers x. This collection of probabilities is
a function of x.
It is the conditional probability mass function of random variable X, given that B occurred.
Definition 2.19 Conditional PMF
Given the event B, with P[B] > 0, the conditional probability mass function of X is
PX|B (x) = P [X = x|B]
About notation:
The name of a PMF is the letter P with a subscript containing the name of the random variable. For a conditional
PMF, the subscript contains the name of the random variable followed by a vertical bar followed by a statement of
the conditioning event.
The argument of the function is usually the lowercase letter corresponding to the variable name. The argument is a
dummy variable. It could be any letter, so that PX|B(x) is the same function as PX|B(u).
Sometimes we write the function with no specified argument at all, PX|B(·).
In some applications, we begin with a set of conditional PMFs, PX|Bi(x), i = 1, 2, . . . , m, where B1, B2, . . . , Bm is an event space.
We then use the law of total probability to find the PMF PX (x):
P [A] = ∑i=1,m P [A|Bi ] P [Bi ]
where: P(A|B) = P(A ∩ B) / P(B)
Conditional Probability Mass Function cont.
Theorem 2.16
A random variable X resulting from an experiment with event space B1, B2, . . . , Bm has PMF
PX(x) = ∑i=1,m PX|Bi(x) P[Bi]
Proof: The theorem follows directly from Theorem 1.10 (Law of Total Probability), P[A] = ∑i=1,m P[A|Bi] P[Bi], with A denoting the event {X = x}.
When a conditioning event B ⊂ SX, the PMF PX(x) determines both the probability of B and the conditional PMF:
PX|B(x) = P[X = x, B] / P[B]
Now either the event {X = x} is contained in the event B or it is not.
If x ∈ B, then {X = x} ∩ B = {X = x} and P[X = x, B] = PX(x). Otherwise, if x ∉ B, then {X = x} ∩ B = ∅ and P[X = x, B] = 0.
The next theorem uses this equation for PX|B(x) to calculate the conditional PMF.
Conditional Probability Mass Function cont.
Theorem 2.17
PX|B(x) = PX(x) / P[B] for x ∈ B, and PX|B(x) = 0 otherwise.
The theorem states that when we learn that the outcome x ∈ B, the probabilities of all x ∉ B become zero in our conditional model, and the probabilities of all x ∈ B are proportionally higher than they were before we learned that x ∈ B.
PX|B(x) is a perfectly respectable PMF.
Because the conditioning event B tells us that all possible outcomes are in B, we can rewrite Theorem 2.1, which states:
For a discrete random variable X with PMF PX (x) and range SX :
(a) For any x, PX(x) ≥ 0
(b) ∑x∈SX PX (x) = 1
(c) For any event B ⊂ SX , the probability that X is in the set B is P[B] =∑x∈B PX(x)
Using B in place of SX, we obtain the next theorem.
Conditional Probability Mass Function cont.
Theorem 2.18
(a) For any x ∈ B, PX|B(x) ≥ 0
(b) ∑x∈B PX|B (x) = 1
(c) For any event C ⊂ B, P[C|B], the conditional probability that X is in the set C, is P[C|B] =∑x∈C PX|B (x)
Therefore, we can compute averages of the conditional random variable X|B and averages of functions of X|B in the
same way that we compute averages of X.
The only difference is that we use the conditional PMF PX|B(·) in place of PX(·).
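The renormalization described by Theorems 2.17 and 2.18 is easy to check numerically. A minimal Python sketch, assuming as an example a fair die X conditioned on B = {X > 3}:

pX = {x: 1/6 for x in range(1, 7)}                  # PMF of a fair die
B = {4, 5, 6}
pB = sum(pX[x] for x in B)                          # P[B] = 1/2
# Theorem 2.17: zero outside B, rescaled by 1/P[B] inside B
pX_given_B = {x: (pX[x] / pB if x in B else 0.0) for x in pX}
assert all(p >= 0 for p in pX_given_B.values())     # Theorem 2.18(a)
assert abs(sum(pX_given_B.values()) - 1.0) < 1e-12  # Theorem 2.18(b)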
Conditional Probability Mass Function cont.
Definition 2.20
Conditional Expected Value
The conditional expected value of random variable X given condition B is
E [X|B] = μX|B =∑x∈B x PX|B(x)
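Continuing the assumed fair-die example with B = {X > 3}: PX|B(x) = 1/3 for x ∈ {4, 5, 6}, so E[X|B] = (4 + 5 + 6)/3 = 5, compared with the unconditional E[X] = 3.5.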
When we are given a family of conditional probability models PX|Bi(x) for an event space B1, B2, . . . , Bm, we can compute the expected value E[X] in terms of the conditional expected values E[X|Bi].
Theorem 2.19 For a random variable X resulting from an experiment with event space B1, B2, . . . , Bm,
E [X] = ∑i=1,m E [X|Bi ] P [Bi ]
Proof
Since E[X] = ∑x x PX(x), we can use Theorem 2.16, PX(x) = ∑i=1,m PX|Bi(x) P[Bi],
to write
E[X] = ∑x x ∑i=1,m PX|Bi(x) P[Bi] = ∑i=1,m P[Bi] ∑x x PX|Bi(x) = ∑i=1,m P[Bi] E[X|Bi]
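A short Python sketch verifying Theorem 2.19 on the same assumed die example, with event space B1 = {X ≤ 3}, B2 = {X > 3}:

pX = {x: 1/6 for x in range(1, 7)}
events = [{1, 2, 3}, {4, 5, 6}]
EX = sum(x * p for x, p in pX.items())              # E[X] = 3.5
total = 0.0
for B in events:
    pB = sum(pX[x] for x in B)                      # P[Bi]
    EX_given_B = sum(x * pX[x] / pB for x in B)     # E[X|Bi]: 2 and 5
    total += EX_given_B * pB
assert abs(EX - total) < 1e-12                      # E[X] = sum_i E[X|Bi] P[Bi]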
Conditional Probability Mass Function cont.
For a derived random variable Y = g(X), we have the equivalent of Theorem 2.10:
E[Y] = μY = ∑x∈SX g(x) PX(x)
Theorem 2.20 The conditional expected value of Y = g(X) given condition B is
E [Y | B] = E [g(X) | B] = ∑x∈B g(x) PX|B (x)
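Continuing the assumed die example with B = {X > 3} and g(X) = X², Theorem 2.20 gives the conditional second moment E[X²|B] = (16 + 25 + 36)/3 = 77/3 ≈ 25.67.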
The function g(xi) = EY|X[Y|xi] is the mean of the conditional PMF PY|X[yj|xi]. It is known as the conditional mean.
This terminology is widespread and so we will adhere to it, although we should keep in mind that it denotes the usual mean of the conditional PMF.
It is also of interest to determine the expectation of quantities other than Y with respect to the conditional PMF. This is symbolized by EY|X[g(Y)|xi] and is called the conditional expectation of g(Y). For example, if g(Y) = Y², it becomes the conditional expectation of Y², or equivalently the conditional second moment.
Lastly, we should be aware that the conditional mean is the optimal predictor of a random variable based on observation of a second random variable, as the sketch below illustrates.
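A minimal sketch of this last claim, using a small joint PMF assumed purely for illustration: predicting Y by the conditional mean E[Y|X = x] yields a mean squared error no larger than that of the best constant predictor E[Y].

joint = {(0, 0): 0.3, (0, 1): 0.1, (1, 0): 0.2, (1, 1): 0.4}   # assumed P[X=x, Y=y]
pX = {x: sum(p for (xx, _), p in joint.items() if xx == x) for x in (0, 1)}
cond_mean = {x: sum(y * p for (xx, y), p in joint.items() if xx == x) / pX[x]
             for x in (0, 1)}
# MSE of predicting Y by E[Y|X] versus by the constant E[Y]
mse_cond = sum(p * (y - cond_mean[x])**2 for (x, y), p in joint.items())
EY = sum(y * p for (_, y), p in joint.items())
mse_const = sum(p * (y - EY)**2 for (_, y), p in joint.items())
assert mse_cond <= mse_const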
Conditional Probability Mass Function cont.
It follows that the conditional variance and conditional standard deviation conform to Definition 2.16,
Var[X] = E[(X − μX)²]
and Definition 2.17,
σX = √Var[X]
with X|B replacing X:
σX|B = √Var[X|B]
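Continuing the assumed die example, a short sketch of the conditional variance and standard deviation, using PX|B(·) in place of PX(·):

pX_given_B = {4: 1/3, 5: 1/3, 6: 1/3}                      # from Theorem 2.17
mu = sum(x * p for x, p in pX_given_B.items())             # E[X|B] = 5
var = sum((x - mu)**2 * p for x, p in pX_given_B.items())  # Var[X|B] = 2/3
sigma = var ** 0.5                                         # sigma_X|B ≈ 0.816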
To conclude: a conditional PMF is especially useful when the experiment is a compound one, in which the second part of the experiment depends upon the outcome of the first part.
The conditional PMF has the usual properties of a PMF: its values lie between 0 and 1 and sum to one.
Chapter Summary
With all of the concepts and formulas introduced in this chapter, there is a high probability that the beginning
student will be confused at this point.
Part of the problem is that we are dealing with several different mathematical entities including random variables,
probability functions, and parameters.
Before plugging numbers or symbols into a formula, it is good to know what the entities are.
The random variable X transforms outcomes of an experiment to real numbers. Note that X is the name of the
random variable.
A possible observation is x, which is a number. SX is the range of X, the set of all possible observations x.
The PMF PX (x) is a function that contains the probability model of the random variable X.
The PMF gives the probability of observing any x. PX (·) contains our information about the randomness of X.
Chapter Summary
The expected value E[X] = μX and the variance Var[X] are numbers that describe the entire probability model.
Mathematically, each is a property of the PMF PX (·). The expected value is a typical value of the random variable.
The variance describes the dispersion of sample values about the expected value.
A function of a random variable Y = g(X) transforms the random variable X into a different random variable Y .
For each observation X = x, g(·) is a rule that tells you how to calculate y = g(x), a sample value of Y .
Although PX (·) and g(·) are both mathematical functions, they serve different purposes here. PX (·) describes the
randomness in an experiment.
On the other hand, g(·) is a rule for obtaining a new random variable from a random variable you have observed.
The Conditional PMF PX|B(x) is the probability model that we obtain when we gain partial knowledge of the outcome
of an experiment.
The partial knowledge is that the outcome x ∈ B ⊂ SX . The conditional probability model has its own expected value,
E[X|B], and its own variance, Var[X|B].
Thank you.
