Bayes' Theorem and Inference Reasoning for Project Managers
An explanation, with examples, of how a project manager would apply Bayes' Theorem.

John C. Goodpasture, PMP
Managing Principal, Square Peg Consulting LLC
www.sqpegconsulting.com | www.johngoodpasture.com
Copyright © 2010 by John C. Goodpasture

The Plausible Hypothesis

Project managers often face the task of evaluating the plausibility of an event happening during the course of a project that would affect project performance. Plausibility lies on the spectrum from uncertainty to risk, a spectrum that runs from possibility to plausible to probable to planable. In this context, project managers and their risk management brethren hypothesize the plausible from the range of possibilities.

Degree of Plausibility

It is helpful to think of probability as the "degree of plausibility of a hypothesis". By this definition, probability is still quantitatively scaled from 0 to 1. Numbers near 0 mean the hypothesis is very implausible even if it is a possibility; numbers near 1 mean the hypothesis is certain enough to be planned for, in terms of risk response or effects on project performance.

Probabilities are not data

Probabilities are not themselves data; they are not measurable artifacts. Thus, probabilities are subjective and open to many vagaries introduced by bias, opinion, and personal experience. By extension, plausibility is a subjective evaluation. For this reason, project managers are led toward "inference reasoning", also known as "inductive reasoning". [Many confuse probability with statistics. Statistics are data obtained by processing measured observations according to certain processing rules.]

Infer a property

We "infer" something we cannot directly observe by working backward through a supposition from observed data. That is, given observations of actual outcomes, we draw an inference as to what the situation, condition, or event must have been to cause those outcomes to occur.

For the case in hand, the plausible hypothesis, we surmise a hypothesis that we cannot observe directly; we can only observe actual outcomes. For example, we might hypothesize that a coin is not fair. We cannot "observe" an unfair coin [unless it has two heads or two tails]; we can only observe the outcomes of testing the coin for fairness.

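To make the coin example concrete, here is a minimal sketch in Python (not part of the original text) of inferring the unobservable property, an unfair coin, from observed outcomes. The 0.5 prior, the 0.8 head rate assumed for an unfair coin, and the 14-heads-in-20-flips observation are all illustrative assumptions.

    from math import comb

    def likelihood(p_heads, heads, flips):
        # P(observing this many heads | a coin with the given head probability)
        return comb(flips, heads) * p_heads**heads * (1 - p_heads)**(flips - heads)

    prior_unfair = 0.5        # a priori degree of plausibility of "coin is unfair"
    p_heads_if_unfair = 0.8   # assumed head rate of an unfair coin
    heads, flips = 14, 20     # posterior observations of actual outcomes

    like_unfair = likelihood(p_heads_if_unfair, heads, flips)
    like_fair = likelihood(0.5, heads, flips)

    # Bayes' Theorem: P(unfair | data) = P(data | unfair) x P(unfair) / P(data)
    evidence = like_unfair * prior_unfair + like_fair * (1 - prior_unfair)
    posterior_unfair = like_unfair * prior_unfair / evidence
    print(f"P(unfair | {heads} heads in {flips} flips) = {posterior_unfair:.3f}")

With these assumed numbers the posterior comes out to roughly 0.75: the observations raise the degree of plausibility of the "unfair" hypothesis above its prior.
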
About timing

When making an inference there are two time frames involved:

  • Posterior: the time after estimates are made, when observations are taken of actual outcomes; we call this the posterior time.
  • A priori: the previous, or prior, period, when we estimated probabilities based on estimates or subjective factors.

Reasoning forward in time, as in a priori estimates, is deductive; reasoning backward in time, as in posterior analysis, is inductive and inferential.

In the example of the coin, the a priori estimate, a deduction, was that the coin is not fair. The posterior data observations either confirm this hypothesis is TRUE or not. From the confirmation, we draw an inference about the coin.

In short, what we observe may differ from what we expect. This may occur because effects, events, and conditions may influence outcomes. Thus, when making an inference, these effects must be accounted for, or else we will draw the inference incorrectly.

Hypothesis and inference

Putting it together: in the a priori timeframe we hypothesize a possible event and estimate its plausibility. Then, in the posterior timeframe, we make observations of actual outcomes. The outcomes may be different than hypothesized. We try to draw an inference about why we observe what we do, and we estimate what adjustments need to be made to the a priori estimates so that they are more accurate next time.

Thomas Bayes Theorizes

An eighteenth-century English mathematician by the name of Thomas Bayes was among the first to think about the plausible hypothesis problem. In doing so, he more or less invented a different definition of probability, one different from the prevailing conventional definition based on chance. Bayes posited: probability is the degree to which the truth of a situation, as determined by observation, varies from our expectation for that situation. You probably recognize Bayes' idea as the plausibility definition of probability in slightly different terms.

Bayes was curious about the variance between truth and expectation. To assuage his curiosity, he worked out the mathematical rules for relating the a priori probabilities of a hypothesis, posterior observations, and effects [conditions, events, or influences] that would impact the a priori estimates in a way that explained the posterior observations. Today, this is usually framed as conditional probability, wherein the probability of one event is dependent upon, or conditioned by, the probability of another event. The outcome of his investigations was the formulation of Bayes' Theorem.

Bayes' Theorem defined

Bayes' Theorem expresses a relationship between a hypothesis and a condition [event, or circumstance] that influences the hypothesis. In the examples that follow, the hypothesis is labeled A, and the influencing condition is labeled B. The theorem uses a construct of the form "A | B", meaning "A given the presence of B", or "A given B". The general formulation of his rule is:

    P( A | B ) = P( B | A ) x P( A ) / P( B )

where the posterior result, A | B, may be a bit different from our expectation. Note that A depends on B, but B does not depend on A.

For project management purposes, it is enough to understand that the left side of the formula is the posterior outcome: the hypothesis A modified by the presence of B. On the right side of the formula, P( B | A ) is the "likelihood" of B being TRUE at the same time A is TRUE. Multiplying the likelihood by P( A ) then gives us the probability of A and B being TRUE at the same time, yielding this equality that will come in handy later:

    P( B | A ) x P( A ) = P( A and B )

Finally, on the right side, dividing by P( B ) normalizes the probability of A and B being TRUE at the same time by the probability that B is actually TRUE.

Some identities

Rewrite the equation above and note the symmetry:

  • P( A ) = P( A and B ) / P( B | A )
  • P( B ) x P( A | B ) = P( B | A ) x P( A )

And with a little reasoning, you can also write:

  • P( A and B ) = P( B and A )

These identities will be used when we form a Bayes' Grid to evaluate project situations.

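As a quick numeric check of the theorem and the identities above, the following sketch uses illustrative values (P(A) = 0.30, P(B | A) = 0.90, and P(B) = 0.45 are assumptions, not from the text):

    p_a = 0.30           # P(A)
    p_b_given_a = 0.90   # P(B | A)
    p_b = 0.45           # P(B)

    p_a_given_b = p_b_given_a * p_a / p_b   # Bayes' Theorem: P(A | B) = 0.60
    p_a_and_b = p_b_given_a * p_a           # P(A and B) = P(B | A) x P(A) = 0.27

    # Symmetry identity: P(B) x P(A | B) = P(B | A) x P(A)
    assert abs(p_b * p_a_given_b - p_b_given_a * p_a) < 1e-12
    print(f"P(A | B) = {p_a_given_b:.2f}, P(A and B) = {p_a_and_b:.3f}")
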
An example

The set-up

Let us define an "event space" A as having event A~ and the counter-event A^. The presence of A^ means A~ did not occur. Similarly, we define an event space for B in the same way.

To put it into a project context, let's say that A~ is a passed test, and A^ is the same test failed. Let's define B~ as the influencing condition being present for the test, and B^ as the influencing condition being absent. If the test is outdoors, B could be some aspect of the weather. Presumably A is affected by B, but there is some possibility that A could pass even without B. Of course, B (the weather) is not affected by A, the project test.

As project managers we would like to know how likely it is that a test will pass; that is, we want to know P( A~ ). But we cannot observe this directly, because B~ or B^ is present and influences the test results. Thus, we can only draw an inference about A~ from observations of A in the presence of B. However, there is a tool that can help; it is called a Bayes' Grid.

Bayes' Grid

To employ Bayes' Theorem to find P( A~ ), we form a grid of A and B where we can put down some of the observable data about A and B, and then calculate the other information not available from observations.

The grid below has its cells labeled with the elements from Bayes' Theorem, with the weather in the two vertical columns and test performance in the two horizontal rows:

                    B~ (present)    B^ (absent)     Row sum
    A~ (pass)       P(A~ and B~)    P(A~ and B^)    P(A~ | B)
    A^ (fail)       P(A^ and B~)    P(A^ and B^)    P(A^ | B)
    Column sum      P(B~)           P(B^)           1.0

The test results (A) are conditioned on the weather (B) in this example. P( A~ | B ) is read as "probability of a passing test given any condition of the weather"; in other words, it is the marginal probability of a passing test. The other cells are read similarly.

The cross points in the grid, the four interior cells, are probability intersections: "A~ and B~" in the upper left is the probability of a successful test and the influencing condition present.

Since the interior cells represent the entire space of A and B, they must sum to 1. The grid must also sum consistently up and down and left and right: for instance, the top row of interior cells must sum to P( A~ | B ), and the left column of interior cells must sum to P( B~ ).

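The grid is straightforward to represent in code. This sketch (an illustration, not part of the original) stores the four interior cells and checks the sum rules just described; the demo values are arbitrary:

    def bayes_grid_margins(pass_and_present, pass_and_absent,
                           fail_and_present, fail_and_absent):
        # Margins are the row and column sums of the four interior (joint) cells.
        p_pass = pass_and_present + pass_and_absent       # P(A~ | B), top row
        p_fail = fail_and_present + fail_and_absent       # P(A^ | B), bottom row
        p_present = pass_and_present + fail_and_present   # P(B~), left column
        p_absent = pass_and_absent + fail_and_absent      # P(B^), right column
        # The four interior cells cover the whole event space:
        assert abs(p_pass + p_fail - 1.0) < 1e-9
        return p_pass, p_fail, p_present, p_absent

    # Arbitrary demo values that sum to 1:
    print(bayes_grid_margins(0.2, 0.3, 0.4, 0.1))  # (0.5, 0.5, 0.6, 0.4)
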
Applying observations to the grid

Next we run some tests and write down our observations. Because there are two variables, A and B, we need two sets of independent observations to solve all the relationships.

First observation: we observe that the probability of passing a test under good conditions of the weather, B~, is 75%; that is, P( A~ | B~ ) = 0.75. But since we know the weather has some influence, we also know that 75% is not P( A~ ).

B, on the other hand, is a set of conditions, like the weather, that we can independently measure and estimate. Let's say that in this example the probability of B~, good weather, being present is 65%. Note: the statistics of B are not the second observation we need, because the observation we want is a posterior interaction between A and B.

Here is the grid as we know it from what we have observed about B:

                    B~      B^      Row sum
    A~ (pass)       ?       ?       ?
    A^ (fail)       ?       ?       ?
    Column sum      0.65    0.35    1.0

We can calculate some of the cells from Bayes' Theorem and the first A | B observation:

    P( A~ | B~ ) = 0.75 = P( A~ and B~ ) / P( B~ )
    P( A~ | B~ ) = 0.75 = P( A~ and B~ ) / 0.65

Solving for P( A~ and B~ ):

    0.65 x 0.75 = 0.4875 = P( A~ and B~ )

We then solve for the other interior cell in the first column, which must bring the column sum to 0.65: P( A^ and B~ ) = 0.65 - 0.4875 = 0.1625. [We could also use the equation P( A^ | B~ ) = 0.25 = P( A^ and B~ ) / 0.65.]

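In code, the first observation and the independently estimated weather statistic fill in the left column of the grid (the numbers are the ones from the example above):

    p_pass_given_present = 0.75   # observed: P(A~ | B~)
    p_present = 0.65              # estimated: P(B~)

    # P(A~ and B~) = P(A~ | B~) x P(B~)
    p_pass_and_present = p_pass_given_present * p_present   # 0.4875
    # The left column must sum to P(B~):
    p_fail_and_present = p_present - p_pass_and_present     # 0.1625
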
Two Unknowns

Now we need to find the other values of the grid, and for this we will need a second independent observation. For convenience, the unknown cell P( A~ and B^ ) is labeled Y and the unknown row sum P( A~ | B ) is labeled X, to make it easier to write what we need to know:

                    B~        B^          Row sum
    A~ (pass)       0.4875    Y           X
    A^ (fail)       0.1625    0.35 - Y    1 - X
    Column sum      0.65      0.35        1.0

    Top row:    X = 0.4875 + Y
    Bottom row: 1 - X = 0.1625 + 0.35 - Y, which simplifies to X = 0.4875 + Y

So we have two unknowns and only one equation. We know Y is greater than 0 and less than 0.35, because the sum of the four interior cells must be 1.0. This means X is between 0.4875 and 0.8375, and 1 - X is between 0.5125 and 0.1625. Any value of Y that satisfies the equation with X would be a possible valid inference.

We could guess at the second equation by picking values for X and Y that satisfy it, but guessing carries no credibility. The best way to resolve this is with actual observations from the project outcomes. We already have an observation of test results when the weather is good. If we now take test measurements when the weather is bad, we have a second independent set of observations that fulfills P( A~ | B^ ).

Suppose we observe that P( A~ | B^ ) is 40%, meaning there is some test success even when the weather is bad. We can now calculate the Y value in the grid:

    P( A~ | B^ ) = P( A~ and B^ ) / P( B^ )

Rearranging the equation and filling in the known values:

    0.4 x 0.35 = P( A~ and B^ ) = 0.14

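Continuing the sketch, the second observation pins down Y, and X then follows from the top-row equation (the numbers are again those of the example):

    p_pass_given_absent = 0.40   # second observation: P(A~ | B^)
    p_absent = 0.35              # P(B^) = 1 - P(B~)

    # Y = P(A~ and B^) = P(A~ | B^) x P(B^)
    y = p_pass_given_absent * p_absent   # 0.14
    p_fail_and_absent = p_absent - y     # 0.35 - Y = 0.21

    # X = P(A~) from the top-row equation: X = 0.4875 + Y
    x = 0.4875 + y
    print(f"P(A~) = {x:.4f}")            # P(A~) = 0.6275, regardless of weather
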
Here is the completed grid:

                    B~        B^      Row sum
    A~ (pass)       0.4875    0.14    0.6275
    A^ (fail)       0.1625    0.21    0.3725
    Column sum      0.65      0.35    1.0

Take note that the interior cells add, top to bottom and left to right, to their respective margin cells. Take note also that the sum of all four interior cells is 1. This means that the entire event space is accounted for in the grid.

Hypothesis: A~

From the grid we now see that the value of the hypothesis, A~, regardless of the weather, is 0.6275. Our observations were 0.75 when the weather was good and 0.4 when the weather was bad. Our inference is that the underlying hypothesis is 0.6275.

Summary

Bayes' Theorem provides the project manager information, in the form of probabilities, about the performance of one project activity when it is conditioned upon the performance of another.

There are some prerequisites: A must depend on B, but B must be independent of A. And there must be two independent sets of observations of the posterior performance of the interaction of A and B.

Attributes not observed may be calculated using Bayes' Theorem; a Bayes' Grid provides assistance in the calculations.

+++++++++++++++++++

To read more: johngoodpasture.com | sqpegconsulting.com