Unpredictable/ Probabilistic : - The result is not unique; it may be any one of several possible outcomes.
Examples : -
(i) In tossing a coin, one is not sure whether a head or a tail will be obtained.
(ii) If a tube light has lasted for t hours, nothing can be said about its further life; it may fail to
function at any moment.
Trial & Event
Consider an experiment which, though repeated under essentially identical conditions, does not give unique results but may result in any one of several possible outcomes.
The experiment is known as a Trial & the outcomes are known as Events or Cases .
Throwing a die is a Trial & getting 1 (2,3,…,6) is an event.
Tossing a coin is a Trial & getting Head (H) or Tail (T) is an event.
Exhaustive Events : - The total number of possible outcomes in any trial.
In tossing a coin there are 2 exhaustive cases, head & tail.
In throwing a die, there are 6 exhaustive cases since any one of the 6 faces 1,2,…,6 may come uppermost.
Collectively Exhaustive Events
Experiment                                            Possible outcomes    Exhaustive no. of cases
Tossing an unbiased coin                              Head/ Tail           2
Throw of an unbiased cubic die                        1, 2, 3, 4, 5, 6     6
Drawing a card from a well-shuffled standard pack     Ace to King          52
Favorable Events/ Cases : - It is the number of outcomes which entail the happening of an event.
In throwing of 2 dice, the number of cases favorable to getting the sum 5 is:
(1,4), (4,1), (2,3), (3,2).
In drawing a card from a pack of cards the number of cases favorable to drawing an ace is 4, for drawing a spade is 13 & for drawing a red card is 26.
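These favorable-case counts are easy to verify by brute-force enumeration. The following Python sketch (variable names are illustrative) lists the cases favorable to getting the sum 5 with two dice:

```python
from itertools import product

# All 36 equally likely outcomes of throwing 2 dice
outcomes = list(product(range(1, 7), repeat=2))

# Cases favorable to getting the sum 5
favorable = [o for o in outcomes if sum(o) == 5]
print(favorable)       # [(1, 4), (2, 3), (3, 2), (4, 1)]
print(len(favorable))  # 4
```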
Independent Events : - Events are independent if the happening (or non-happening) of any one of them is not affected by supplementary knowledge concerning the occurrence of any number of the remaining events.
In tossing an unbiased coin the event of getting a head in the first toss is independent of getting a head in the second, third & subsequent throws.
Mutually exclusive Events : - If the happening of any one of the events precludes the happening of all the others.
In tossing a coin the events head & tail are mutually exclusive.
In throwing a die all the 6 faces numbered 1 to 6 are mutually exclusive since if any one of these faces comes, the possibility of others, in the same trial, is ruled out.
Mutually Exclusive Events
Experiment                                            Mutually exclusive outcomes
Tossing an unbiased coin                              Head/ Tail
Throw of an unbiased cubic die                        Occurrence of 1 or 2 or 3 or 4 or 5 or 6
Drawing a card from a well-shuffled standard pack     Card is a spade or heart; card is a diamond or club; card is a king or a queen
Equally likely Events : - Outcomes of a trial are said to be equally likely if, taking into consideration all the relevant evidence, there is no reason to expect one in preference to the others.
In tossing an unbiased coin or uniform coin, head or tail are equally likely events.
In throwing an unbiased die, all the 6 faces are equally likely to come.
Equally Likely Events
Experiment                                            Equally likely outcomes
Tossing an unbiased coin                              A Head is as likely to come up as a Tail
Throw of an unbiased cubic die                        Any number out of 1, 2, 3, 4, 5, 6 is equally likely to come up
Drawing a card from a well-shuffled standard pack     Any card out of the 52 is equally likely to come up
Probability : Probability of a given event is an expression of likelihood of occurrence of an event .
Probability is a number which ranges from 0 to 1.
Zero (0) for an event which cannot occur and 1 for an event which is certain to occur.
Importance of the concept of Probability
Probability models can be used for making predictions.
Probability theory facilitates the construction of econometric model.
It facilitates the managerial decisions on planning and control.
Types of Probability
There are 3 approaches to probability, namely:
The Classical or ‘a priori’ probability
The Statistical or Empirical probability
The Axiomatic probability
Mathematical/ Classical/ ‘a priori’ Probability
Basic assumption of classical approach is that the outcomes of a random experiment are “equally likely”.
According to Laplace, a French Mathematician:
“ Probability, is the ratio of the number of ‘favorable’ cases to the total number of equally likely cases”.
If the probability of occurrence of A is denoted by p(A), then by this definition, we have:
p = P(E) = (Number of favorable cases) / (Total number of equally likely cases) = m/n
The probability ‘ p ’ of the happening of an event is also known as the probability of success , & ‘ q ’, the probability of the non-happening of the event, as the probability of failure .
If P(E) = 1 , E is called a certain event &
if P(E) = 0 , E is called an impossible event
The probability of an event E is a number such that
0 ≤ P(E) ≤ 1 , & the sum of the probability that an event will occur & an event will not occur is equal to 1 .
i.e., p + q = 1
Classical probability is often called a priori probability because if one keeps using orderly examples of unbiased dice, fair coins, etc., one can state the answer in advance (a priori) without rolling a die, tossing a coin, etc.
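As an illustration of the classical definition, here is a small Python sketch (the helper name classical_probability is made up for this example) computing p and q for drawing an ace:

```python
# Classical (a priori) probability: p = m/n over equally likely cases
def classical_probability(m, n):
    """m favorable cases out of n equally likely cases."""
    return m / n

# Probability of drawing an ace from a well-shuffled pack of 52 cards
p = classical_probability(4, 52)   # probability of success
q = 1 - p                          # probability of failure
print(round(p, 4))                 # 0.0769
```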
Classical definition of probability is not very satisfactory because of the following reasons:
It fails when the number of possible outcomes of the experiment is infinite .
It is based on the cases which are “equally likely” and as such cannot be applied to experiments where the outcomes are not equally likely .
It may not be possible practically to enumerate all the possible outcomes of certain experiments and in such cases the method fails.
For example, it is inadequate for answering a question such as: what is the probability that a man aged 45 will die within the next year?
Here there are only 2 possible outcomes: the individual will die in the ensuing year, or he will live. The chance that he will die is of course much smaller than the chance that he will live.
But how much smaller?
Relative/ Statistical/ Empirical Probability
Probability of an event is determined objectively by repetitive empirical observations/ experiments . Probabilities are assigned a posteriori.
According to Von Mises “If an experiment is performed repeatedly under essentially homogeneous conditions and identical conditions, then the limiting value of the ratio of the number of times the event occurs to the number of trials, as the number of trials becomes indefinitely large, is called the probability of happening of the event, it being assumed that the limit is finite and unique”.
Example : - When a coin is tossed, what is the probability that the coin will turn heads?
Suppose the coin is tossed 50 times & falls heads 20 times; then the ratio 20/50 is used as an estimate of the probability of heads for this coin.
Symbolically, if in N trials an event E happens m times, then the probability ‘ p ’ of the happening of E is given by

p = P(E) = lim (m/N) as N → ∞

In this case, as the number of trials increases, the relative frequencies of the outcomes move closer to the true probabilities, and tend to them as the number of trials becomes indefinitely large.
Thus the empirical probability approaches the classical probability as the number of trials becomes indefinitely large.
Limitations of Statistical/ Empirical method
The empirical probability P(E) defined above can never be obtained in practice; we can only attempt a close estimate of P(E) by making N sufficiently large .
The experimental conditions may not remain essentially homogeneous and identical in a large number of repetitions of the experiment.
The relative frequency m/N may not attain a unique value, no matter how large N may be.
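The limiting-frequency idea can be illustrated with a quick simulation. This is only a sketch (the seed and trial counts are arbitrary), showing the relative frequency m/N of heads settling near 0.5 as N grows:

```python
import random

random.seed(42)  # arbitrary seed, for a reproducible illustration

def empirical_p_heads(n_trials):
    """Relative frequency m/N of heads in n_trials simulated fair tosses."""
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    return heads / n_trials

for n in (50, 1000, 100000):
    print(n, empirical_p_heads(n))  # the estimate drifts toward 0.5 as n grows
```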
The Axiomatic Approach
The modern theory of probability is based on the axiomatic approach introduced by the Russian mathematician A. N. Kolmogorov in the 1930s.
The classical approach restricts the calculation of probability to essentially equally likely & mutually exclusive events.
Empirical approach requires that every question be examined experimentally under identical conditions , over a long period of time considering repeated observations.
Axiomatic approach is largely free from the inadequacies of both the classical & empirical approaches.
Given a sample space of a random experiment, the probability of the occurrence of any event A is defined as a set function P(A) satisfying the following axioms.
Axiom 1: - P(A) is defined, is real and non-negative i.e.,
P(A) ≥ 0 (Axiom of non-negativity)
Axiom 2: - P(S) = 1 (Axiom of certainty)
Axiom 3: - If A1, A2, …., An is any finite or infinite sequence of disjoint events of S, then
P(A1 U A2 U … U An) = P(A1) + P(A2) + … + P(An), i.e., P(⋃i Ai) = Σi P(Ai)   (Axiom of additivity)
The Objective and Subjective Approach
The objective approach to probability is arrived at on an a priori (classical) basis or an empirical basis.
It is given by the ratio of frequency of an outcome to the total number of possible outcomes.
Subjective approach to probability is not concerned with the relative or expected frequency of an outcome.
It is concerned with the strength of a decision maker's belief that an outcome will occur.
It is particularly oriented towards decision-making situations.
Theorems of Probability
There are 2 important theorems of probability which are as follows:
The Addition Theorem and
The Multiplication Theorem
Addition theorem when events are Mutually Exclusive
Definition : - It states that if 2 events A and B are mutually exclusive then the probability of the occurrence of either A or B is the sum of the individual probability of A and B.
The theorem can be extended to three or more mutually exclusive events. Thus,
P(A or B) or P(A U B) = P(A) + P(B)
P(A or B or C) = P(A) + P(B) + P(C)
Addition theorem when events are not Mutually Exclusive (Overlapping or Intersection Events)
Definition : - It states that if 2 events A and B are not mutually exclusive then the probability of the occurrence of either A or B is the sum of the individual probability of A and B minus the probability of occurrence of both A and B.
P(A or B) or P(A U B) = P(A) + P(B) – P(A ∩ B)
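Both forms of the addition theorem can be checked numerically. In this sketch the overlapping case uses "king or spade", whose intersection is the single king of spades:

```python
# Mutually exclusive: a card cannot be both a king and a queen
p_king, p_queen = 4/52, 4/52
p_king_or_queen = p_king + p_queen
print(round(p_king_or_queen, 4))   # 0.1538

# Not mutually exclusive: "king" and "spade" overlap in the king of spades
p_spade, p_king_and_spade = 13/52, 1/52
p_king_or_spade = p_king + p_spade - p_king_and_spade
print(round(p_king_or_spade, 4))   # 0.3077
```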
Multiplication theorem when events are Independent
Definition : - It states that if 2 events A and B are independent, then the probability of the occurrence of both of them (A & B) is the product of the individual probabilities of A and B.
Probability of happening of both the events:
Theorem can be extended to 3 or more independent events. Thus,
P(A and B) or P(A ∩ B) = P(A) x P(B)
P(A, B and C) or P(A ∩ B ∩ C) = P(A) x P(B) x P(C)
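A minimal numerical check of the multiplication theorem, using three independent tosses of a fair coin:

```python
# Multiplication theorem for independent events:
# P(H on toss 1 and toss 2 and toss 3) = P(H) x P(H) x P(H)
p_head = 0.5
p_three_heads = p_head * p_head * p_head
print(p_three_heads)  # 0.125
```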
How to calculate probability in case of Dependent Events
Probability of occurrence of at least A or B
  - when events are mutually exclusive:        P(A U B) = P(A) + P(B)
  - when events are not mutually exclusive:    P(A U B) = P(A) + P(B) – P(A ∩ B)
Probability of occurrence of both A & B:       P(A ∩ B) = P(A) + P(B) – P(A U B)
Probability of occurrence of A & not B:        P(A ∩ B′) = P(A) – P(A ∩ B)
Probability of occurrence of B & not A:        P(A′ ∩ B) = P(B) – P(A ∩ B)
Probability of non-occurrence of both A & B:   P(A′ ∩ B′) = 1 – P(A U B)
Probability of non-occurrence of at least A or B:  P(A′ U B′) = 1 – P(A ∩ B)
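Because these identities hold for any pair of events, they can be verified by counting on a concrete sample space. A sketch using two dice, with A = "sum is even" and B = "first die shows 6" (events chosen arbitrarily for illustration):

```python
from itertools import product

# Sample space for two dice, and two arbitrary events on it
space = set(product(range(1, 7), repeat=2))
A = {o for o in space if sum(o) % 2 == 0}   # sum is even
B = {o for o in space if o[0] == 6}         # first die shows 6

P = lambda event: len(event) / len(space)

# P(A u B) = P(A) + P(B) - P(A n B)
assert abs(P(A | B) - (P(A) + P(B) - P(A & B))) < 1e-12
# P(A n B') = P(A) - P(A n B)
assert abs(P(A - B) - (P(A) - P(A & B))) < 1e-12
# P(A' n B') = 1 - P(A u B)
assert abs(P(space - (A | B)) - (1 - P(A | B))) < 1e-12
print("all identities hold")
```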
How to calculate probability in case of Independent Events
Probability of occurrence of both A & B:              P(A ∩ B) = P(A) x P(B)
Probability of non-occurrence of both A & B:          P(A′ ∩ B′) = P(A′) x P(B′)
Probability of occurrence of A & not B:               P(A ∩ B′) = P(A) x P(B′)
Probability of occurrence of B & not A:               P(A′ ∩ B) = P(A′) x P(B)
Probability of occurrence of at least one event:      P(A U B) = 1 – P(A′ ∩ B′) = 1 – [P(A′) x P(B′)]
Probability of non-occurrence of at least one event:  P(A′ U B′) = 1 – P(A ∩ B) = 1 – [P(A) x P(B)]
Probability of occurrence of only one event:          P(A ∩ B′) + P(A′ ∩ B) = [P(A) x P(B′)] + [P(A′) x P(B)]
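With independence, everything reduces to products of marginals. A quick numeric check (the values P(A) = 0.6 and P(B) = 0.3 are arbitrary):

```python
# Assumed marginal probabilities of two independent events
pA, pB = 0.6, 0.3
pA_not, pB_not = 1 - pA, 1 - pB   # complements "not A", "not B"

both       = pA * pB                     # P(A n B)
neither    = pA_not * pB_not             # P(A' n B')
at_least_1 = 1 - neither                 # P(A u B)
only_one   = pA * pB_not + pA_not * pB   # exactly one of A, B occurs

print(both, neither, at_least_1, only_one)
```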
An inspector of the Alaska Pipeline has the task of comparing the reliability of 2 pumping stations. Each station is susceptible to 2 kinds of failure: Pump failure & leakage. When either (or both) occur, the station must be shut down. The data at hand indicate that the following probabilities prevail:
Station P(Pump failure) P(Leakage) P(Both)
1 0.07 0.10 0
2 0.09 0.12 0.06
Which station has the higher probability of being shut down?
P(Pump failure or Leakage)
= P(Pump Failure) + P(Leakage Failure)
– P(Pump Failure ∩ Leakage Failure)
Station 1: 0.07 + 0.10 – 0 = 0.17
Station 2: 0.09 + 0.12 – 0.06 = 0.15
Thus, station 1 has the higher probability of being shut down.
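The same comparison can be sketched in Python; the dictionary layout mirrors the table above:

```python
# P(shut down) = P(pump failure) + P(leakage) - P(both)
stations = {
    1: {"pump": 0.07, "leak": 0.10, "both": 0.00},
    2: {"pump": 0.09, "leak": 0.12, "both": 0.06},
}

shutdown = {}
for sid, p in stations.items():
    shutdown[sid] = round(p["pump"] + p["leak"] - p["both"], 2)

print(shutdown)  # {1: 0.17, 2: 0.15} -> station 1 is more likely to shut down
```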
Probabilities under conditions of Statistical Independence
Statistically Independent Events: - The occurrence of one event has no effect on the probability of the occurrence of any other event.
Most managers who use probabilities are concerned with 2 conditions.
The case where one event or another will occur.
The situation where 2 or more events will both occur.
There are 3 types of probabilities under statistical independence.
Marginal/ Unconditional Probability: - A single probability where only one event can take place.
Joint probability: - Probability of 2 or more events occurring together or in succession.
Conditional probability: - Probability that a second event (B) will occur if a first event (A) has already happened.
Example: Marginal Probability - Statistical Independence
A single probability where only one event can take place.
Example 1 : - On each individual toss of a biased or unfair coin, P(H) = 0.90 & P(T) = 0.10. The outcomes of several tosses of this coin are statistically independent events, even though the coin is biased.
Example 2 : - 50 students of a school drew lottery to see which student would get a free trip to the Carnival at Goa. Any one of the students can calculate his/ her chances of winning as:
P(Winning) = 1/50 = 0.02
Marginal Probability of an Event P(A) = P(A)
The probability of 2 or more independent events occurring together or in succession is the product of their marginal probabilities.
Example : - What is the probability of heads on 2 successive tosses?
P(H1H2) = P(H1) * P(H2)
= 0.5 * 0.5 = 0.25
The probability of heads on 2 successive tosses is 0.25, since the probability of any outcome is not affected by any preceding outcome.
Example: Joint Probability - Statistical Independence
Joint Probability of 2 Independent Events: P(AB) = P(A) x P(B)
We can make the probabilities of events even more explicit using a Probabilistic Tree.
Probability tree for 3 tosses of a fair coin:
1 Toss:  H1 = 0.5;  T1 = 0.5
2 Toss:  H1H2, H1T2, T1H2, T1T2 = 0.25 each
3 Toss:  H1H2H3, H1H2T3, H1T2H3, H1T2T3, T1H2H3, T1H2T3, T1T2H3, T1T2T3 = 0.125 each
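The tree can be reproduced by enumeration; each branch probability is the product of 0.5 per toss:

```python
from itertools import product

# All 8 equally likely head/tail sequences for 3 independent fair tosses
p_branch = {"".join(seq): 0.5 ** 3 for seq in product("HT", repeat=3)}

print(p_branch["HHH"])         # 0.125
print(sum(p_branch.values()))  # 1.0 -- the branch probabilities sum to 1
```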
For statistically independent events, conditional probability of event B given that event A has occurred is simply the probability of event B.
Example : - What is the probability that the second toss of a fair coin will result in heads, given that heads resulted on the first toss?
P(H2|H1) = 0.5
For 2 independent events, the result of the first toss has absolutely no effect on the result of the second toss.
Example: Conditional Probability - Statistical Independence
Conditional Probability for 2 Independent Events: P(B|A) = P(B)
Probabilities under conditions of Statistical Dependence
Statistical Dependence exists when the probability of some event is dependent on or affected by the occurrence of some other event.
The types of probabilities under statistical dependence are: Conditional probability, Joint probability, & Marginal probability.
Assume that a box contains 10 balls: 4 are colored (3 of them dotted & 1 striped) and the remaining 6 are gray (uncolored).
The marginal probability of a simple event can be computed by summing up the probabilities of all the joint events in which the simple event occurs.
Compute the marginal probability of the event colored.
It can be computed by summing up the probabilities of the two joint events in which colored occurred:
P(C) = P(CD) + P(CS)
     = 0.3 + 0.1 = 0.4
Example: Marginal Probability - Statistically Dependent
Joint probability under conditions of statistical dependence is given by
What is the probability that this ball is dotted and colored?
Probability of colored & dotted balls =
P(DC) = P(D|C) * P(C)
      = (0.3/0.4) * 0.4
      = 0.3
Example: Joint Probability - Statistically Dependent
Joint probability for Statistically Dependent Events: P(BA) = P(B|A) x P(A)
If A & B are the 2 events, then,
What is the probability that this ball is dotted, given that it is colored?
The probability of drawing any one of the ball from this box is 0.1 (1/10) [Total no. of balls in the box = 10].
Example: Conditional Probability - Statistically Dependent
Conditional probability for Statistically Dependent Events: P(B|A) = P(BA) / P(A)
We know that there are 4 colored balls, 3 of which are dotted & one of which is striped.
P(D|C) = P(DC) / P(C) = 0.3 / 0.4 = 0.75
where P(DC) = probability of a colored & dotted ball (3 out of 10 = 0.3) and P(C) = 4 out of 10 = 0.4.
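The box example can be checked by representing the balls explicitly. Only the colored balls' patterns matter for these probabilities, so the gray balls' patterns are left unspecified (an assumption of this sketch):

```python
# 10 balls: 3 colored & dotted, 1 colored & striped, 6 gray (pattern ignored)
balls = ([("colored", "dotted")] * 3
         + [("colored", "striped")]
         + [("gray", None)] * 6)

n = len(balls)                                          # 10 balls in the box
n_C = sum(b[0] == "colored" for b in balls)             # 4 colored balls
n_DC = sum(b == ("colored", "dotted") for b in balls)   # 3 colored & dotted

p_C = n_C / n             # P(C)  = 0.4
p_DC = n_DC / n           # P(DC) = 0.3
p_D_given_C = n_DC / n_C  # P(D|C) = P(DC)/P(C) = 0.75

print(p_C, p_DC, p_D_given_C)  # 0.4 0.3 0.75
```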
Revising Prior Estimates of Probabilities: Bayes’ Theorem
A very important & useful application of conditional probability is the computation of unknown probabilities , based on past data or information.
When an event occurs through one of the various mutually disjoint events, then the conditional probability that this event has occurred due to a particular reason or event is termed as Inverse Probability or Posterior Probability .
Has wide ranging applications in Business & its Management.
Since it is a concept of revision of probability based on some additional information, it shows the improvement towards certainty level of the event .
Example 1 : - If a manager of a boutique finds that most of the purple & white jackets that she thought would sell so well are hanging on the rack, she must revise her prior probabilities & order a different color combination or have a sale.
Certain probabilities were altered after the people got additional information. New probabilities are known as revised, or Posterior probabilities .
If an event A can occur only in conjunction with n mutually exclusive & exhaustive events B 1 , B 2 , …, B n , and if A actually happens, then, provided the conditional probabilities P(A | B 1 ), P(A | B 2 ), …, P(A | B n ) and the marginal (prior) probabilities P(B i ) are known, the posterior probability of event B i given that event A has occurred is given by:
                 P(A | B i ) . P(B i )
P(B i | A) = ---------------------------
              Σ j P(A | B j ) . P(B j )
Remarks : -
The probabilities P(B 1 ), P(B 2 ), … , P(B n ) are termed as the ‘ a priori probabilities ’ because they exist before we gain any information from the experiment itself.
The probabilities P(A | Bi), i=1,2,…,n are called ‘ Likelihoods ’ because they indicate how likely the event A under consideration is to occur, given each & every a priori probability.
The probabilities P(Bi | A), i=1, 2, …,n are called ‘ Posterior probabilities ’ because they are determined after the results of the experiment are known.
In a bolt factory, machines A, B, & C manufacture respectively 25%, 35%, & 40% of the total output. Of their output, 5%, 4%, & 2% respectively are defective bolts. A bolt is drawn at random from the product & is found to be defective.
What are the probabilities that it was manufactured by
machines A, B & C?
Let E1, E2, E3 denote the events manufactured by machines A, B & C respectively.
Let E denote the event of its being defective.
P(E1) = 0.25; P(E2) = 0.35; P(E3) = 0.40;
Probability of drawing a defective bolt manufactured by machine A is P(E|E1) = 0.05
Similarly P(E|E2) = 0.04; P(E|E3) = 0.02
The probability that the defective bolt selected at random was manufactured by machine A is given by
P(E1|E) = P(E|E1) P(E1) / [P(E|E1) P(E1) + P(E|E2) P(E2) + P(E|E3) P(E3)]
        = (0.05 x 0.25) / (0.05 x 0.25 + 0.04 x 0.35 + 0.02 x 0.40)
        = 0.0125 / 0.0345 ≈ 0.36
Similarly, P(E2|E) = 0.0140 / 0.0345 ≈ 0.41 and P(E3|E) = 0.0080 / 0.0345 ≈ 0.23.
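The Bayes computation for all three machines can be sketched in Python; the structure follows the posterior formula P(Bi|A) = P(A|Bi) P(Bi) / Σj P(A|Bj) P(Bj):

```python
# Priors P(machine) and likelihoods P(defective | machine)
priors = {"A": 0.25, "B": 0.35, "C": 0.40}
likelihoods = {"A": 0.05, "B": 0.04, "C": 0.02}

# Total probability of drawing a defective bolt (the denominator)
p_defective = sum(priors[m] * likelihoods[m] for m in priors)

# Posterior probability that each machine produced the defective bolt
posteriors = {m: priors[m] * likelihoods[m] / p_defective for m in priors}

for m in posteriors:
    print(m, round(posteriors[m], 4))  # A ~ 0.36, B ~ 0.41, C ~ 0.23
```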