# Ch08

1. Decision Theory (Professor Ahmadi)
2. Learning Objectives
    - Structuring the decision problem and decision trees
    - Types of decision-making environments:
      - decision making under uncertainty, when probabilities are not known
      - decision making under risk, when probabilities are known
    - Expected value of perfect information
    - Decision analysis with sample information
    - Developing a decision strategy
    - Expected value of sample information
3. Types of Decision-Making Environments
    - Type 1: Decision making under certainty. The decision maker knows for sure (that is, with certainty) the outcome or consequence of every decision alternative.
    - Type 2: Decision making under uncertainty. The decision maker has no information at all about the various outcomes or states of nature.
    - Type 3: Decision making under risk. The decision maker has some knowledge of the probability of occurrence of each outcome or state of nature.
4. Decision Trees
    - A decision tree is a chronological representation of the decision problem.
    - Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.
    - The branches leaving each round node represent the different states of nature; the branches leaving each square node represent the different decision alternatives.
    - At the end of each limb of the tree is the payoff attained from the series of branches making up that limb.
5. Decision Making Under Uncertainty
    If the decision maker does not know with certainty which state of nature will occur, he or she is said to be making a decision under uncertainty. The five commonly used criteria for decision making under uncertainty are:
    1. the optimistic approach (maximax)
    2. the conservative approach (maximin)
    3. the minimax regret approach
    4. the equally likely (Laplace) criterion
    5. the criterion of realism with α (Hurwicz criterion)
6. Optimistic Approach
    - The optimistic approach would be used by an optimistic decision maker.
    - The decision with the largest possible payoff is chosen.
    - If the payoff table is in terms of costs, the decision with the lowest cost is chosen.
7. Conservative Approach
    - The conservative approach would be used by a conservative decision maker.
    - For each decision, the minimum payoff is listed, and the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)
    - If the payoffs are in terms of costs, the maximum cost is determined for each decision, and the decision corresponding to the minimum of these maximum costs is selected. (Hence, the maximum possible cost is minimized.)
8. Minimax Regret Approach
    - The minimax regret approach requires the construction of a regret table, or opportunity loss table.
    - This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.
    - Then, using this regret table, the maximum regret for each possible decision is listed.
    - The decision chosen is the one corresponding to the minimum of the maximum regrets.
9. Example: Marketing Strategy
    Consider the following problem with two decision alternatives (d1 and d2) and two states of nature, s1 (market receptive) and s2 (market unfavorable), with the following payoff table representing profits (in $1000s):

    | Decisions | s1 | s2 |
    |-----------|----|----|
    | d1        | 20 | 6  |
    | d2        | 25 | 3  |
10. Example: Optimistic Approach
    An optimistic decision maker would use the optimistic approach. All we really need to do is choose the decision that has the largest single value in the payoff table. This largest value is 25, so the optimal decision is d2.

    | Decision | Maximum payoff |
    |----------|----------------|
    | d1       | 20             |
    | d2       | 25 (maximum: choose d2) |
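The maximax rule applied to this payoff table can be sketched in a few lines of Python (variable names are illustrative):

```python
# Payoff table from the example: profits in $1000s, one row per decision,
# one column per state of nature (s1 = receptive, s2 = unfavorable).
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Maximax: choose the decision whose best-case payoff is largest.
best = max(payoffs, key=lambda d: max(payoffs[d]))
print(best, max(payoffs[best]))  # d2 25
```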
11. Example: Conservative Approach
    A conservative decision maker would use the conservative approach: list the minimum payoff for each decision, then choose the decision with the maximum of these minimum payoffs.

    | Decision | Minimum payoff |
    |----------|----------------|
    | d1       | 6 (maximum: choose d1) |
    | d2       | 3              |
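The maximin rule differs from maximax only in the inner aggregation; a minimal Python sketch (names are illustrative):

```python
# Payoff table from the example (profits in $1000s).
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Maximin: choose the decision whose worst-case payoff is largest.
best = max(payoffs, key=lambda d: min(payoffs[d]))
print(best, min(payoffs[best]))  # d1 6
```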
12. Example: Minimax Regret Approach
    For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. The resulting regret table is:

    | Decision | s1 | s2 | Maximum regret |
    |----------|----|----|----------------|
    | d1       | 5  | 0  | 5              |
    | d2       | 0  | 3  | 3 (minimum)    |

    Then select the decision with the minimum of the maximum regrets: d2.
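The two-step construction (regret table, then minimax) can be sketched as follows (a sketch with illustrative names, not the slides' own code):

```python
# Payoff table from the example (profits in $1000s).
payoffs = {"d1": [20, 6], "d2": [25, 3]}
states = range(2)

# Best payoff attainable under each state of nature (column maxima).
col_max = [max(payoffs[d][j] for d in payoffs) for j in states]

# Regret = best payoff for that state minus the payoff actually received.
regret = {d: [col_max[j] - payoffs[d][j] for j in states] for d in payoffs}

# Minimax regret: minimize the maximum regret.
best = min(regret, key=lambda d: max(regret[d]))
print(regret, best)  # {'d1': [5, 0], 'd2': [0, 3]} d2
```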
13. Example: Equally Likely (Laplace) Criterion
    The equally likely, or Laplace, criterion finds the decision alternative with the highest average payoff.
    - First, calculate the average payoff for every alternative.
    - Then pick the alternative with the maximum average payoff.

    Average for d1 = (20 + 6)/2 = 13
    Average for d2 = (25 + 3)/2 = 14
    Thus, d2 is selected.
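The Laplace criterion amounts to an unweighted average; a minimal sketch (names are illustrative):

```python
# Payoff table from the example (profits in $1000s).
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Laplace: treat all states as equally likely and average the payoffs.
avg = {d: sum(v) / len(v) for d, v in payoffs.items()}
best = max(avg, key=avg.get)
print(avg, best)  # {'d1': 13.0, 'd2': 14.0} d2
```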
14. Example: Criterion of Realism (Hurwicz)
    Often called the weighted average, the criterion of realism (or Hurwicz) decision criterion is a compromise between an optimistic and a pessimistic decision.
    - First, select a coefficient of realism, α, with a value between 0 and 1. When α is close to 1, the decision maker is optimistic about the future; when α is close to 0, the decision maker is pessimistic about the future.
    - Payoff = α × (maximum payoff) + (1 − α) × (minimum payoff)

    In our example, let α = 0.8:
    Payoff for d1 = 0.8 × 20 + 0.2 × 6 = 17.2
    Payoff for d2 = 0.8 × 25 + 0.2 × 3 = 20.6
    Thus, select d2.
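The weighted-average formula translates directly; a minimal sketch with α = 0.8 as in the example (names are illustrative):

```python
# Payoff table from the example (profits in $1000s).
payoffs = {"d1": [20, 6], "d2": [25, 3]}
alpha = 0.8  # coefficient of realism, as chosen in the example

# Hurwicz: weighted average of the best and worst payoff per decision.
h = {d: alpha * max(v) + (1 - alpha) * min(v) for d, v in payoffs.items()}
best = max(h, key=h.get)
print(h, best)  # d1 ≈ 17.2, d2 ≈ 20.6, so choose d2
```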
15. Decision Making with Probabilities
    Expected value approach:
    - If probabilistic information regarding the states of nature is available, one may use the expected monetary value (EMV) approach, also known as the expected value (EV) approach.
    - Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
    - The decision yielding the best expected return is chosen.
16. Expected Value of a Decision Alternative
    The expected value of a decision alternative is the sum of the weighted payoffs for that alternative. The expected value (EV) of decision alternative d_i is defined as:

    EV(d_i) = Σ_{j=1}^{N} P(s_j) V_ij

    where:
    - N = the number of states of nature
    - P(s_j) = the probability of state of nature s_j
    - V_ij = the payoff corresponding to decision alternative d_i and state of nature s_j
17. Example: Marketing Strategy (Expected Value Approach)
    Refer to the previous problem. Assume the probability of the market being receptive is known to be 0.75. Use the expected monetary value criterion to determine the optimal decision.
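A minimal sketch of the calculation this slide asks for, using the example's payoff table with P(s1) = 0.75 and P(s2) = 0.25 (names are illustrative):

```python
# EV(d_i) = sum over j of P(s_j) * V_ij, with payoffs in $1000s.
payoffs = {"d1": [20, 6], "d2": [25, 3]}
prob = [0.75, 0.25]  # P(s1 = receptive), P(s2 = unfavorable)

ev = {d: sum(p * v for p, v in zip(prob, payoffs[d])) for d in payoffs}
best = max(ev, key=ev.get)
print(ev, best)  # EV(d1) = 16.5, EV(d2) = 19.5, so choose d2
```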
18. Expected Value of Perfect Information
    - Frequently, information is available that can improve the probability estimates for the states of nature.
    - The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.
    - The EVPI provides an upper bound on the expected value of any sample or survey information.
19. Expected Value of Perfect Information
    EVPI calculation:
    - Step 1: Determine the optimal return corresponding to each state of nature.
    - Step 2: Compute the expected value of these optimal returns.
    - Step 3: Subtract the EV of the optimal decision from the amount determined in step 2.
20. Example: Marketing Strategy (Expected Value of Perfect Information)
    Calculate the expected value of the best action for each state of nature, then subtract the EV of the optimal decision:

    EVPI = 0.75(25,000) + 0.25(6,000) − 19,500 = $750
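The three-step EVPI calculation can be sketched for the same example (a sketch with illustrative names, not the slides' own code):

```python
# Payoff table ($1000s) and state probabilities from the example.
payoffs = {"d1": [20, 6], "d2": [25, 3]}
prob = [0.75, 0.25]

# Step 1: best payoff under each state of nature (perfect information).
best_per_state = [max(payoffs[d][j] for d in payoffs) for j in range(2)]  # [25, 6]

# Step 2: expected value of these optimal returns.
ev_with_pi = sum(p * v for p, v in zip(prob, best_per_state))  # 20.25

# Step 3: subtract the EV of the best decision without perfect information.
ev_best = max(sum(p * v for p, v in zip(prob, payoffs[d])) for d in payoffs)  # 19.5
evpi = ev_with_pi - ev_best
print(evpi)  # 0.75 (in $1000s), i.e. $750
```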
21. Decision Analysis with Sample Information
    - Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature.
    - Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities.
    - With knowledge of the conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' theorem.
    - The outcomes of this analysis are called posterior probabilities.
22. Posterior Probabilities
    Posterior probabilities calculation:
    - Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator; this gives the joint probabilities for the states and the indicator.
    - Step 2: Sum these joint probabilities over all states; this gives the marginal probability for the indicator.
    - Step 3: For each state, divide its joint probability by the marginal probability for the indicator; this gives the posterior probability distribution.
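The three steps can be sketched as a small Python function (the function name and the example numbers are illustrative, not from the slides):

```python
def posterior(priors, conditionals):
    """Revise priors via Bayes' theorem for one observed indicator.

    priors[j]       = P(s_j)
    conditionals[j] = P(indicator | s_j)
    """
    # Step 1: joint probabilities P(indicator and s_j).
    joint = [p * c for p, c in zip(priors, conditionals)]
    # Step 2: marginal probability of the indicator.
    marginal = sum(joint)
    # Step 3: posterior probabilities P(s_j | indicator).
    return [j / marginal for j in joint]

print(posterior([0.5, 0.5], [0.8, 0.4]))  # ≈ [0.667, 0.333]
```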
23. Expected Value of Sample Information
    The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.
    EVSI calculation:
    - Step 1: Determine the optimal decision and its expected return for each possible outcome of the sample, using the posterior probabilities for the states of nature.
    - Step 2: Compute the expected value of these optimal returns.
    - Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in step 2.
24. Efficiency of Sample Information
    The efficiency of sample information is the ratio of EVSI to EVPI. Since the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.
25. Refer to the Marketing Strategy Example
    It is known from past experience that in all the cases when the market was receptive, a research company predicted it in 90 percent of the cases (in the other 10 percent, they predicted an unfavorable market). Also, in all the cases when the market proved to be unfavorable, the research company predicted it correctly in 85 percent of the cases (in the other 15 percent, they predicted it incorrectly). Answer the following questions based on this information.
26. Example: Marketing Strategy
    1. Draw a complete probability tree.
    2. Find the posterior probabilities of all states of nature.
    3. Using the posterior probabilities, which plan would you recommend?
    4. How much should one be willing to pay (maximum) for the research survey? That is, compute the expected value of sample information (EVSI).
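Assuming the prior P(market receptive) = 0.75 from the expected value example carries over, questions 2 and 4 can be checked numerically. This is a sketch with illustrative names, not the instructor's solution:

```python
# Payoffs ($1000s), priors, and survey reliabilities from the example.
payoffs = {"d1": [20, 6], "d2": [25, 3]}
priors = [0.75, 0.25]  # P(s1 = receptive), P(s2 = unfavorable)
# P(prediction | state); columns follow the state order above.
cond = {"predict_receptive":   [0.90, 0.15],
        "predict_unfavorable": [0.10, 0.85]}

def ev(probs, d):
    return sum(p * v for p, v in zip(probs, payoffs[d]))

# EV of the best decision using priors only.
ev_without = max(ev(priors, d) for d in payoffs)  # 19.5

# Revise the priors for each survey outcome and take the best decision.
ev_with_sample = 0.0
for pred, likelihood in cond.items():
    joint = [p * c for p, c in zip(priors, likelihood)]
    marginal = sum(joint)                      # P(prediction)
    post = [j / marginal for j in joint]       # P(state | prediction)
    ev_with_sample += marginal * max(ev(post, d) for d in payoffs)

evsi = ev_with_sample - ev_without             # ≈ 0.2625, i.e. $262.50

# EVPI recomputed for comparison; efficiency = EVSI / EVPI.
evpi = sum(p * m for p, m in zip(priors, [25, 6])) - ev_without  # 0.75
print(evsi, evsi / evpi)  # efficiency ≈ 0.35
```

The survey is therefore worth at most about $262.50, and its efficiency relative to perfect information is roughly 35 percent.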