Decision Theory
Professor Ahmadi
Learning Objectives
Structuring the decision problem and decision trees
Types of decision-making environments:
• Decision making under uncertainty, when probabilities are not known
• Decision making under risk, when probabilities are known
Expected Value of Perfect Information
Decision Analysis with Sample Information
Developing a Decision Strategy
Expected Value of Sample Information
Types of Decision-Making Environments
Type 1: Decision Making under Certainty.
The decision maker knows for sure (that is, with certainty) the outcome or consequence of every decision alternative.
Type 2: Decision Making under Uncertainty.
The decision maker has no information at all about the various outcomes or states of nature.
Type 3: Decision Making under Risk.
The decision maker has some knowledge of the probability of occurrence of each outcome or state of nature.
Decision Trees
A decision tree is a chronological representation of the decision problem.
Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.
The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives.
At the end of each limb of the tree are the payoffs attained from the series of branches making up that limb.
Decision Making under Uncertainty
If the decision maker does not know with certainty which state of nature will occur, then he or she is said to be making a decision under uncertainty.
The five commonly used criteria for decision making under uncertainty are:
1. the optimistic approach (Maximax)
2. the conservative approach (Maximin)
3. the minimax regret approach (Minimax regret)
4. the equally likely approach (Laplace criterion)
5. the criterion of realism with α (Hurwicz criterion)
Optimistic Approach
The optimistic approach would be used by an optimistic decision maker.
The decision with the largest possible payoff is chosen.
If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.
Conservative Approach
The conservative approach would be used by a conservative decision maker.
For each decision the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)
If the payoffs were in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)
Minimax Regret Approach
The minimax regret approach requires the construction of a regret table, or opportunity loss table.
This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.
Then, using this regret table, the maximum regret for each possible decision is listed.
The decision chosen is the one corresponding to the minimum of the maximum regrets.
Example: Marketing Strategy
Consider the following problem with two decision alternatives (d1 and d2) and two states of nature, s1 (Market Receptive) and s2 (Market Unfavorable), with the following payoff table representing profits (in $1,000s):

                      States of Nature
                        s1      s2
Decisions      d1       20       6
               d2       25       3
Example: Optimistic Approach
An optimistic decision maker would use the optimistic approach. All we really need to do is choose the decision that has the largest single value in the payoff table. This largest value is 25, and hence the optimal decision is d2.

Decision    Maximum Payoff
  d1              20
  d2              25   (maximum; choose d2)
Example: Conservative Approach
A conservative decision maker would use the conservative approach. List the minimum payoff for each decision. Choose the decision with the maximum of these minimum payoffs.

Decision    Minimum Payoff
  d1               6   (maximum; choose d1)
  d2               3
Example: Minimax Regret Approach
For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. The resulting regret table is:

            s1    s2    Maximum
  d1         5     0       5
  d2         0     3       3   (minimum)

Then, select the decision with the minimum of the maximum regrets: d2.
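The three criteria applied to this example's payoff table can be checked with a short script (a minimal sketch; the dictionary layout is our own, not from the slides):

```python
# Payoff table from the example (profits in $1,000s):
# rows = decisions d1, d2; columns = states s1 (receptive), s2 (unfavorable)
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Optimistic (maximax): pick the decision with the largest possible payoff.
maximax = max(payoffs, key=lambda d: max(payoffs[d]))

# Conservative (maximin): pick the decision whose worst payoff is largest.
maximin = max(payoffs, key=lambda d: min(payoffs[d]))

# Minimax regret: build the regret table column by column, then pick the
# decision with the smallest maximum regret.
best_per_state = [max(payoffs[d][j] for d in payoffs) for j in range(2)]
regret = {d: [best_per_state[j] - payoffs[d][j] for j in range(2)]
          for d in payoffs}
minimax_regret = min(regret, key=lambda d: max(regret[d]))

print(maximax, minimax_regret, maximin)  # d2 d2 d1
```

The script reproduces the slide results: maximax and minimax regret both select d2, while maximin selects d1.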
Example: Equally Likely (Laplace) Criterion
The equally likely, or Laplace, criterion finds the decision alternative with the highest average payoff.
• First, calculate the average payoff for every alternative.
• Then pick the alternative with the maximum average payoff.
Average for d1 = (20 + 6)/2 = 13
Average for d2 = (25 + 3)/2 = 14. Thus, d2 is selected.
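The two averages above can be computed in one line per alternative (same illustrative table layout as before):

```python
# Laplace (equally likely) criterion for the example payoff table.
payoffs = {"d1": [20, 6], "d2": [25, 3]}
averages = {d: sum(v) / len(v) for d, v in payoffs.items()}
best = max(averages, key=averages.get)
print(averages, best)  # {'d1': 13.0, 'd2': 14.0} d2
```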
Example: Criterion of Realism (Hurwicz)
Often called the weighted average, the criterion of realism (or Hurwicz) criterion is a compromise between an optimistic and a pessimistic decision.
• First, select a coefficient of realism, α, with a value between 0 and 1. When α is close to 1, the decision maker is optimistic about the future; when α is close to 0, the decision maker is pessimistic about the future.
• Payoff = α × (maximum payoff) + (1 − α) × (minimum payoff)
In our example, let α = 0.8.
Payoff for d1 = 0.8(20) + 0.2(6) = 17.2
Payoff for d2 = 0.8(25) + 0.2(3) = 20.6. Thus, select d2.
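The weighted-average formula translates directly into code (a sketch with the same assumed table layout):

```python
# Hurwicz criterion with coefficient of realism alpha = 0.8.
alpha = 0.8
payoffs = {"d1": [20, 6], "d2": [25, 3]}
realism = {d: alpha * max(v) + (1 - alpha) * min(v)
           for d, v in payoffs.items()}
best = max(realism, key=realism.get)
print(best)  # d2 (scores of roughly 17.2 and 20.6)
```

Varying alpha shows the compromise: at alpha = 1 this reduces to maximax, at alpha = 0 to maximin.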
Decision Making with Probabilities
Expected Value Approach
• If probabilistic information regarding the states of nature is available, one may use the expected monetary value (EMV) approach (also known as Expected Value, or EV).
• Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
• The decision yielding the best expected return is chosen.
Expected Value of a Decision Alternative
The expected value of a decision alternative is the sum of the weighted payoffs for that alternative.
The expected value (EV) of decision alternative di is defined as:

    EV(di) = Σ (j = 1 to N) P(sj) · Vij

where:  N = the number of states of nature
        P(sj) = the probability of state of nature sj
        Vij = the payoff corresponding to decision alternative di and state of nature sj
Example: Marketing Strategy
Expected Value Approach
Refer to the previous problem. Assume the probability of the market being receptive is known to be 0.75. Use the expected monetary value criterion to determine the optimal decision.
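The slide leaves the calculation as an exercise; applying the EV formula with P(s1) = 0.75 and P(s2) = 0.25 gives the following (a sketch, with the same assumed table layout):

```python
# Expected monetary value for each decision, P(s1) = 0.75, P(s2) = 0.25.
probs = [0.75, 0.25]
payoffs = {"d1": [20, 6], "d2": [25, 3]}
ev = {d: sum(p * v for p, v in zip(probs, payoffs[d]))
      for d in payoffs}
best = max(ev, key=ev.get)
print(ev, best)  # {'d1': 16.5, 'd2': 19.5} d2
```

EV(d2) = 19.5 (i.e. $19,500) is the value the EVPI slide later subtracts as "the EV of the optimal decision."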
Expected Value of Perfect Information
Frequently, information is available that can improve the probability estimates for the states of nature.
The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.
The EVPI provides an upper bound on the expected value of any sample or survey information.
Expected Value of Perfect Information
EVPI Calculation
• Step 1: Determine the optimal return corresponding to each state of nature.
• Step 2: Compute the expected value of these optimal returns.
• Step 3: Subtract the EV of the optimal decision from the amount determined in Step 2.
Example: Marketing Strategy
Expected Value of Perfect Information
Calculate the expected value of the best action for each state of nature and subtract the EV of the optimal decision.
EVPI = 0.75(25,000) + 0.25(6,000) − 19,500 = $750
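The three EVPI steps can be checked against the slide's arithmetic (payoffs here in dollars rather than $1,000s, matching the slide):

```python
# EVPI: expected value under perfect information minus the EV of the
# optimal decision without it (EV(d2) = 19,500 at P(s1) = 0.75).
probs = {"s1": 0.75, "s2": 0.25}
best_payoff = {"s1": 25_000, "s2": 6_000}  # Step 1: best action per state
ev_with_pi = sum(probs[s] * best_payoff[s] for s in probs)  # Step 2
evpi = ev_with_pi - 19_500                                  # Step 3
print(evpi)  # 750.0
```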
Decision Analysis with Sample Information
Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature.
Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities.
With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem.
The outcomes of this analysis are called posterior probabilities.
Posterior Probabilities
Posterior Probabilities Calculation
• Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator; this gives the joint probabilities for the states and indicator.
• Step 2: Sum these joint probabilities over all states; this gives the marginal probability for the indicator.
• Step 3: For each state, divide its joint probability by the marginal probability for the indicator; this gives the posterior probability distribution.
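The three steps amount to one pass of Bayes' Theorem. A minimal sketch, illustrated with the prior 0.75 and the "market receptive" prediction reliabilities (90% and 15%) quoted later in the deck:

```python
def posteriors(priors, conditionals):
    """Revise priors P(s) given conditionals P(indicator | s)."""
    # Step 1: joint probabilities P(s and indicator)
    joints = {s: priors[s] * conditionals[s] for s in priors}
    # Step 2: marginal probability P(indicator)
    marginal = sum(joints.values())
    # Step 3: posterior distribution P(s | indicator)
    return {s: j / marginal for s, j in joints.items()}

post = posteriors({"s1": 0.75, "s2": 0.25}, {"s1": 0.90, "s2": 0.15})
print(post)  # posterior probabilities; they sum to 1
```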
Expected Value of Sample Information
The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.
EVSI Calculation
• Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample, using the posterior probabilities for the states of nature.
• Step 2: Compute the expected value of these optimal returns.
• Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in Step 2.
Efficiency of Sample Information
Efficiency of sample information is the ratio of EVSI to EVPI.
Since the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.
Refer to the Marketing Strategy Example
It is known from past experience that, of all the cases when the market was receptive, a research company predicted it in 90 percent of the cases. (In the other 10 percent, they predicted an unfavorable market.) Also, of all the cases when the market proved to be unfavorable, the research company predicted it correctly in 85 percent of the cases. (In the other 15 percent of the cases, they predicted it incorrectly.)
Answer the following questions based on the above information.
Example: Marketing Strategy
1. Draw a complete probability tree.
2. Find the posterior probabilities of all states of nature.
3. Using the posterior probabilities, which plan would you recommend?
4. How much should one be willing to pay (maximum) for the research survey? That is, compute the expected value of sample information (EVSI).
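The slides do not show the answers. A hedged worked sketch of questions 2 through 4, combining the prior 0.75/0.25, the 90%/85% prediction reliabilities above, and the payoffs in dollars (the numerical results below are our own computation, not from the slides):

```python
# EVSI for the marketing example. Indicators: R = "predicts receptive",
# U = "predicts unfavorable". Likelihoods P(indicator | state) come from
# the 90% and 85% reliabilities quoted on the previous slide.
priors = {"s1": 0.75, "s2": 0.25}
payoffs = {"d1": {"s1": 20_000, "s2": 6_000},
           "d2": {"s1": 25_000, "s2": 3_000}}
likelihood = {"R": {"s1": 0.90, "s2": 0.15},
              "U": {"s1": 0.10, "s2": 0.85}}

ev_with_sample = 0.0
for ind, lik in likelihood.items():
    joints = {s: priors[s] * lik[s] for s in priors}      # Step 1
    marginal = sum(joints.values())                        # Step 2
    post = {s: j / marginal for s, j in joints.items()}    # Step 3
    # Best decision given this survey outcome (EVSI Step 1).
    best_ev = max(sum(post[s] * payoffs[d][s] for s in post)
                  for d in payoffs)
    ev_with_sample += marginal * best_ev                   # EVSI Step 2

evsi = ev_with_sample - 19_500  # EVSI Step 3: EV without the survey
print(round(evsi, 2))  # 262.5
```

Under these assumptions the posterior-optimal plan is d2 when the survey predicts a receptive market and d1 when it predicts an unfavorable one, one should pay at most about $262.50 for the survey, and the efficiency of the sample information is 262.5 / 750 = 0.35.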
AI as an Interface for Commercial BuildingsAI as an Interface for Commercial Buildings
AI as an Interface for Commercial Buildings
 
Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?Are Multi-Cloud and Serverless Good or Bad?
Are Multi-Cloud and Serverless Good or Bad?
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 

Ch08

  • 2. 2Slide Learning Objectives
    Structuring the decision problem and decision trees
    Types of decision making environments:
    • Decision making under uncertainty, when probabilities are not known
    • Decision making under risk, when probabilities are known
    Expected Value of Perfect Information
    Decision Analysis with Sample Information
    Developing a Decision Strategy
    Expected Value of Sample Information
  • 3. 3Slide Types of Decision Making Environments
    Type 1: Decision Making under Certainty.
    The decision maker knows for sure (that is, with certainty) the outcome or consequence of every decision alternative.
    Type 2: Decision Making under Uncertainty.
    The decision maker has no information at all about the various outcomes or states of nature.
    Type 3: Decision Making under Risk.
    The decision maker has some knowledge regarding the probability of occurrence of each outcome or state of nature.
  • 4. 4Slide Decision Trees
    A decision tree is a chronological representation of the decision problem.
    Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.
    The branches leaving each round node represent the different states of nature, while the branches leaving each square node represent the different decision alternatives.
    At the end of each limb of the tree are the payoffs attained from the series of branches making up that limb.
  • 5. 5Slide Decision Making Under Uncertainty
    If the decision maker does not know with certainty which state of nature will occur, then he/she is said to be making a decision under uncertainty.
    The five commonly used criteria for decision making under uncertainty are:
    1. the optimistic approach (Maximax)
    2. the conservative approach (Maximin)
    3. the minimax regret approach (Minimax regret)
    4. equally likely (Laplace criterion)
    5. criterion of realism with α (Hurwicz criterion)
  • 6. 6Slide Optimistic Approach
    The optimistic approach would be used by an optimistic decision maker.
    The decision with the largest possible payoff is chosen.
    If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.
  • 7. 7Slide Conservative Approach
    The conservative approach would be used by a conservative decision maker.
    For each decision the minimum payoff is listed, and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)
    If the payoffs were in terms of costs, the maximum cost would be determined for each decision, and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)
  • 8. 8Slide Minimax Regret Approach
    The minimax regret approach requires the construction of a regret table or an opportunity loss table.
    This is done by calculating, for each state of nature, the difference between each payoff and the largest payoff for that state of nature.
    Then, using this regret table, the maximum regret for each possible decision is listed.
    The decision chosen is the one corresponding to the minimum of the maximum regrets.
  • 9. 9Slide Example: Marketing Strategy
    Consider the following problem with two decision alternatives (d1 & d2) and two states of nature, s1 (Market Receptive) and s2 (Market Unfavorable), with the following payoff table representing profits (in $1000s):

                            States of Nature
                       s1 (Receptive)  s2 (Unfavorable)
    Decisions   d1           20               6
                d2           25               3
  • 10. 10Slide Example: Optimistic Approach
    An optimistic decision maker would use the optimistic approach. All we really need to do is to choose the decision that has the largest single value in the payoff table. This largest value is 25, and hence the optimal decision is d2.

                  Maximum
    Decision      Payoff
    d1              20
    d2              25   <- maximum; choose d2
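The maximax choice above can be checked in a few lines of code. The slides give no code, so this is only a sketch; Python and the variable names are my own:

```python
# Payoff table from the slides (profits, in $1000s).
# Rows: decision alternatives d1, d2; columns: states s1 (receptive), s2 (unfavorable).
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Optimistic (maximax): choose the decision with the largest single payoff.
best = max(payoffs, key=lambda d: max(payoffs[d]))
print(best, max(payoffs[best]))  # d2 25
```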
  • 11. 11Slide Example: Conservative Approach
    A conservative decision maker would use the conservative approach. List the minimum payoff for each decision. Choose the decision with the maximum of these minimum payoffs.

                  Minimum
    Decision      Payoff
    d1               6   <- maximum; choose d1
    d2               3
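The maximin rule amounts to one nested min/max over the same table. A minimal sketch (names are illustrative, not from the slides):

```python
# Same payoff table as the slides (profits, $1000s).
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Conservative (maximin): maximize the worst-case payoff.
worst = {d: min(v) for d, v in payoffs.items()}   # {"d1": 6, "d2": 3}
best = max(worst, key=worst.get)
print(best, worst[best])  # d1 6
```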
  • 12. 12Slide Example: Minimax Regret Approach
    For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. The resulting regret table is:

            s1   s2   Maximum regret
    d1       5    0         5
    d2       0    3         3   <- minimum
    Then, select the decision with the minimum of the maximum regrets: d2.
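The regret-table construction described above is mechanical enough to sketch directly. A hedged illustration (Python and identifiers are my own, not the slides'):

```python
payoffs = {"d1": [20, 6], "d2": [25, 3]}
n_states = 2

# Regret = best payoff in the column minus the payoff actually obtained.
col_best = [max(payoffs[d][j] for d in payoffs) for j in range(n_states)]  # [25, 6]
regret = {d: [col_best[j] - payoffs[d][j] for j in range(n_states)] for d in payoffs}
# regret == {"d1": [5, 0], "d2": [0, 3]}

# Choose the decision whose maximum regret is smallest.
best = min(regret, key=lambda d: max(regret[d]))
print(best, max(regret[best]))  # d2 3
```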
  • 13. 13Slide Example: Equally Likely (Laplace) Criterion
    The equally likely, also called Laplace, criterion finds the decision alternative with the highest average payoff.
    • First calculate the average payoff for every alternative.
    • Then pick the alternative with the maximum average payoff.
    Average for d1 = (20 + 6)/2 = 13
    Average for d2 = (25 + 3)/2 = 14
    Thus, d2 is selected.
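The two bullet steps above translate directly into code. A minimal sketch under the same payoff table:

```python
payoffs = {"d1": [20, 6], "d2": [25, 3]}

# Laplace: treat all states as equally likely and average each row.
avg = {d: sum(v) / len(v) for d, v in payoffs.items()}  # {"d1": 13.0, "d2": 14.0}
best = max(avg, key=avg.get)
print(best, avg[best])  # d2 14.0
```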
  • 14. 14Slide Example: Criterion of Realism (Hurwicz)
    Often called the weighted average, the criterion of realism (or Hurwicz) decision criterion is a compromise between an optimistic and a pessimistic decision.
    • First, select a coefficient of realism, α, with a value between 0 and 1. When α is close to 1, the decision maker is optimistic about the future; when α is close to 0, the decision maker is pessimistic about the future.
    • Payoff = α × (maximum payoff) + (1 - α) × (minimum payoff)
    In our example, let α = 0.8:
    Payoff for d1 = 0.8 × 20 + 0.2 × 6 = 17.2
    Payoff for d2 = 0.8 × 25 + 0.2 × 3 = 20.6
    Thus, select d2.
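The Hurwicz weighted-average formula from the slide can be sketched as follows (the α = 0.8 value is the one the slide uses; everything else is illustrative):

```python
payoffs = {"d1": [20, 6], "d2": [25, 3]}
alpha = 0.8  # coefficient of realism used on the slide

# Hurwicz score: alpha * (best case) + (1 - alpha) * (worst case).
score = {d: alpha * max(v) + (1 - alpha) * min(v) for d, v in payoffs.items()}
# score ~ {"d1": 17.2, "d2": 20.6}
best = max(score, key=score.get)  # d2
```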
  • 15. 15Slide Decision Making with Probabilities
    Expected Value Approach
    • If probabilistic information regarding the states of nature is available, one may use the expected monetary value (EMV) approach (also known as Expected Value or EV).
    • Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
    • The decision yielding the best expected return is chosen.
  • 16. 16Slide Expected Value of a Decision Alternative
    The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.
    The expected value (EV) of decision alternative di is defined as:
    EV(di) = Σ (j = 1 to N) P(sj) Vij
    where:
    N = the number of states of nature
    P(sj) = the probability of state of nature sj
    Vij = the payoff corresponding to decision alternative di and state of nature sj
  • 17. 17Slide Example: Marketing Strategy
    Expected Value Approach
    Refer to the previous problem. Assume the probability of the market being receptive is known to be 0.75. Use the expected monetary value criterion to determine the optimal decision.
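Applying the EV formula from the previous slide to this question is a one-liner per decision. A sketch (the 0.75 probability is given on the slide; the code itself is my own):

```python
payoffs = {"d1": [20, 6], "d2": [25, 3]}
prob = [0.75, 0.25]  # P(s1) = 0.75 as given; P(s2) = 1 - 0.75

# EV(di) = sum over states of P(sj) * Vij
ev = {d: sum(p * v for p, v in zip(prob, vals)) for d, vals in payoffs.items()}
# ev == {"d1": 16.5, "d2": 19.5} -> d2 gives the best expected return ($19,500)
best = max(ev, key=ev.get)
```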
  • 18. 18Slide Expected Value of Perfect Information
    Frequently, information is available that can improve the probability estimates for the states of nature.
    The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.
    The EVPI provides an upper bound on the expected value of any sample or survey information.
  • 19. 19Slide Expected Value of Perfect Information
    EVPI Calculation
    • Step 1: Determine the optimal return corresponding to each state of nature.
    • Step 2: Compute the expected value of these optimal returns.
    • Step 3: Subtract the EV of the optimal decision from the amount determined in step 2.
  • 20. 20Slide Example: Marketing Strategy
    Expected Value of Perfect Information
    Calculate the expected value of the best action for each state of nature and subtract the EV of the optimal decision:
    EVPI = 0.75(25,000) + 0.25(6,000) - 19,500 = $750
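The three-step EVPI calculation can be verified against the slide's $750 figure. A sketch, working in $1000s as the payoff table does:

```python
payoffs = {"d1": [20, 6], "d2": [25, 3]}
prob = [0.75, 0.25]

# EV of the best decision without extra information (d2, $19,500).
ev_best = max(sum(p * v for p, v in zip(prob, vals)) for vals in payoffs.values())

# Step 1 + 2: expected value with perfect information -- take the best payoff per state.
ev_perfect = sum(p * max(payoffs[d][j] for d in payoffs) for j, p in enumerate(prob))

# Step 3: the difference is the EVPI.
evpi = ev_perfect - ev_best
print(evpi)  # 0.75, i.e. $750
```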
  • 21. 21Slide Decision Analysis with Sample Information
    Knowledge of sample or survey information can be used to revise the probability estimates for the states of nature.
    Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities.
    With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem.
    The outcomes of this analysis are called posterior probabilities.
  • 22. 22Slide Posterior Probabilities
    Posterior Probabilities Calculation
    • Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator -- this gives the joint probabilities for the states and indicator.
    • Step 2: Sum these joint probabilities over all states -- this gives the marginal probability for the indicator.
    • Step 3: For each state, divide its joint probability by the marginal probability for the indicator -- this gives the posterior probability distribution.
  • 23. 23Slide Expected Value of Sample Information
    The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.
    EVSI Calculation
    • Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample, using the posterior probabilities for the states of nature.
    • Step 2: Compute the expected value of these optimal returns.
    • Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in step 2.
  • 24. 24Slide Efficiency of Sample Information
    Efficiency of sample information is the ratio of EVSI to EVPI.
    As the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.
  • 25. 25Slide Refer to the Marketing Strategy Example
    It is known from past experience that of all the cases when the market was receptive, a research company predicted it in 90 percent of the cases. (In the other 10 percent, they predicted an unfavorable market.) Also, of all the cases when the market proved to be unfavorable, the research company predicted it correctly in 85 percent of the cases. (In the other 15 percent of the cases, they predicted it incorrectly.)
    Answer the following questions based on the above information.
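The three-step posterior calculation from slide 22 can be applied to these reliability figures. A sketch for the "market receptive" prediction only (Python and the state labels are my own; the 0.90/0.15 numbers follow from the percentages on this slide):

```python
prior = {"receptive": 0.75, "unfavorable": 0.25}
# P(survey predicts "receptive" | true state), from the 90% / 85% accuracy figures:
pred_receptive = {"receptive": 0.90, "unfavorable": 0.15}

# Step 1: joint probabilities of state and indicator.
joint = {s: prior[s] * pred_receptive[s] for s in prior}   # 0.675 and 0.0375
# Step 2: marginal probability of the indicator.
marginal = sum(joint.values())                             # P(predicts receptive) = 0.7125
# Step 3: posterior distribution given the prediction.
posterior = {s: joint[s] / marginal for s in prior}
# posterior["receptive"] ~ 0.947, posterior["unfavorable"] ~ 0.053
```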
  • 26. 26Slide Example: Marketing Strategy
    1. Draw a complete probability tree.
    2. Find the posterior probabilities of all states of nature.
    3. Using the posterior probabilities, which plan would you recommend?
    4. How much should one be willing to pay (maximum) for the research survey? That is, compute the expected value of sample information (EVSI).
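As a worked sketch of question 4, the EVSI recipe from slide 23 can be run end to end: compute posteriors for both possible survey predictions, value the best decision under each, weight by the prediction probabilities, and subtract the no-information EV of $19,500. The code and its identifiers are my own; the inputs all come from the slides:

```python
payoffs = {"d1": [20, 6], "d2": [25, 3]}          # columns: [receptive, unfavorable], $1000s
prior = [0.75, 0.25]
# P(prediction | true state), from the 90% / 85% reliability figures:
cond = {"predict_receptive": [0.90, 0.15], "predict_unfavorable": [0.10, 0.85]}

ev_no_info = max(sum(p * v for p, v in zip(prior, vals)) for vals in payoffs.values())  # 19.5

ev_with_sample = 0.0
for pred, likelihood in cond.items():
    joint = [pr * lk for pr, lk in zip(prior, likelihood)]
    marginal = sum(joint)                          # probability of this prediction
    posterior = [j / marginal for j in joint]
    best = max(sum(p * v for p, v in zip(posterior, vals)) for vals in payoffs.values())
    ev_with_sample += marginal * best              # Step 2: expected optimal return

evsi = ev_with_sample - ev_no_info                 # Step 3
print(round(evsi, 4))          # ~0.2625, i.e. about $262.50 -- the most to pay for the survey
print(round(evsi / 0.75, 4))   # efficiency vs. the $750 EVPI from slide 20: ~0.35
```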