Decision analysis is a systematic approach to evaluating important choices using tools like decision trees and influence diagrams. It involves 6 steps: 1) defining the problem, 2) listing alternatives, 3) identifying outcomes, 4) listing payoffs, 5) selecting a decision model, and 6) making a decision. There are three types of decision making environments: certainty, uncertainty, and risk. Under uncertainty, approaches include maximax, maximin, realism, equally likely, and minimax regret. Markov models are useful when risk is continuous over time and events may occur repeatedly, representing the problem as transitions between health states.
Sawsan Monir * 20142629
DEFINITION of 'Decision Analysis - DA'
A systematic, quantitative and visual approach to addressing and
evaluating important choices confronted by businesses. Decision
analysis utilizes a variety of tools to evaluate all relevant information
to aid in the decision making process. A graphical representation of
alternatives and possible solutions, as well as challenges and
uncertainties, can be created on a decision tree or influence diagram.
Decision analysis (DA) has been applied to business problems in
management, marketing, operations, accounting, and finance. In
addition, it has had an impact on the fields of medicine, law, military
science, environmental sciences, and public policy more generally.
The Six Steps in Decision Analysis
1. Clearly define the problem at hand
2. List the possible alternatives
3. Identify the possible outcomes or states of nature
4. List the payoff or profit of each combination of alternatives and
outcomes
5. Select one of the mathematical decision theory models
6. Apply the model and make your decision
Step 1 – Define the problem
Expand by manufacturing and marketing a new product, backyard
storage sheds
Step 2 – List alternatives
Construct a large new plant, small plant or no plant at all
Step 3 – Identify possible outcomes
The market could be favorable or unfavorable
Types of Decision-Making Environments
Type 1: Decision making under certainty
Decision maker knows with certainty the consequences of every
alternative or decision choice
Type 2: Decision making under uncertainty
The decision maker does not know the probabilities of the various
outcomes
There are several criteria for making decisions under uncertainty
1. Maximax (optimistic)
Used to find the alternative that maximizes the maximum payoff
Locate the maximum payoff for each alternative
Select the alternative with the maximum number
State of nature

Alternative             | Favorable market ($) | Unfavorable market ($) | Maximum in a row ($)
Construct a large plant | 200,000              | –180,000               | 200,000  ← maximax
Construct a small plant | 100,000              | –20,000                | 100,000
Do nothing              | 0                    | 0                      | 0
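A minimal Python sketch of the maximax rule using the payoff table above (alternative names shortened; the variable names are illustrative, not from the original):

```python
# Payoff table: alternative -> (favorable market, unfavorable market), in $
payoffs = {"Large plant": (200_000, -180_000), "Small plant": (100_000, -20_000), "Do nothing": (0, 0)}

# Maximax: take the best (maximum) payoff in each row, then pick the largest of those
row_maxima = {alt: max(row) for alt, row in payoffs.items()}
best = max(row_maxima, key=row_maxima.get)
print(best, row_maxima[best])  # Large plant 200000
```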
2. Maximin (pessimistic)
Used to find the alternative that maximizes the minimum payoff
Locate the minimum payoff for each alternative
Select the alternative with the maximum number
State of nature

Alternative             | Favorable market ($) | Unfavorable market ($) | Minimum in a row ($)
Construct a large plant | 200,000              | –180,000               | –180,000
Construct a small plant | 100,000              | –20,000                | –20,000
Do nothing              | 0                    | 0                      | 0  ← maximin
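A corresponding sketch for the maximin rule, using the same payoff table (again with shortened, illustrative names):

```python
# Payoff table from above: (favorable market, unfavorable market), in $
payoffs = {"Large plant": (200_000, -180_000), "Small plant": (100_000, -20_000), "Do nothing": (0, 0)}

# Maximin: take the worst (minimum) payoff in each row, then pick the largest of those
row_minima = {alt: min(row) for alt, row in payoffs.items()}
best = max(row_minima, key=row_minima.get)
print(best, row_minima[best])  # Do nothing 0
```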
3. Criterion of realism (Hurwicz)
A weighted average compromise between optimistic and
pessimistic
Select a coefficient of realism α
Coefficient is between 0 and 1
A value of 1 is 100% optimistic
Compute the weighted averages for each alternative
Select the alternative with the highest value
Weighted average = α(maximum in row) + (1 – α)(minimum in row)
o For the large plant alternative using α = 0.8
(0.8)(200,000) + (1 – 0.8)(–180,000) = 124,000
o For the small plant alternative using α = 0.8
(0.8)(100,000) + (1 – 0.8)(–20,000) = 76,000
State of nature

Alternative             | Favorable market ($) | Unfavorable market ($) | Criterion of realism (α = 0.8) ($)
Construct a large plant | 200,000              | –180,000               | 124,000  ← realism
Construct a small plant | 100,000              | –20,000                | 76,000
Do nothing              | 0                    | 0                      | 0
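A short sketch of the Hurwicz weighted average with α = 0.8, reproducing the values in the table above (shortened, illustrative names):

```python
# Payoff table from above: (favorable market, unfavorable market), in $
payoffs = {"Large plant": (200_000, -180_000), "Small plant": (100_000, -20_000), "Do nothing": (0, 0)}

alpha = 0.8  # coefficient of realism (1 = fully optimistic)
# Weighted average = alpha * (row maximum) + (1 - alpha) * (row minimum)
realism = {alt: alpha * max(row) + (1 - alpha) * min(row) for alt, row in payoffs.items()}
best = max(realism, key=realism.get)
print(best, realism[best])  # the large plant, with a weighted average of about 124,000
```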
4. Equally likely (Laplace)
Considers all the payoffs for each alternative
Find the average payoff for each alternative
Select the alternative with the highest average
State of nature

Alternative             | Favorable market ($) | Unfavorable market ($) | Row average ($)
Construct a large plant | 200,000              | –180,000               | 10,000
Construct a small plant | 100,000              | –20,000                | 40,000  ← equally likely
Do nothing              | 0                    | 0                      | 0
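The equally likely (Laplace) rule in the same sketch style (shortened, illustrative names):

```python
# Payoff table from above: (favorable market, unfavorable market), in $
payoffs = {"Large plant": (200_000, -180_000), "Small plant": (100_000, -20_000), "Do nothing": (0, 0)}

# Equally likely (Laplace): average the payoffs in each row, then pick the highest average
averages = {alt: sum(row) / len(row) for alt, row in payoffs.items()}
best = max(averages, key=averages.get)
print(best, averages[best])  # Small plant 40000.0
```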
5. Minimax regret
Based on opportunity loss or regret, the difference between the
optimal profit and actual payoff for a decision
Create an opportunity loss table by determining the
opportunity loss for not choosing the best alternative
Opportunity loss is calculated by subtracting each payoff in the
column from the best payoff in the column
Find the maximum opportunity loss for each alternative and
pick the alternative with the minimum number
State of nature (opportunity loss)

Alternative             | Favorable market ($)         | Unfavorable market ($)    | Maximum in a row ($)
Construct a large plant | 200,000 – 200,000 = 0        | 0 – (–180,000) = 180,000  | 180,000
Construct a small plant | 200,000 – 100,000 = 100,000  | 0 – (–20,000) = 20,000    | 100,000  ← minimax
Do nothing              | 200,000 – 0 = 200,000        | 0 – 0 = 0                 | 200,000
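A sketch of the minimax regret calculation, building the opportunity loss table from the original payoffs (shortened, illustrative names):

```python
# Payoff table from above: (favorable market, unfavorable market), in $
payoffs = {"Large plant": (200_000, -180_000), "Small plant": (100_000, -20_000), "Do nothing": (0, 0)}

# Best payoff in each column (state of nature)
col_best = [max(col) for col in zip(*payoffs.values())]
# Opportunity loss = column best minus the payoff actually received
regret = {alt: [best - p for best, p in zip(col_best, row)] for alt, row in payoffs.items()}
# Minimax regret: pick the alternative whose worst (maximum) regret is smallest
max_regret = {alt: max(r) for alt, r in regret.items()}
best_alt = min(max_regret, key=max_regret.get)
print(best_alt, max_regret[best_alt])  # Small plant 100000
```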
Type 3: Decision making under risk
The decision maker knows the probabilities of the various outcomes
Decision making when there are several possible states of
nature and we know the probabilities associated with each
possible state
Most popular method is to choose the alternative with the
highest expected monetary value (EMV)
EMV(alternative i) = (payoff of first state of nature) × (probability of first state of nature) + (payoff of second state of nature) × (probability of second state of nature) + … + (payoff of last state of nature) × (probability of last state of nature)
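A minimal EMV sketch using the same payoff table; the 0.5/0.5 probabilities for the two states of nature are an assumption for illustration only, since no probabilities are given above:

```python
# Payoff table from above: (favorable market, unfavorable market), in $
payoffs = {"Large plant": (200_000, -180_000), "Small plant": (100_000, -20_000), "Do nothing": (0, 0)}

# Illustrative probabilities for the two states of nature (not taken from the text)
probs = (0.5, 0.5)

# EMV(alternative) = sum over states of (payoff x probability of that state)
emv = {alt: sum(p * pr for p, pr in zip(row, probs)) for alt, row in payoffs.items()}
best = max(emv, key=emv.get)
print(best, emv[best])  # Small plant 40000.0
```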
Any problem that can be presented in a decision table can also
be graphically represented in a decision tree
Decision trees are most beneficial when a sequence of decisions
must be made
All decision trees contain decision points or nodes and state-of-
nature points or nodes
A decision node from which one of several alternatives may be
chosen
A state-of-nature node out of which one state of nature will
occur
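A minimal sketch of folding back (rolling back) a decision tree: expected values are computed at state-of-nature nodes and the best branch is chosen at decision nodes. The tree structure and the 0.5/0.5 probabilities below are illustrative assumptions, reusing the plant example:

```python
# A decision node holds alternatives; a state-of-nature (chance) node holds (probability, child) pairs.
tree = {
    "type": "decision",
    "branches": {
        "Large plant": {"type": "chance", "outcomes": [(0.5, 200_000), (0.5, -180_000)]},
        "Small plant": {"type": "chance", "outcomes": [(0.5, 100_000), (0.5, -20_000)]},
        "Do nothing": {"type": "chance", "outcomes": [(1.0, 0)]},
    },
}

def fold_back(node):
    """Return the expected value of a node, rolling the tree back from its leaves."""
    if isinstance(node, (int, float)):        # leaf payoff
        return node
    if node["type"] == "chance":              # state-of-nature node: expected value
        return sum(p * fold_back(child) for p, child in node["outcomes"])
    # decision node: choose the alternative with the highest expected value
    return max(fold_back(child) for child in node["branches"].values())

print(fold_back(tree))  # 40000.0
```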
Markov chain
We describe a Markov chain as follows: we have a set of states, S = {s1, s2, …, sr}. The process starts in one of these states and moves successively from one state to another. Each move is called a step. If the chain is currently in state si, then it moves to state sj at the next step with a probability denoted by pij, and this probability does not depend upon which states the chain was in before the current state.
The probabilities pij are called transition probabilities. The process can remain in the state it is in, and this occurs with probability pii. An initial probability distribution, defined on S, specifies the starting state. Usually this is done by specifying a particular state as the starting state.
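A small sketch of a transition matrix P, where entry pij is the probability of moving from state si to state sj and each row sums to 1. The states and numbers are made up for illustration:

```python
import numpy as np

# States and an illustrative transition matrix (rows: current state, columns: next state)
states = ["well", "sick", "dead"]
P = np.array([
    [0.85, 0.10, 0.05],   # from well:  stay well, become sick, die
    [0.20, 0.65, 0.15],   # from sick:  recover, stay sick, die
    [0.00, 0.00, 1.00],   # from dead:  absorbing state
])
assert np.allclose(P.sum(axis=1), 1.0)  # each row of transition probabilities sums to 1

# Probability distribution over the states after n steps, starting in "well"
start = np.array([1.0, 0.0, 0.0])
after_3_steps = start @ np.linalg.matrix_power(P, 3)
print(dict(zip(states, after_3_steps.round(3))))
```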
Markov models are useful when a decision problem involves risk that
is continuous over time, when the timing of events is important, and
when important events may happen more than once. Markov
models assume that a patient is always in one of a finite number of
discrete health states, called Markov states. The ability of the Markov
model to represent repetitive events and the time dependence of
both probabilities and utilities allows for more accurate
representation of clinical settings that involve these issues.
Markov models are particularly useful when a decision problem
involves a risk that is ongoing over time. Some clinical examples are
the risk of hemorrhage while on anticoagulant therapy, the risk of
rupture of an abdominal aortic aneurysm, and the risk of mortality in
any person, whether sick or healthy.
There are two important consequences of events that have ongoing
risk. First, the times at which the events will occur are uncertain. For
example, a stroke that occurs immediately may have a different
impact on the patient than one that occurs ten years later. The
second consequence is that a given event may occur more than
once. As the following example shows, representing events that are
repetitive or that occur with uncertain timing is difficult using a
simple tree model.
There are five steps for Markov modeling:
1. Choose the health states that represent the possible outcomes from each intervention
2. Determine possible transitions between health states
3. Choose how long each cycle should be and how many cycles will be analyzed
4. Estimate the probabilities associated with moving (i.e., transitioning) in and out of health states
5. Estimate the costs and outcomes associated with each option
Step 1: Choose Health States
These are referred to as Markov states. Patients cannot be in more
than one health state during each cycle. A simple general example is
“well, sick, or dead.” Graphically, by convention, each health state is
placed in an oval or circle in a bubble diagram.
Step 2: Determine Transitions
Patients move (i.e., transition) from one health state to another. If the patient dies, that state is called an absorbing state: an absorbing state indicates that patients cannot move to another health state in a later cycle. Graphically, arrows are used to indicate which transitions are allowed.
For cycle 1, each patient can stay well, or can move to the sick or
dead states. For the next cycle, patients in the well state can again
stay well or move to the sick or dead states. Those in the dead state
cannot move back to the other two states. Depending on the disease
of interest, patients may or may not be able to move back to the well
state after being in the sick state.
Step 3: Choose the Cycle Length and Number of Cycles
The cycle length depends on the disease being modeled. For the
example of patients with a blood clot, a cycle of 1 week might be
enough time to determine the number of patients with additional
blood clots or bleeding. For chronic diseases, a cycle length of 1
year is commonly used.
Step 4: Estimate Transition Probabilities
Transition probabilities are used to estimate the percent of patients who are likely to move from one health state to another during each cycle. These probability values usually come from previous research or expert panel estimates.
Step 5: Calculate Costs and Outcomes
Outcomes for each health state should be estimated and given a value. If the outcome of interest is years of life gained or saved and each cycle is 1 year, then each person who is alive during a cycle gets a value of 1.0 as his or her outcome for that cycle. It is common to adjust each year of life in each cycle for the quality of health that year.
The two basic calculation methods used to determine the results
of a Markov analysis are cohort simulation and Monte Carlo
simulation.
Cohort simulation uses a hypothetical group (cohort) of patients that usually start out in the same health state. At each cycle, the transition probabilities are applied. (Probabilities may be the same for every cycle if using a Markov chain analysis, or they may vary by cycle if using a Markov process analysis.) The number of patients in each state at each cycle is calculated and summed using matrix algebra. This type of calculation can incorporate discount rates to account for the time value associated with costs and outcomes.
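A minimal cohort-simulation sketch. The transition matrix, cohort size, outcome values, discount rate, and number of cycles are all illustrative assumptions (the same made-up matrix as above): the cohort distribution is updated each cycle with the transition matrix, and discounted life-years are accumulated.

```python
import numpy as np

# Illustrative 3-state model: well, sick, dead (dead is absorbing); yearly cycles
P = np.array([
    [0.85, 0.10, 0.05],
    [0.20, 0.65, 0.15],
    [0.00, 0.00, 1.00],
])
cohort = np.array([1000.0, 0.0, 0.0])         # 1,000 patients all start in "well"
life_year_value = np.array([1.0, 1.0, 0.0])   # 1 life-year per cycle while alive (quality weights could be used instead)
discount = 0.03                               # annual discount rate
n_cycles = 20

total_life_years = 0.0
for cycle in range(1, n_cycles + 1):
    cohort = cohort @ P                                          # apply transition probabilities
    total_life_years += (cohort @ life_year_value) / (1 + discount) ** cycle
print(round(total_life_years, 1), "discounted life-years for the cohort")
```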
Monte Carlo simulation is a type of stochastic analysis that takes into account uncertainty or variability at the patient level. A random patient is sent through the model, and outcomes and costs are calculated individually for that patient. Then, one by one, more random patients are sent through the model. The path through the model that each patient takes may differ because of random variation, and a given model can produce different answers each time the simulation is run because of the randomness at chance nodes in the model. If a large number of patients (e.g., 100,000) are sent through the model one at a time, the results approach those of the cohort simulation.
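A minimal patient-level Monte Carlo sketch, reusing the same illustrative transition probabilities; each random patient walks through the model one cycle at a time, and cycles spent alive are counted.

```python
import random

# Illustrative transition probabilities: state -> list of (next state, probability)
transitions = {
    "well": [("well", 0.85), ("sick", 0.10), ("dead", 0.05)],
    "sick": [("well", 0.20), ("sick", 0.65), ("dead", 0.15)],
    "dead": [("dead", 1.00)],
}

def simulate_patient(n_cycles=20):
    """Send one random patient through the model and count cycles spent alive."""
    state, life_years = "well", 0
    for _ in range(n_cycles):
        nxt, probs = zip(*transitions[state])
        state = random.choices(nxt, weights=probs)[0]  # random move at the chance node
        if state != "dead":
            life_years += 1
    return life_years

results = [simulate_patient() for _ in range(100_000)]
print(sum(results) / len(results), "mean life-years per patient")
```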
Disadvantages of Markov Modeling
By their nature, Markov models can be more complex than simple
decision trees and therefore less transparent to decision makers.
A commonly cited disadvantage of Markov modeling is that it is
“memoryless” because the Markovian assumption is that the
probability of moving from state to state is not based on the
previous experiences from former cycles.
More advanced and complex computations, such as using tunnel
states, allow for integration of health experiences from previous
cycles. Another disadvantage is that the data needed to estimate
probabilities and costs, especially in the long term, are often
unavailable.
Markov models can therefore become somewhat contrived if
these implicit assumptions do not reflect sufficiently well the
characteristics of a system and how it functions in practice.
Can require a large number of states
Model can be difficult to construct and validate
"Markov" Property assumption and component failure
distribution assumptions may be invalid for the system being
modeled
Model types of greatest complexity require solution
techniques that are currently feasible only for small models
Model is often not structurally similar to the physical or
logical organization of the system
Advantages of Markov Modeling
Markov analysis has the advantage of being an analytical method, which means that the reliability parameters for the system are, in effect, calculated by a formula. This has the considerable advantages of speed and accuracy when producing results. Speed is especially useful when investigating many alternative variations of a design or exploring a range of sensitivities. Accuracy, in contrast, is vitally important when investigating small design changes or when the reliability or availability of high-integrity systems is being quantified.
Can model repair in a natural way:
Repairs of individual components and groups
Variable number of repair persons
Sequential repair; Partial repair (degraded components)
Can model standby spares (hot, warm, cold)
Can model sequence dependencies:
Functional dependencies
Sequence enforcement
Can model imperfect coverage more naturally than
combinatorial models
Can model fault/error handling and recovery at a detailed
level