MULTI-CRITERIA DECISION ANALYSIS FOR HEALTHCARE DECISION MAKING 
Maarten IJzerman, Nancy Devlin, Praveen Thokala and 
Kevin Marsh on behalf of the ISPOR MCDA Task Force 
November 10, 2014
Vakaramoko Diaby, Kaitryn Campbell, Ron Goeree. Multi-criteria decision analysis (MCDA) in health care: a bibliometric analysis. Operations Research for Health Care 2013; 2(1–2): 20–24. http://dx.doi.org/10.1016/j.orhc.2013.03.001
To develop guidance for outcomes researchers and decision makers on the use and application of MCDA in healthcare decision making 
The task force will: 
To provide a common definition for MCDA in health care decision making 
To develop emerging good practices for conducting MCDA to aid health care decision making
Co-Chairs: 
Maarten J. IJzerman, University of Twente, Netherlands 
Kevin Marsh, Evidera, London 
Nancy Devlin, Office of Health Economics, London 
Praveen Thokala, University of Sheffield, Sheffield
Rob Baltussen, Radboud University Medical Center 
Meindert Boysen, National Institute for Health and Clinical Excellence 
Zoltan Kalo, Eotvos Lorand University, Budapest 
Thomas Lonngren, NDA group AB, UK and Sweden 
Filip Mussen, Janssen Pharmaceutica, Antwerp 
Stuart Peacock, British Columbia Cancer Agency, Vancouver, Canada 
John Watkins, Premera Blue Cross, USA
Maarten IJzerman: Introduction 
Nancy Devlin: 1. What do we mean by MCDA? 
Praveen Thokala: 2. Overview of MCDA techniques 
Kevin Marsh: 3. Which MCDA approach is best for different kinds of decisions?
Solicit input from the ISPOR membership regarding our work and choices made 
Identify potential reviewers for draft taskforce reports
Nancy Devlin, Office of Health Economics
One of the first tasks for the Taskforce is to establish a working definition of MCDA. 
Not straightforward: different researchers use the term MCDA to mean quite different things. 
How broad should our definition be? e.g. 
“Any approach to making decisions that involve multiple criteria”: In principle, includes purely deliberative decision-making processes. 
What kinds of uses of MCDA are we interested in? e.g., 
“Any application that entails consideration of multiple criteria” : In principle, could include methods for valuing QoL. 
We need to define MCDA in a way that is clear, and enables the Taskforce to focus its efforts where it can add most value.
As generally understood, MCDA 
Comprises a broad set of methodological approaches, stemming from operations research. 
Decomposes complex decision problems, where there are many factors to be taken into account (‘multiple criteria’) by using a set of relevant criteria. 
Provides a way of structuring such decisions, and aims to help the decision-maker be clear about what criteria are relevant and the relative importance of each in their decisions. 
Generally entails being explicit about both the criteria and the weights. 
Facilitates transparent and consistent decisions.
We propose to focus on: 
methods designed to evaluate the options available to health care decision makers by accounting for all relevant value criteria, and which explicitly define, measure and weight those criteria. 
We will not include deliberative processes, other than their use to inform explicit selection of criteria and weights 
how these methods can be used at ‘real’ decision points: that is, where there is direct involvement of a decision maker; a complete set of factors to be taken into account; and a ‘real’ decision to be made. 
Excludes stated preference methods, other than where those are used to weight decision criteria.
ISPOR Taskforces on health state utilities, DCE methods, etc: important to avoid duplication of effort 
The goal of PROs, QoL utilities and QALYs is not to make a decision per se, but to measure health. These methods provide one very important source of evidence to decision makers; the aim of using them is to generate evidence rather than to make a decision in itself. 
While MAU constitutes a type of MCDA, participants in TTO, DCE etc. are making hypothetical choices – they are not making ‘real’ decisions. 
Stated preference methods may be relevant to weighting decision criteria: our focus will be on best practice in using those methods in that specific context, building on existing best practice.
How does our proposed definition fit with existing definitions in the literature? 
Sources: 
Studies included in a recent review of MCDA use in health care decision making, published in Pharmacoeconomics (Marsh et al 2014). 
With the addition of a few key papers published subsequent to that review. 
Extraction 
Definition of MCDA provided in the introduction sections of these papers 
Belton and Stewart 
“An umbrella term to describe a collection of formal approaches which seek to take explicit account of multiple criteria in helping individuals or groups explore decisions that matter” 
Keeney and Raiffa 
“An extension of decision theory that covers any decision with multiple objectives. A methodology for appraising options on individual, often conflicting criteria, and combining them into one overall appraisal” 
What decisions were MCDAs designed to support? 
Source: Marsh et al (2014)
[Bar chart omitted: % of studies (0–40) answering Yes/No.]
[Bar chart omitted: % of studies (0–100%) answering Yes/No for each purpose: support decisions/decision-makers, valuation of interventions, elicitation of decision makers' values, elicitation of preferences, dealing with uncertainty.]
A range of definitions of MCDA may be found in the literature. 
We have proposed (what we hope is!) a very clear, focussed definition, which will direct our efforts to the use of MCDA techniques to aid and structure real health care decisions. Your feedback is welcome. 
There is increasing interest in MCDA to help make benefit risk assessments, resource allocation and reimbursement decisions in a transparent and consistent way. 
Fewer published examples of its use in portfolio optimisation and shared decision making (SDM). 
This taskforce aims to produce good practice guidelines relevant to each of these decision-making contexts.
Praveen Thokala, University of Sheffield
Objective → Criteria → Measure performance → Performance matrix → Weights → Scoring → Aggregation → Decision 
How these are done differentiates the MCDA methods: qualitative MCDA methods vs quantitative MCDA methods
• Level of deliberation vs quantification 
• Deliberative approaches: use multiple criteria but are not explicit about how the criteria are incorporated into decisions
Value measurement models 
- weighted sum approach 
- PBMA, AHP, MAUT, etc 
Outranking 
- direct comparison of alternatives 
- ELECTRE, PROMETHEE, etc 
Goal programming 
- multi-objective optimisation, LP, etc 
Fully quantified methods
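To make the contrast concrete, here is a minimal sketch of the outranking idea – a simplified concordance index in the spirit of ELECTRE, which compares alternatives pairwise rather than collapsing everything into a single score. The performance matrix and weights below are invented for illustration:

```python
import numpy as np

# Hypothetical performance matrix: rows = alternatives, columns = criteria
# (higher is better on every criterion in this toy example).
perf = np.array([
    [0.8, 0.4, 0.6],   # alternative A
    [0.5, 0.9, 0.7],   # alternative B
    [0.6, 0.6, 0.5],   # alternative C
])
weights = np.array([0.5, 0.3, 0.2])  # assumed criterion weights, sum to 1

n = perf.shape[0]
concordance = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        if i != j:
            # Share of total weight on criteria where i does at least as well as j
            concordance[i, j] = weights[perf[i] >= perf[j]].sum()

print(concordance)  # C[i, j] close to 1 => strong support for "i outranks j"
```

Full outranking methods such as ELECTRE and PROMETHEE build on this kind of pairwise concordance (and discordance) information instead of a single aggregated value.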
The total score for each alternative under the weighted sum model is obtained by combining the 
scores for each intervention on each criterion with the 
weights for each criterion: 
V(A_i) = Σ_j w_j · a_ij 
where w_j denotes the relative weight of importance of the criterion C_j and a_ij is the performance value of alternative A_i when it is evaluated in terms of criterion C_j.
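A minimal sketch of this weighted sum aggregation; the weights and performance values below are invented for illustration:

```python
import numpy as np

weights = np.array([0.40, 0.35, 0.25])   # w_j: relative importance of criterion C_j
performance = np.array([
    [70, 50, 90],    # a_1j: performance of alternative A1 on each criterion
    [60, 80, 40],    # a_2j
    [85, 55, 60],    # a_3j
])

total_value = performance @ weights      # V(A_i) = sum_j w_j * a_ij
for i, v in enumerate(total_value, start=1):
    print(f"V(A{i}) = {v:.1f}")

best = int(np.argmax(total_value)) + 1
print(f"Highest-value alternative: A{best}")
```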
Step | Description 
Decision problem | Problem structuring to establish the decision problem, i.e. identify objectives, alternatives and decision makers 
Identify criteria | Identify value criteria relevant to the decision problem 
Measure performance | Gather evidence on the performance of the alternatives on the criteria 
Weight criteria | Elicit the opinions of the stakeholders on the relative importance of different criteria or their preferences for criteria 
Performance scoring | Convert performance measures into scores that describe the desirability of achieving different levels of performance for each criterion 
Aggregation | Combine or ‘aggregate’ criteria scores and weights to estimate the overall value of an option 
Supporting decision making | Use the outputs from the MCDA exercise to support decision making
Stakeholder expert views and mission statements of the relevant decision makers, e.g. national/local directives 
• Key stakeholders – e.g. 
o Clinicians 
o Patients 
• Key national stakeholders – e.g. 
o Policy 
o Legislation 
o NICE 
• Elicitation of stakeholder values (e.g. focus groups or surveys) in other situations 
• Decision makers should construct or validate criteria
Methods vary from subjective judgment in the absence of data (e.g. expert clinical opinion) to rapid reviews to full systematic reviews and modeling 
Marsh K, Lanitis T, Neasham D, Orfanos P, Caro J. Assessing the Value of Healthcare Interventions Using Multi-Criteria Decision Analysis: A Review of the Literature. Pharmacoeconomics. 2014 DOI 10.1007/s40273-014-0135-0
Direct rating 
Likert, visual analogue scales (VAS) 
Simple MultiAttribute Rating Technique (SMART) 
Swing weighting 
Analytic Hierarchy Process (AHP) 
Indirect methods 
Discrete choice experiments (DCEs)/Conjoint analysis 
PAPRIKA 
Increasing complexity
Visual Analogue Scale 
Likert Scale
Assign the highest weighting to the criterion which the decision maker considers will lead to the most important change in outcomes, from worst to best case, for the available alternatives. Other weightings are compared to this and ranked accordingly. 
“How big is the difference, and how much do you care about it?” 
Zafiropoulos N, Phillips LD, Pignatti F, Luria X (2012). Evaluating benefit-risk: an Agency perspective. Regulatory Rapporteur 9(6): 5–8. ISSN 1742-8955 
Swing Weights 
This swing was judged to be larger… 
…and this one was judged to be 60% as much. 
Swing weights express the relevance of the criteria
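A minimal sketch of turning swing judgments into normalised weights: the criterion names and the 40-point rating are assumptions for illustration, while the 60% judgment echoes the example above:

```python
# Swing weighting: the worst-to-best swing judged most important gets 100 points;
# every other criterion's swing is rated relative to it, and points are then
# normalised so the weights sum to 1.
# The criteria and the 40-point rating are illustrative assumptions.
swing_points = {
    "efficacy": 100,   # reference swing, judged most important
    "safety": 60,      # judged to matter 60% as much as the efficacy swing
    "convenience": 40, # assumed rating for illustration
}

total = sum(swing_points.values())
weights = {c: pts / total for c, pts in swing_points.items()}
print(weights)  # {'efficacy': 0.5, 'safety': 0.3, 'convenience': 0.2}
```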
AHP – Pairwise Comparisons 
Saaty T (1977). A scaling method for priorities in hierarchical structures. Journal of Mathematical Psychology 15(3): 234–281. 
Saaty T (1980). The Analytic Hierarchy Process. New York: McGraw-Hill. 
• Make pairwise comparisons of attributes and alternatives 
• Ratio scale 
• Transform the comparisons into weights and check the consistency of the comparisons 
Scale of relative importance
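A minimal sketch of deriving AHP weights from a pairwise comparison matrix via the principal eigenvector, including Saaty's consistency check; the comparison judgments below are invented:

```python
import numpy as np

# Pairwise comparison matrix on Saaty's 1-9 ratio scale (judgments invented):
# entry [i, j] = how many times more important criterion i is than criterion j.
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 2.0],
    [1/5, 1/2, 1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)              # principal eigenvalue
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                 # normalise weights to sum to 1

n = A.shape[0]
lambda_max = eigvals.real[k]
CI = (lambda_max - n) / (n - 1)          # consistency index
RI = 0.58                                # Saaty's random index for n = 3
CR = CI / RI                             # consistency ratio; < 0.1 is acceptable

print("weights:", np.round(weights, 3))
print("consistency ratio:", round(CR, 3))
```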
Understand the relative importance of the different criteria using stated preferences on hypothetical scenarios 
* http://help.matrixknowledge.com
Marsh K, Lanitis T, Neasham D, Orfanos P, Caro J. Assessing the Value of Healthcare Interventions Using Multi-Criteria Decision Analysis: A Review of the Literature. Pharmacoeconomics. 2014 DOI 10.1007/s40273-014-0135-0 
There are a number of different methods to determine the weights of attributes
Different methods 
Direct rating 
Category estimation 
Ratio estimation 
AHP 
Developing the form of value function (i.e. importance of different levels of criteria) e.g. bisection methods and indifference methods 
Intrinsically linked to the choice of the weighting approach 
Increasing complexity
• Value function v(x) assigns a number, i.e. a value, to each attribute level x. 
• Value describes the subjective desirability of the corresponding attribute level. 
• For example: 
[Example value curves omitted: value (0–1) vs size of the ice cream cone, and value (0–1) vs working hours per day.]
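A minimal sketch of two such value functions – one monotone (ice cream cone) and one peaked (working hours) – using linear interpolation between assessed points; the anchor values are invented:

```python
import numpy as np

# Monotone value function: a bigger ice cream cone is always better,
# with diminishing returns (anchor points invented for illustration).
cone_sizes = np.array([0.0, 1.0, 2.0, 3.0])     # scoops
cone_values = np.array([0.0, 0.6, 0.9, 1.0])    # value on a 0-1 scale

# Peaked value function: some working hours are good, too many are bad.
hours = np.array([0.0, 4.0, 8.0, 12.0, 16.0])
hour_values = np.array([0.0, 0.7, 1.0, 0.4, 0.0])

def value(x, levels, values):
    """Read the value of attribute level x off the assessed curve."""
    return np.interp(x, levels, values)

print(value(1.5, cone_sizes, cone_values))   # 0.75
print(value(10.0, hours, hour_values))       # 0.7
```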
Direct rating / category estimation method 
Direct rating: 
1) Rank the alternatives 
2) Give 100 points to the best alternative 
3) Give 0 points to the worst alternative 
4) Rate the remaining alternatives between 0 and 100 
Category estimation assigns values to “a small number of categories” in a similar manner to the direct rating method: 
Give 100 points to the best category 
Give 0 points to the worst category 
Rate the remaining categories between 0 and 100 
Category | Salary range 
Poor | Less than £1500 
Satisfactory | £1500–2500 
Good | More than £2500
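A minimal sketch of category estimation applied to this salary example; the 50-point rating for “Satisfactory” is an assumed stakeholder judgment:

```python
# Category estimation for the salary criterion: the best category gets 100,
# the worst gets 0, and the remaining category is rated in between
# (the 50-point rating for "Satisfactory" is an assumed stakeholder judgment).
category_scores = {"Poor": 0, "Satisfactory": 50, "Good": 100}

def salary_category(salary_gbp: float) -> str:
    """Map a salary to its category, per the table above."""
    if salary_gbp < 1500:
        return "Poor"
    elif salary_gbp <= 2500:
        return "Satisfactory"
    return "Good"

for s in (1200, 2000, 3000):
    cat = salary_category(s)
    print(f"£{s}: {cat} -> score {category_scores[cat]}")
```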
• Define the value function by assessing the form of the function or by curve drawing 
• Needs input from the stakeholders 
• Values for different alternatives can be read from the value curve 
[Value curve omitted: value vs level of an attribute.]
Range of different methods 
Direct rating 
Category estimation 
Bisection 
Difference standard sequence 
Developing the form of value function 
And indirect methods… 
Intrinsically linked to the choice of the weighting approach 
Increasing complexity
Aggregation using weighted sum modelling 
Uncertainty needs to be taken into account
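One way to take uncertainty into account, as a sketch: propagate uncertainty in the elicited weights through the weighted sum by Monte Carlo simulation and report how often each alternative ranks first. The Dirichlet concentration is an illustrative assumption, and the performance matrix reuses the earlier invented example:

```python
import numpy as np

rng = np.random.default_rng(0)

performance = np.array([
    [70, 50, 90],   # alternative A1
    [60, 80, 40],   # A2
    [85, 55, 60],   # A3
])

# Uncertain weights: draw from a Dirichlet centred on the elicited weights
# (0.40, 0.35, 0.25); the concentration of 100 is an assumption.
alpha = 100 * np.array([0.40, 0.35, 0.25])
weight_draws = rng.dirichlet(alpha, size=10_000)   # each row sums to 1

values = weight_draws @ performance.T              # V(A_i) for every simulation
first_rank = np.argmax(values, axis=1)
for i in range(performance.shape[0]):
    p = (first_rank == i).mean()
    print(f"P(A{i + 1} ranks first) = {p:.2f}")
```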
Kevin Marsh, Evidera
Objective 
To propose a framework that can help researchers and decision makers distinguish and select between MCDA approaches 
Overview 
Summary of existing typologies 
Proposed synthesis of this literature for discussion 
Illustration 
Typology of approaches 
Characterizing different decisions
The current literature 
Includes many studies that discuss the advantages and disadvantages of MCDA approaches. 
But only a few that propose criteria for systematically understanding the advantages and disadvantages of MCDA approaches
It is doubtful whether a “best” MCDA method can be identified in general (De Montis et al, 2005) 
It is impossible to characterize all decision-making situations (DMS); there might exist as many DMS as there are decisions (Guitouni and Martel, 1998) 
All methods have their own assumptions and hypotheses, on which all their theoretical and axiomatic development is based – these are the frontiers beyond which the methods cannot be used (Guitouni and Martel, 1998)
Duckstein et al (1982) 
Consistency of results between methodologies 
Robustness of results with respect to changes in parameter values 
Ease of computation 
Hobbs et al (1992) 
Degree of comfort the users feel in using the methods 
Confidence users express in the methods 
Ability to help users understand the problem 
Ability to be valid – results consistent with the actual preferences of users 
Appropriateness and ease of use 
But (i) different methods would be expected to give different results, and (ii) assessing consistency requires a ‘true’ result against which to compare.
Non-sensitivity of outcomes to changes in parameter inputs is not the same as ‘robustness’
Weights explicitly determined or implicit?
Importance or trade off?
Qualitative, quantitative, fuzzy?
Guidelines to distinguish / select MCDA methods 
1. Preference elicitation method 
   a. Mode: direct weighting or trade-off? 
   b. Preference relation assumed: indifference, preference, incomparability 
2. Decision problem: ranking vs choice 
3. Data handled: (i) ordinal or cardinal; (ii) deterministic or non-deterministic 
4. Theoretical assumptions: independence, comparability, transitivity
Decision problem 
Criterion 1: What is the decision maker’s objective – to rank options or to measure their value?
Criterion 2: Time and resources available 
- Amount of data required by the method? 
- Collection mode: survey or workshop?
Criterion 3: Cognitive burden imposed on the DM – nature and amount of data required 
Criterion 4: Problem-solving process 
4a. Break down the problem into components 
4b. Allow knowledge sharing
Criterion 5: Do the method’s assumptions about the nature of preferences correspond with the DM’s preference structure? 
5a. Do DMs accept that criteria are comparable? 
5b. Do DMs have linear or non-linear preferences?
Decision problem 
Demands on participants 
Decision makers’ preferences 
Theoretical requirements 
Practical constraints 
Criterion 6: Does the method meet the theoretical requirements of the DM’s objectives?
Value measurement or outranking approaches? 
[Comparison table omitted: value measurement vs outranking approaches rated against Criteria 1–6 – decision type, time/resource demands, cognitive effort, problem breakdown, knowledge sharing, incomparable criteria, non-linear preferences, theoretical requirements.]
Which value measurement approach? 
[Comparison table omitted: direct rating, AHP, swing weighting and DCE rated against the same Criteria 1–6.]
[Comparison table omitted: HTA, authorisation and SDM decision contexts rated against the same Criteria 1–6.]
TBC – perhaps we could decide these in our meeting on Monday morning?
Objective: associate a real number with each alternative in order to produce a preference ordering consistent with DMs’ value judgements 
Often divided into two elements: 
1. Partial value functions 
2. Aggregation using weights 
[Diagram omitted: two 0–100 partial value scales – Criterion 1 with levels A and B (B − A = 100) and Criterion 2 with levels X and Y (X − Y = 50).]
Requires 2 assumptions 
1. Weights are scaling constants, or trade-offs 
[Diagram omitted: interventions a and b plotted on the two partial value scales (a = 70, b = 70; b = 55, a = 40).] 
Stakeholder is no worse off moving from intervention a to intervention b
2. Interval scale property – equal increments in value on a partial value function should represent equal trade-offs with the other criterion 
[Diagram omitted: values v1–v5 marked on the two partial value scales.] 
If v1 − v2 = v2 − v3 and v1 − v2 = v4 − v5, then v2 − v3 = v4 − v5.
1. Direct rating: How important is outcome i? 
2. AHP: How much more important is outcome i vs outcome j? 
3. Not obvious that importance ratios expressed in this way correspond to the meaning of the weight parameter in the model 
4. People express such importance ratios in a context-free way (regardless of the magnitude of change on the criterion)
Swing weighting 
DCE
