Expert Review of Pharmacoeconomics & Outcomes Research
Publication details, including instructions for authors and subscription information: http://www.tandfonline.com/loi/ierp20

Published online: 03 Jul 2015.

To cite this article: Ben FM Wijnen, Inge M van der Putten, Siebren Groothuis, Reina JA de Kinderen, Cindy YG Noben, Aggie TG Paulus, Bram LT Ramaekers, Gaston CWM Vogel & Mickael Hiligsmann (2015) Discrete-choice experiments versus rating scale exercises to evaluate the importance of attributes, Expert Review of Pharmacoeconomics & Outcomes Research, 15(4), 721-728

To link to this article: http://dx.doi.org/10.1586/14737167.2015.1033406
Discrete-choice experiments versus rating scale exercises to evaluate the importance of attributes

Expert Rev. Pharmacoecon. Outcomes Res. 15(4), 721-728 (2015)

Ben FM Wijnen*(1-3), Inge M van der Putten(1,2), Siebren Groothuis(1,2), Reina JA de Kinderen(1-3), Cindy YG Noben(1,2), Aggie TG Paulus(1,2), Bram LT Ramaekers(4), Gaston CWM Vogel(1,4) and Mickael Hiligsmann(1,2)

1 CAPHRI, Research School for Public Health and Primary Care, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands
2 Department of Health Services Research, Maastricht University, PO Box 616, 6200 MD Maastricht, The Netherlands
3 Department of Research and Development, Epilepsy Centre Kempenhaeghe, PO Box 61, 5590 AB Heeze, The Netherlands
4 Department of Clinical Epidemiology and Medical Technology Assessment, Maastricht University Medical Centre, PO Box 5800, 6202 AZ Maastricht, The Netherlands

*Author for correspondence: Tel.: +31 433 882 294; Fax: +31 433 884 162; b.wijnen@maastrichtuniversity.nl
Aim: To examine the difference between discrete-choice experiments (DCE) and rating scale
exercises (RSE) in determining the most important attributes using a case study. Methods:
Undergraduate health sciences students were asked to complete a DCE and a RSE. Six
potentially important attributes were identified in focus groups. Fourteen unlabelled choice
tasks were constructed using a statistically efficient design. Mixed multinomial logistic
regression analysis was used for DCE data analysis. Results: In total, 254 undergraduate
students filled out the questionnaire. In the DCE, only four attributes were statistically
significant, whereas in the RSE, all attributes except one were rated four or higher.
Conclusion: Attribute importance differs between DCE and RSE. The DCE had a
differentiating effect on the relative importance of the attributes; however, determining
relative importance using DCE should be done with caution as a lack of statistically significant
difference between levels does not necessarily imply that the attribute is not important.
KEYWORDS: discrete choice experiment • Likert scale • preferences • rating scale • relative importance of attributes
Eliciting preferences has become increasingly important in healthcare [1]. Understanding preferences can be very informative for both policy and clinical decisions. Due to the excessive and increasing demand for healthcare and the limited resources available, decision makers have to make choices on the allocation of scarce resources among competing alternatives. Over the years, weighing public opinion in these decisions has become more prominent [2]. Understanding the preferences of patients and incorporating them in clinical decisions could also lead to improved adherence and outcomes [2].
The most common way to measure preferences in healthcare is using stated preference (SP) methods. SP methods involve the elicitation of responses to predefined alternatives, in which people hypothetically state their preferences, as opposed to revealed preference methods, in which preferences are observed in real life. Three broad categories of SP methods have been distinguished: ranking, rating scale exercises (RSE), and choice-based approaches [3]. In ranking exercises, respondents are given a pre-specified set of alternatives, which they are asked to rank from 'most preferable' to 'least preferable'. In RSE, respondents are typically asked to rate each alternative on a Likert scale with a pre-specified range (e.g., 1-7). The third category is choice-based approaches, including discrete-choice experiments (DCE), which involve a series of pairwise choice tasks regarding hypothetical scenarios, in which respondents are asked each time to select their preferred scenario. Ranking exercises are also popular due to their relative ease of administration and analysis [1], but they fail to provide a measure of the strength of preferences [4]. Our study was therefore designed to compare RSE with DCE.
Both the RSE and the DCE can be used to evaluate the importance of different aspects of health and healthcare. The DCE is based on well-tested theories of choice behavior, namely random utility theory (RUT) [5] and economic theory [3], whereas the RSE has little theoretical basis. RUT proposes that there is a latent (i.e., unobservable) construct called 'utility', which is present in individuals for each choice alternative. According to RUT, this utility can be divided into two components: an explainable component and an unexplainable, random component. The explainable component consists of attributes of the actual choice task and covariates of the individual, whereas the random component consists of all unidentified factors ('error terms'). Hence, in DCEs, a utility function is used to determine individuals' preferences. In addition, economic theory proposes that utilities are stable over time and that each individual will eventually try to maximize his or her utility function. In DCEs, individuals are expected to make trade-offs within a resource constraint, and these decision-making processes are assumed to conform to the assumptions of these theories and of rational choice [6].
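In formula form, a standard RUT decomposition (stated here for clarity; the notation is ours rather than taken verbatim from the authors) writes the utility of alternative j for individual i as

U_ij = V_ij + ε_ij, with V_ij = β′x_ij,

where x_ij collects the attribute levels (and covariates), β their weights, and ε_ij the random component; the probability that alternative j is chosen from a choice set C is then P(U_ij ≥ U_ik for all k in C).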
Although respondents in a RSE are not typically asked to make choices within a resource constraint, individuals are expected to value each step on the scale equally (e.g., an increase from 1 to 2 points is valued the same as an increase from 3 to 4 points). Moreover, in contrast to DCEs, RSEs more typically use a holistic approach, in which respondents evaluate attributes as a discrete whole [6]. Hence, a well-known problem with the RSE is that it does not explicitly capture the trade-off between attributes [7].
The DCE is considered to better reflect actual decision making: it allows for the estimation of overall preferences for any combination of attributes and has been shown to be one of the most sensitive methods for eliciting preferences [3,6]. An increased use of DCEs has been observed in the past few years [8]. However, as a DCE is a complex method requiring more cognitive effort, it remains convenient for decision makers and investigators to use other methods, such as a RSE. A RSE is less cognitively demanding and easier to conduct, but has demonstrated limitations in accounting for the strength of preferences [3].
Few studies have compared these techniques for preference elicitation, with differing results. Although Bridges et al. [9] reported a high level of concordance between DCE and RSE, other studies have suggested that different elicitation methods can lead to different results [6,10,11]. For example, Pignone et al. [11] reported that a DCE produced somewhat different patterns of attribute importance than a rating task.
In this study, we assessed the preferences of undergraduate students in choosing a study orientation/speciality. Given the prominence of assessing the relative importance of attributes inside and outside the healthcare sector, and the need for more comparisons of methods, our primary aim was to examine the difference between a DCE and a RSE in determining the (relative) importance of attributes regarding the choice of orientation in the second or third year of the undergraduate program. A secondary aim was to assess possible ordering effects of the questionnaire on both the DCE and the RSE.
Methods
In this study, a comparison was made between preference elicitation using a RSE and a DCE, examining the preferences of undergraduate health sciences students for the selection of a study specialization (bachelor orientation).
Case description
The Health Sciences program at Maastricht University is a 3-year bachelor program covering a wide range of disciplines within healthcare. The introductory year is broad and multidisciplinary, covering the entire field of health and healthcare, including behavioral, environmental, social, and biological aspects. In the second year, students have to specialize by choosing one of four tracks: Policy, Management and Evaluation of Health Care; Biology and Health; Mental Health Sciences; and Prevention and Health.

Maastricht University is renowned for its use of Problem-Based Learning, in which students work in small tutorial groups (~10-12 students), looking for practical solutions to real-world problems guided by a tutor (i.e., a member of the academic staff) [12,13].
Attributes & levels
Several sources were combined to identify all relevant attributes used both for the RSE and the DCE. First, mandatory online reflection forms on the choice of bachelor orientation from a random selection of 30 first-year students were studied to identify important aspects of the decisional process. Second, two focus groups were organized with second-year students (n = 8 and n = 11), guided by semi-structured questions based on the student reflections. Third, a meeting with educational experts (n = 3) was organized to review all identified attributes, check whether any important attributes were missing, and reach consensus on the final selection of attributes.
To reduce the cognitive burden resulting from too many attributes, six final attributes on the hypothetical choice for a bachelor orientation were determined: 'possible acquainted masters', specified as the number of master programs that are strongly related to the bachelor orientation, resulting in enhanced eligibility for those master programs; 'job opportunity', specified as the percentage of graduated students who found a job in the field of the bachelor orientation within 12 months of graduation; 'scope of orientation', specified as the bachelor orientation being more or less multidisciplinary; 'quality of education', specified as the overall quality score of education within the bachelor orientation given by former students; 'hours of self-study', specified as the self-reported hours of self-study within the bachelor orientation by former students; and 'correspondence with personal interests', specified as the extent to which the bachelor orientation corresponds to the personal interests of a participant. Other attributes were considered less relevant or were left out because of commonality between attributes (i.e., 'information provision' and 'relation between theory and practice'). For each attribute, several levels were defined using expert opinion (for 'personal interests' and 'possible acquainted masters'), evaluations of graduated students (for 'hours of self-study', 'scope of orientation', and 'quality of education'), and follow-up data of graduated students (for 'job opportunity'). The final list of attributes and their corresponding levels was constructed in agreement with experts in the field of education.
Design of DCE & RSE
The questionnaire was constructed using the software Ngene (version 1.1.1 [14]). A fractional factorial (Bayesian efficient) design was used, as a full factorial design would result in 486 hypothetical scenarios (five attributes with three levels and one attribute with two levels: 3^5 × 2^1 = 486).
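To make the arithmetic concrete, a minimal Python sketch (attribute names abbreviated here for illustration; levels as in TABLE 1) enumerates the full factorial and confirms the count:

```python
from itertools import product

# Attribute levels as listed in TABLE 1; the dictionary keys are
# illustrative abbreviations, not the labels used in the questionnaire.
levels = {
    "acquainted_masters": ["limited", "some", "many"],
    "job_opportunity_pct": [40, 60, 80],
    "quality_of_education": [6, 7, 8],
    "self_study_hours": [12, 15, 18],
    "personal_interests": ["low", "some", "high"],
    "scope_of_orientation": ["multidisciplinary", "specific"],
}

profiles = list(product(*levels.values()))
print(len(profiles))  # 3**5 * 2**1 = 486 hypothetical scenarios
```

Pairing even a fraction of these 486 profiles into two-alternative choice tasks would quickly become infeasible for respondents, which is why an efficient fractional design was generated instead.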
To generate a Bayesian efficient design, a pilot questionnaire was distributed among 10 first-year undergraduate students to obtain prior distributions of likely parameter values (e.g., the beta coefficients in the regression analysis). Furthermore, students were asked to reflect on the questionnaire in terms of comprehension, completeness, and description of attributes and levels. Based on their comments, the final questionnaire was adjusted. Using the prior distributions of likely parameters, a statistically efficient design, which minimizes the D-error (illustrated below), was generated. Bayesian efficient experimental designs maximize the precision of the estimated parameters for a given number of choice sets [15]. In this study, 14 choice sets were generated. Within each choice task, students were asked to choose between two scenarios. No opt-out option was included, to force students to make a choice and hence a trade-off between attributes.
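For intuition, the D-error that such design software minimizes can be computed for any candidate design under the multinomial logit model. The sketch below is a simplification with illustrative function and variable names (and the non-Bayesian variant, evaluated at point priors), not the Ngene implementation:

```python
import numpy as np

def mnl_d_error(design, priors):
    """D-error of a candidate choice design under a multinomial logit model.

    design: array (n_choice_sets, n_alternatives, n_parameters) of coded
            attribute levels; priors: pilot-based parameter values (betas).
    """
    k = len(priors)
    info = np.zeros((k, k))
    for x in design:                    # x: (J, K) for one choice set
        u = x @ priors
        p = np.exp(u - u.max())
        p /= p.sum()                    # MNL choice probabilities
        # Fisher information contribution: X' (diag(p) - p p') X
        info += x.T @ (np.diag(p) - np.outer(p, p)) @ x
    # Determinant of the variance-covariance matrix, normalized by K
    return np.linalg.det(np.linalg.inv(info)) ** (1.0 / k)
```

A Bayesian efficient (Db-error) design averages this quantity over draws from the prior distribution of the parameters rather than evaluating it at a single point; the design search keeps the candidate set of choice tasks with the lowest value.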
To measure the (in)consistency of students' decisions, a 'dominant pair' comparison was added, in which three attributes of one scenario were assumed to be preferred (high [80%] vs low [40%] job opportunity, high [8 out of 10] vs low [6 out of 10] quality of education, and high vs low correspondence with personal interests) while the other levels were identical across the alternatives, and one choice set was repeated at the end of the questionnaire (test–retest exercise). Hence, the total questionnaire included 16 choice sets, which is in line with other DCEs and has been shown to be cognitively acceptable [8,16].
In the RSE, participants were asked to rate the importance
of each attribute on a Likert scale ranging from 1 = ‘Attribute
is not important at all’ to 7 = ‘Attribute is very important’.
Outline of questionnaire
The questionnaire started with a short introduction to the nature of the study, followed by a thorough description of the attributes and levels. To clarify the DCE, an example of a completed choice set was provided. To account for possible ordering effects, two versions were constructed. In version 1, participants were given a short introduction to the items and levels, followed by the DCE, with questions regarding background information and the RSE at the end. In version 2, participants were given a short introduction to the items and levels, followed by questions regarding background information and the RSE, with the DCE placed at the end of the questionnaire.
Data collection & participants
The study was conducted at Maastricht University, the Netherlands, among all first-year health sciences students (n = 267). No sample size calculation was performed, as such calculations are particularly difficult for DCEs [17]. However, our sample is in line with findings from Marshall et al. [18], who reported that the mean sample size for conjoint analysis studies in health care published between 2005 and 2008 was 259, with nearly 40% of sample sizes in the range of 100-300 respondents.

Data were collected in May 2014. Questionnaires were distributed during tutorial sessions, for which the tutor had been informed about the procedures described in the questionnaire. Both versions of the questionnaire were equally distributed among tutorial groups (version 1 in groups 1-12 and version 2 in groups 13-24); students are randomly assigned to tutorial groups by the education office. Undergraduate students were asked to complete the questionnaire and return it to the tutor. The average time to complete the questionnaire was 10-15 min. An example of a DCE choice set and a RSE question is given in FIGURE 1.
Data analyses
DCE data were analyzed using Nlogit version 5 (Econometric
Software, Inc.). Data from undergraduate students who completed fewer than five choice sets or RSE questions were excluded. Remaining missing values were handled using list-wise deletion. In addition, students who failed the dominance test were excluded from the analyses.

[Figure 1. Example of a choice set and of a rating scale exercise question. The choice set presents two hypothetical bachelor orientations (Method A and Method B), each described on the six attributes, and asks 'Which option do you prefer?'. The rating question asks, on a scale from 1 ('not important') to 7 ('very important'), how important each characteristic is in general when choosing a bachelor orientation.]
RSE data were analyzed using SPSS 21 (IBM, Inc.). For the DCE, a panel mixed multinomial logit model was used to determine the effect of the attribute levels on students' preferences. The mixed multinomial logit model allows for possible heterogeneity across respondents and accounts for the panel nature of the data [15,19]. This model is based on the assumption that parameters are randomly distributed in the population, with heterogeneity captured by estimating the standard deviation of the parameters [20].
The following model was estimated:

V_ij = (β1 + η1i) × SOME_PAM + (β2 + η2i) × MANY_PAM + (β3 + η3i) × JO + (β4 + η4i) × SO + (β5 + η5i) × QE + (β6 + η6i) × HSS + (β7 + η7i) × SOME_PI + (β8 + η8i) × HIGH_PI + ε_ij,

where V_ij represents the observable relative utility of student i for scenario j, β1-β8 are coefficients of the attributes indicating the relative weight placed on the attributes, and η_i represents the standard deviation of the random parameter for student i. Finally, η_i + ε_ij captures the individual-specific unexplained variance around the mean.
Dummy coding was used to describe all categorical variables; base-case levels can be found in TABLE 1. As a sensitivity analysis, effects coding was used to examine the impact of coding on the results. All parameters were included as random parameters, and each attribute was assumed to be normally distributed. The estimation was conducted using 2000 Halton draws. Model fit was assessed using the log-likelihood and McFadden's pseudo-R². Interactions between attributes were tested, and a subgroup analysis between the two versions of the questionnaire was performed to check for ordering effects.
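As an illustration of what such an estimation involves (a minimal numpy sketch, not the Nlogit implementation; the data shapes and names are assumptions made for illustration), the simulated log-likelihood that a panel mixed logit with normally distributed coefficients maximizes can be written as:

```python
import numpy as np
from scipy.stats import norm, qmc

def simulated_loglik(mean, sd, X, y, n_draws=200):
    """Simulated log-likelihood of a panel mixed logit with normally
    distributed coefficients (illustrative sketch).

    X: (N, T, J, K) coded attribute levels per person, task, alternative
    y: (N, T) index of the chosen alternative in each task
    mean, sd: (K,) mean and standard deviation of the random coefficients
    """
    n, t, j, k = X.shape
    # Quasi-random Halton draws mapped to standard normals
    z = norm.ppf(qmc.Halton(d=k, scramble=True).random(n * n_draws))
    z = z.reshape(n, n_draws, k)
    ll = 0.0
    for i in range(n):
        betas = mean + sd * z[i]                    # (n_draws, K) coefficient draws
        u = np.einsum("tjk,rk->rtj", X[i], betas)   # utilities per draw/task/alt
        p = np.exp(u - u.max(axis=2, keepdims=True))
        p /= p.sum(axis=2, keepdims=True)           # logit choice probabilities
        chosen = p[:, np.arange(t), y[i]]           # probability of observed choices
        # Panel structure: multiply over a person's tasks, then average the draws
        ll += np.log(chosen.prod(axis=1).mean())
    return ll
```

Maximizing this function over the means and standard deviations (e.g., with scipy.optimize.minimize on its negative) yields estimates of the kind reported in TABLE 2; McFadden's pseudo-R² then compares the fitted log-likelihood with that of a null model (1 − LL/LL0).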
To determine the relative importance of the attributes valued using the DCE, relative importance weights were calculated using the method described by Malhotra and Birks [21]. In short, this method assumes that the range of the level coefficients within an attribute represents the relative importance of that attribute; the resulting percentage is the percentage of the explained variance around the choice decision that is attributable to the respective attribute (a numerical sketch follows this paragraph). In the RSE, the importance of the attributes was calculated using the mean values expressed on the Likert scale by the participants. Finally, a comparison was made between the ranking of attributes according to the DCE and according to the RSE. Furthermore, we examined whether the results of both the DCE and the RSE differed between the two versions of the questionnaire. To compare the mean ratings for each attribute within the RSE, paired-samples t-tests were performed; to compare attributes between RSE version 1 and version 2, independent-samples t-tests were performed.
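As a rough illustration of the Malhotra and Birks calculation (a sketch using the mean coefficients of TABLE 2, with continuous attributes multiplied by the span of their levels; because the published weights were derived from the full model, these shares only approximate the percentages reported in TABLE 3):

```python
# Range of part-worth utility within each attribute, using TABLE 2 means;
# continuous coefficients are scaled by the span of the levels in TABLE 1.
ranges = {
    "personal_interests": 4.747,           # HIGH_PI vs reference (low)
    "job_opportunity": 0.052 * (80 - 40),  # per-% coefficient x 40-point span
    "quality_of_education": 0.596 * (8 - 6),
    "acquainted_masters": 0.512,           # MANY_PAM vs reference (limited)
    "scope_of_orientation": 0.094,
    "self_study_hours": 0.001 * (18 - 12),
}
total = sum(ranges.values())
for attribute, r in sorted(ranges.items(), key=lambda kv: -kv[1]):
    # Share of the explained variance around the choice attributable to it
    print(f"{attribute:22s} {100 * r / total:5.1f}%")
```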
Results
A total of 254 (95.1%) undergraduate students completed the questionnaire, of whom 82.6% were female. The mean age was 19.7 years, with the youngest student being 17 and the oldest 30 years old. Version 1 was completed by 122 students (48%) and version 2 by 132 students (52%). Regarding the consistency tests, participants chose the dominant scenario in the 'dominant pair' comparison 99.2% of the time, and the test–retest exercise was successfully repeated 79.9% of the time.
Results of the DCE
Four of the six attributes had a significant influence on the choice for a bachelor orientation (TABLE 2).
Table 1. Attributes and levels for bachelor orientation.

| Attributes | Levels | Regression coefficient |
|---|---|---|
| Possible acquainted masters | Limited (1 or 2 acquainted master programs) | (Reference level) |
| | Some (3 to 5 acquainted master programs) | β1 |
| | Many (more than 5 acquainted master programs) | β2 |
| Job opportunity† | 40% of graduated students found a job within the respective field | β3 |
| | 60% of graduated students found a job within the respective field | |
| | 80% of graduated students found a job within the respective field | |
| Scope of orientation | Multidisciplinary bachelor orientation but less deepening | (Reference level) |
| | Specific bachelor orientation but more deepening | β4 |
| Quality of education† | 6 out of 10 | β5 |
| | 7 out of 10 | |
| | 8 out of 10 | |
| Hours of self-study† | 12 h per week | β6 |
| | 15 h per week | |
| | 18 h per week | |
| Correspondence with personal interests | Low correspondence with personal interests | (Reference level) |
| | Some correspondence with personal interests | β7 |
| | High correspondence with personal interests | β8 |

†Estimated as a continuous variable within the mixed multinomial logit model.
Although there was no significant difference between some and many acquainted master programs, a limited number of acquainted master programs negatively influenced the choice for a bachelor orientation. Furthermore, increases in job opportunity (in %), in quality of education, and in correspondence with personal interests were all associated with a higher preference for the respective bachelor orientation. Finally, the scope of the orientation and the hours of self-study did not significantly influence decision making.
Looking at the relative importance of the attributes, 'correspondence with personal interests' had the largest impact on participants' preferences (51.5%; TABLE 3). 'Job opportunity' had the second largest impact on participants' preferences (22.5%), followed by 'quality of education' (19.4%), 'possible acquainted master programs' (5.5%), 'scope of orientation' (1.0%), and 'hours of self-study' (0.1%). The use of effects coding only marginally impacted the relative importance weights and did not impact the ranking of the attributes.
Results of the RSE
Based on the mean values elicited in the RSE, an importance ranking of the attributes was constructed (TABLE 3). 'Correspondence with personal interests' had the highest score (mean 6.5). The second highest-valued attributes were 'scope of orientation' (5.1) and 'job opportunity' (5.0), followed by 'quality of education' (4.5), 'possible acquainted masters' (4.3), and 'hours of self-study' (3.3).
Comparison of DCE & RSE
A comparison of the importance rankings of the attributes based on the DCE and the RSE reveals some dissimilarities (TABLE 3). In the DCE, four attributes were statistically important for participants when making a decision, whereas in the RSE, all attributes except 'hours of self-study' were rated 4 or higher. Although in both methods respondents expressed a clear preference for 'correspondence with personal interests', the importance of the attribute 'scope of orientation' varied strongly between the RSE (regarded as second most important) and the DCE (regarded as second to last). According to the DCE, the difference in levels of 'scope of orientation' did not significantly impact students' choice, whereas in the RSE it was regarded as second most important. Although 'scope of orientation' was not statistically significant, statistically significant heterogeneity was observed for this attribute, meaning that, although on average no difference was observed between levels, some students preferred a multidisciplinary bachelor orientation at the cost of in-depth study materials while other students preferred a less multidisciplinary bachelor orientation with more in-depth study materials. The ranking of the other attributes was similar between both methods.
Table 2. Results from mixed multinomial logit model illustrating influence of attributes on utility.

| Attributes | Coefficient (95% CI) | Standard deviation (95% CI) |
|---|---|---|
| Possible acquainted masters† | | |
| SOME_PAM | 0.495 (0.290, 0.701)‡ | 0.155‡ (−0.364, 0.675) |
| MANY_PAM | 0.512 (0.325, 0.699) | 0.444‡ (0.148, 0.741) |
| Job opportunity (per %) | 0.052 (0.044, 0.060)‡ | 0.035‡ (0.028, 0.042) |
| Scope of orientation | 0.094 (−0.116, 0.199) | 0.656‡ (0.453, 0.859) |
| Quality of education (per point) | 0.596 (0.491, 0.701)‡ | 0.293‡ (0.138, 0.447) |
| Hours of self-study (per hour) | 0.001 (−0.023, 0.023) | 0.024 (−0.056, 0.104) |
| Correspondence with personal interests | | |
| SOME_PI | 2.002 (1.723, 2.280)‡ | 0.502§ (0.210, 0.795) |
| HIGH_PI | 4.747 (4.190, 5.305)‡ | 1.317‡ (0.993, 1.642) |
| Log likelihood | −1383.10 | |
| Pseudo R-squared | 0.43 | |
| Number of observations | 3528 | |
| Number of individuals | 252 | |

Table represents β-coefficients from the mixed multinomial logit model. The regression coefficients represent the mean part-worth utility of that attribute in the respondent sample.
†Reference level is 'limited (1 or 2 acquainted master programs)'.
‡Significant at the 1% level.
§Significant at the 5% level.
Table 3. (Relative) importance ranking of attributes based on discrete-choice experiments and rating scale exercises.

| (Relative) importance ranking based on discrete-choice experiments (% impact on choice) | Importance ranking based on rating scale exercises (mean value on a 7-point Likert scale)† |
|---|---|
| Correspondence with personal interests (51.5%) | Correspondence with personal interests (6.5) |
| Job opportunity (22.5%) | Scope of orientation (5.1)‡ |
| Quality of education (19.4%) | Job opportunity (5.0)‡ |
| Possible acquainted masters (5.5%) | Quality of education (4.5) |
| Scope of orientation (1.0%) | Possible acquainted masters (4.3) |
| Hours of self-study (0.1%) | Hours of self-study (3.3) |

†Significance of the attributes is based on paired-samples t-tests between attributes within the aggregated data of versions 1 and 2 of the rating scale exercises.
‡No significant difference between these attributes at the 5% level based on paired-samples t-test.
Results of the DCE were not significantly affected by the order of the RSE (i.e., version 1 vs version 2). However, results of the RSE were significantly different between the two versions (TABLE 4). Valuations of 'scope of orientation' (4.7), 'possible acquainted master programs' (4.0), and 'hours of self-study' (3.1) were significantly different between the two versions, leading to a different ranking of the attributes. These attributes were also indicated as being the three least important attributes in the DCE.
Discussion
This study examined the difference between a DCE and a RSE in determining the most important attributes, using the choice of study orientation by undergraduate students as a case study. Our study suggests that attribute importance resulting from a DCE and a RSE can differ. The DCE had a differentiating effect on the relative importance of attributes, whereas in the RSE, attributes were rated more equally and were, except for 'hours of self-study', all considered important. However, as the RSE did not involve a trade-off, one should be careful when interpreting the importance ranking based on the RSE, as participants were not forced to choose between attributes. Our results likely highlight the lack of discriminative power of a RSE. In addition, an ordering effect was observed between questionnaire versions: RSE scores were significantly different depending on whether the RSE was administered before or after the DCE. The placement of the RSE did not significantly impact the results of the DCE.
Our findings are consistent with previous literature. Pignone et al. [11] also showed different patterns of attribute importance between choice-based conjoint analyses and rating tasks in the choice for colorectal cancer screening. Phillips et al. [6] found differences in how respondents valued certain attributes and showed variations in how different attribute levels were valued between both methods in their preference for HIV tests. However, Bridges et al. [9] revealed high levels of concordance between DCE and RSE for preference elicitation in hearing care.
Given the increasing use of stated preference methods, our results provide insights for future research on preference elicitation. First, it can be concluded that RSE and DCE can result in different relative importance rankings of attributes. Second, our findings support the statement that RSE outcomes should be interpreted with caution, as respondents in a RSE tend to rate all attributes more equally and as (relatively) important. The higher discriminative power of DCEs in comparison with RSE must be taken into account when one intends to use a RSE to elicit preferences. Third, the relative importance of attributes derived from a DCE should be interpreted carefully: although one attribute ('scope of orientation') was not significant in the DCE, the RSE revealed a high importance for this attribute. Fourth, the DCE was shown to be more robust to ordering effects, as the order in which the RSE and DCE were presented (RSE before or after the DCE) did not significantly impact the results of the DCE. Finally, the mixed multinomial logit model revealed heterogeneity of preferences within an attribute ('scope of orientation'), indicating large differences in preference between respondents, despite no difference being observed at the group level.
The decision of which method to use in a particular circumstance is not straightforward. The assumptions underlying both the RSE and the DCE are not always as robust as they seem. In a RSE, individuals are supposed to value the space between each response option equally. However, Johnson et al. [10] found that individuals' valuations, as measured with a conjoint analysis, did not correspond with an equally spaced, linear Likert scale. In addition, the RSE is prone to end-of-scale bias. Regarding the DCE, in our study 20.1% of the participants did not successfully complete the test–retest exercise, which violates some of the assumptions of economic theory (e.g., the stability of preferences over time). In addition, focus groups conducted by Phillips et al. [6] demonstrated that participants often focused only on key attributes or used a 'threshold' approach in making choices (i.e., price is only relevant when it is above a certain threshold), instead of making a trade-off between all attributes. Furthermore, it became apparent that individuals were sometimes frustrated by having to make difficult trade-offs; in short, the more complex the questions were, the more respondents used simplifying rules.
Table 4. Importance ranking of attributes based on rating scale exercises between version 1 (RSE after DCE) and version 2 (RSE before DCE).

| Importance ranking total RSE (n = 254) | Importance ranking version 1 (n = 122) | Importance ranking version 2 (n = 132) |
|---|---|---|
| Correspondence with personal interests (6.5) | Correspondence with personal interests (6.5) | Correspondence with personal interests (6.5) |
| Scope of orientation (5.1) | Job opportunity (5.0) | Scope of orientation (5.4)† |
| Job opportunity (5.0) | Scope of orientation (4.7)† | Job opportunity (5.0) |
| Quality of education (4.5) | Quality of education (4.4) | Quality of education (4.6) |
| Possible acquainted masters (4.3) | Possible acquainted masters (4.0)† | Possible acquainted masters (4.5)† |
| Hours of self-study (3.3) | Hours of self-study (3.1)† | Hours of self-study (3.6)† |

†Significant difference between versions at the 5% level, based on independent-samples t-tests.
DCE: Discrete-choice experiment; RSE: Rating scale exercise.
Building on Phillips et al. [6], our findings support the idea that RSEs are more likely to determine the attitude of respondents toward individual attributes (i.e., individuals might have a positive attitude toward all presented attributes; however, when making a decision, some attributes might be overlooked or implicitly ignored), whereas DCEs are more focused on preferences toward attributes when making a decision. In addition, the DCE more realistically resembles actual decision making, as it involves trade-offs. A RSE can nonetheless be used to elicit preferences. For example, Koedoot et al. [22] examined preferences for palliative chemotherapy, in which patients were asked to rate their preferences for having chemotherapy on a seven-point Likert scale; these preferences corresponded closely to patients' actual treatment choices. In general, however, if one is interested in prioritizing multiple aspects of health or healthcare (i.e., multiple attributes/characteristics), the DCE is the preferred method. The DCE is, however, not suited to examining the attitude of individuals toward multiple aspects of health or healthcare.
This study has some limitations. First, ranking and other types of choice-based approaches were not taken into account. Newer methods, such as best-worst scaling, have proven useful for gaining insight into relative importance [23]. More research is needed to compare all these methods. A head-to-head comparison should be done with caution, as DCE and RSE differ in presentation, framing, and methodology. Hence, effects of framing and fundamental differences between the methods (i.e., the RSE does not ask individuals to make a trade-off) are likely to induce distinct results. Second, it is reasonable to assume that the two methods serve different objectives (attitude vs preference elicitation). Third, DCEs have been regarded as more cognitively burdensome than other types of SP elicitation techniques [3]. As our sample consists of relatively young (mean age of 19.7 years) and highly educated participants, we expect that we have not encountered problems regarding the difficulty of the DCE. However, it is important to keep in mind that the results (and reliability) of a DCE might be influenced by the age and socioeconomic status of the participants and by the complexity of the DCE, owing to cognitive burden. Fourth, the method used to calculate the relative importance weights for the attributes of the DCE is not directly related to the significance of the coefficients; hence, one will derive importance weights regardless of statistical significance. However, when an attribute is not significant, in most cases it will have a low impact on the decision and hence a low coefficient and low relative importance. This can be seen in this study, as the attributes that did not have a significant impact on the decision have rather low relative importance weights. Finally, this study was conducted within the field of education. When transferring our results to other topics, such as healthcare, it is important to keep in mind that these topics could have an additional dimension that is not captured in this study: individually expressed preferences may be used for public decisions, impacting not only the individual respondent but potentially the entire society. More research should be done within the healthcare sector to verify our findings. However, we expect our results to be robust to such extra dimensions.
In conclusion, our study suggests that attribute importance can differ when using a DCE or a RSE. The DCE had a more differentiating effect on the relative importance of the attributes, but the interpretation of the relative importance of attributes from the DCE should be done with caution, as the absence of a significant difference between levels at the group level does not necessarily mean that the attribute is not important; some respondents may still have preferences for particular levels, as indicated by the amount of heterogeneity around the parameters.
Financial & competing interests disclosure
The authors have no relevant affiliations or financial involvement with
any organization or entity with a financial interest in or financial conflict
with the subject matter or materials discussed in the manuscript. This
includes employment, consultancies, honoraria, stock ownership or options,
expert testimony, grants or patents received or pending, or royalties.
No writing assistance was utilized in the production of this
manuscript.
Key issues
• In decision making, one should be aware that the determination of the (relative) importance of attributes differs between rating scale exercises and discrete-choice experiments.
• The relative importance of attributes valued using discrete-choice experiments should be interpreted carefully, as a lack of statistically significant difference between levels does not necessarily mean that the attribute is not important.
• Building on Phillips et al. [6], it is reasonable to assume that rating scale exercises are more likely to determine the attitude of respondents toward attributes, whereas discrete-choice experiments focus more on preferences toward attributes when making a decision.
References
1. Bridges J. Stated preference methods in health care evaluation: an emerging methodological paradigm in health economics. Appl Health Econ Health Policy 2003;2(4):213-24
2. Bridges JF, Hauber AB, Marshall D, et al. Conjoint analysis applications in health – a checklist: a report of the ISPOR good research practices for conjoint analysis task force. Value Health 2011;14(4):403-13
3. Ryan M, Scott DA, Reeves C, et al. Eliciting public preferences for healthcare: a systematic review of techniques. Health Technol Assess 2001;5(5):1-186
4. Shackley P, Ryan M. Involving consumers in health care decision making. Health Care Anal 1995;3(3):196-204
5. Louviere JJ, Flynn TN, Carson RT. Discrete choice experiments are not conjoint analysis. Journal of Choice Modelling 2010;3(3):57-72
6. Phillips KA, Johnson FR, Maddala T. Measuring what people value: a comparison of 'attitude' and 'preference' surveys. Health Serv Res 2002;37(6):1659-79
7. Srinivasan V, Netzer O. Adaptive self-explication of multi-attribute preferences. Research Paper 1979. Stanford University, Graduate School of Business; 2007
8. Clark M, Determann D, Petrou S, et al. Discrete choice experiments in health economics: a review of the literature. Pharmacoeconomics 2014;32(9):883-902
9. Bridges JF, Lataille AT, Buttorff C, et al. Consumer preferences for hearing aid attributes: a comparison of rating and conjoint analysis methods. Trends Amplif 2012;16(1):40-8
10. Johnson FR, Hauber AB, Osoba D, et al. Are chemotherapy patients' HRQoL importance weights consistent with linear scoring rules? A stated-choice approach. Qual Life Res 2006;15(2):285-98
11. Pignone MP, Brenner AT, Hawley S, et al. Conjoint analysis versus rating and ranking for values elicitation and clarification in colorectal cancer screening. J Gen Intern Med 2012;27(1):45-50
12. Moust J, Bouhuijs P, Schmidt H. Introduction to problem-based learning. In: Collaborative learning in the tutorial group. Taylor & Francis, Groningen, The Netherlands; 2007
13. Schmidt HG. Foundations of problem-based learning: some explanatory notes. Med Educ 1993;27(5):422-32
14. Choice Metrics. Ngene. Available from: www.choice-metrics.com/
15. Hensher DA, Rose JM, Greene WH. Applied choice analysis: a primer. Cambridge University Press; 2005
16. Bech M, Kjaer T, Lauridsen J. Does the number of choice sets matter? Results from a web survey applying a discrete choice experiment. Health Econ 2011;20(3):273-86
17. Ryan M, Gerard K. Using discrete choice experiments to value health care programmes: current practice and future research reflections. Appl Health Econ Health Policy 2003;2(1):55-64
18. Marshall D, Bridges JP, Hauber B, et al. Conjoint analysis applications in health – how are studies being designed and reported? Patient-Patient-Centered-Outcome-Res 2010;3(4):249-56
19. de Bekker-Grob EW, Hol L, Donkers B, et al. Labeled versus unlabeled discrete choice experiments in health economics: an application to colorectal cancer screening. Value Health 2010;13(2):315-23
20. Hiligsmann M, Dellaert BG, Dirksen CD, et al. Patients' preferences for osteoporosis drug treatment: a discrete-choice experiment. Arthritis Res Ther 2014;16:R36
21. Malhotra N, Birks D. Marketing research: an applied approach. 3rd European edition. In: Multidimensional scaling and conjoint analysis. Pearson Education, Edinburgh, England; 2007
22. Koedoot CG, de Haan RJ, Stiggelbout AM, et al. Palliative chemotherapy or best supportive care? A prospective study explaining patients' treatment preference and choice. Br J Cancer 2003;89(12):2219-26
23. Flynn TN, Louviere JJ, Peters TJ, Coast J. Best-worst scaling: what it can do for health care research and how to do it. J Health Econ 2007;26(1):171-89
More Related Content

What's hot

Meta analysis: Made Easy with Example from RevMan
Meta analysis: Made Easy with Example from RevManMeta analysis: Made Easy with Example from RevMan
Meta analysis: Made Easy with Example from RevManGaurav Kamboj
 
Mixed Method Research
Mixed Method ResearchMixed Method Research
Mixed Method ResearchABCComputers
 
Sr asummary shelisa thomas
Sr asummary   shelisa thomasSr asummary   shelisa thomas
Sr asummary shelisa thomasShelisa Thomas
 
演講-Meta analysis in medical research-張偉豪
演講-Meta analysis in medical research-張偉豪演講-Meta analysis in medical research-張偉豪
演講-Meta analysis in medical research-張偉豪Beckett Hsieh
 
Seminaar on meta analysis
Seminaar on meta analysisSeminaar on meta analysis
Seminaar on meta analysisPreeti Rai
 
Anova n metaanalysis
Anova n metaanalysisAnova n metaanalysis
Anova n metaanalysisutpal sharma
 
9-Meta Analysis/ Systematic Review
9-Meta Analysis/ Systematic Review9-Meta Analysis/ Systematic Review
9-Meta Analysis/ Systematic ReviewResearchGuru
 
Multimethod research
Multimethod researchMultimethod research
Multimethod researchsahughes
 
An investigation into the optimal number of distractors
An investigation into the optimal number of distractorsAn investigation into the optimal number of distractors
An investigation into the optimal number of distractorsDr Kusa Kumar Shaha
 
Research design: A Major too in Research Methods
Research design: A Major too in Research MethodsResearch design: A Major too in Research Methods
Research design: A Major too in Research MethodsDr S.Sasi Kumar Phd(N)
 
Systematic review intro for librarians
Systematic review intro for librariansSystematic review intro for librarians
Systematic review intro for librariansSon Nghiem
 
EBM Systematic Review Appraisal Template V1
EBM Systematic Review Appraisal Template V1EBM Systematic Review Appraisal Template V1
EBM Systematic Review Appraisal Template V1Imad Hassan
 
Anatomy of a meta analysis i like
Anatomy of a meta analysis i likeAnatomy of a meta analysis i like
Anatomy of a meta analysis i likeJames Coyne
 

What's hot (19)

Grading Strength of Evidence
Grading Strength of EvidenceGrading Strength of Evidence
Grading Strength of Evidence
 
Meta analysis: Made Easy with Example from RevMan
Meta analysis: Made Easy with Example from RevManMeta analysis: Made Easy with Example from RevMan
Meta analysis: Made Easy with Example from RevMan
 
Mixed Method Research
Mixed Method ResearchMixed Method Research
Mixed Method Research
 
When to Select Observational Studies Quiz
When to Select Observational Studies QuizWhen to Select Observational Studies Quiz
When to Select Observational Studies Quiz
 
Gaskin2014 sem
Gaskin2014 semGaskin2014 sem
Gaskin2014 sem
 
Sr asummary shelisa thomas
Sr asummary   shelisa thomasSr asummary   shelisa thomas
Sr asummary shelisa thomas
 
Mixed Method Research Methodology
Mixed Method Research MethodologyMixed Method Research Methodology
Mixed Method Research Methodology
 
演講-Meta analysis in medical research-張偉豪
演講-Meta analysis in medical research-張偉豪演講-Meta analysis in medical research-張偉豪
演講-Meta analysis in medical research-張偉豪
 
Seminaar on meta analysis
Seminaar on meta analysisSeminaar on meta analysis
Seminaar on meta analysis
 
Anova n metaanalysis
Anova n metaanalysisAnova n metaanalysis
Anova n metaanalysis
 
9-Meta Analysis/ Systematic Review
9-Meta Analysis/ Systematic Review9-Meta Analysis/ Systematic Review
9-Meta Analysis/ Systematic Review
 
Multimethod research
Multimethod researchMultimethod research
Multimethod research
 
An investigation into the optimal number of distractors
An investigation into the optimal number of distractorsAn investigation into the optimal number of distractors
An investigation into the optimal number of distractors
 
Research design: A Major too in Research Methods
Research design: A Major too in Research MethodsResearch design: A Major too in Research Methods
Research design: A Major too in Research Methods
 
Meta analysis
Meta analysisMeta analysis
Meta analysis
 
Systematic review intro for librarians
Systematic review intro for librariansSystematic review intro for librarians
Systematic review intro for librarians
 
EBM Systematic Review Appraisal Template V1
EBM Systematic Review Appraisal Template V1EBM Systematic Review Appraisal Template V1
EBM Systematic Review Appraisal Template V1
 
Quantitative Synthesis I
Quantitative Synthesis IQuantitative Synthesis I
Quantitative Synthesis I
 
Anatomy of a meta analysis i like
Anatomy of a meta analysis i likeAnatomy of a meta analysis i like
Anatomy of a meta analysis i like
 

Similar to 2015_Discrete choice experiments versus rating scale exercises to evaluate the importance of attributes

Reliability And Validity
Reliability And ValidityReliability And Validity
Reliability And ValidityCrystal Torres
 
Transitions in M&E of SBC Handout
Transitions in M&E of SBC HandoutTransitions in M&E of SBC Handout
Transitions in M&E of SBC HandoutCORE Group
 
What can discrete choice experiments do for youJennifer Cle.docx
What can discrete choice experiments do for youJennifer Cle.docxWhat can discrete choice experiments do for youJennifer Cle.docx
What can discrete choice experiments do for youJennifer Cle.docxhelzerpatrina
 
Introa-Morton-slides.pdf
Introa-Morton-slides.pdfIntroa-Morton-slides.pdf
Introa-Morton-slides.pdfShahriarHabib4
 
Evaluation of health services
Evaluation of health servicesEvaluation of health services
Evaluation of health serviceskavita yadav
 
RESEARCH Open AccessA methodological review of resilience.docx
RESEARCH Open AccessA methodological review of resilience.docxRESEARCH Open AccessA methodological review of resilience.docx
RESEARCH Open AccessA methodological review of resilience.docxverad6
 
Discussion 1 Leadership Theories in Practice.docx
Discussion 1 Leadership Theories in Practice.docxDiscussion 1 Leadership Theories in Practice.docx
Discussion 1 Leadership Theories in Practice.docxbkbk37
 
The Duty of Loyalty and Whistleblowing Please respond to the fol.docx
The Duty of Loyalty and Whistleblowing Please respond to the fol.docxThe Duty of Loyalty and Whistleblowing Please respond to the fol.docx
The Duty of Loyalty and Whistleblowing Please respond to the fol.docxcherry686017
 
Evidence for Public Health Decision Making
Evidence for Public Health Decision MakingEvidence for Public Health Decision Making
Evidence for Public Health Decision MakingVineetha K
 
Stakeholder Engagement in a Patient-Reported Outcomes Implementation by a Pra...
Stakeholder Engagement in a Patient-Reported Outcomes Implementation by a Pra...Stakeholder Engagement in a Patient-Reported Outcomes Implementation by a Pra...
Stakeholder Engagement in a Patient-Reported Outcomes Implementation by a Pra...Marion Sills
 
behaviour changes for success of antimicrobial stewardship program.pptx
behaviour changes for success of antimicrobial stewardship program.pptxbehaviour changes for success of antimicrobial stewardship program.pptx
behaviour changes for success of antimicrobial stewardship program.pptxPathKind Labs
 
Implementation Research: A Primer
Implementation Research: A PrimerImplementation Research: A Primer
Implementation Research: A Primeramusten
 
Can systematic reviews help identify what works and why?
Can systematic reviews help identify what works and why?Can systematic reviews help identify what works and why?
Can systematic reviews help identify what works and why?Carina van Rooyen
 
Matching the Research Design to the Study Question
Matching the Research Design to the Study QuestionMatching the Research Design to the Study Question
Matching the Research Design to the Study QuestionAcademyHealth
 
An Empirical Study of the Co-Creation of Values of Healthcare Consumers – The...
An Empirical Study of the Co-Creation of Values of Healthcare Consumers – The...An Empirical Study of the Co-Creation of Values of Healthcare Consumers – The...
An Empirical Study of the Co-Creation of Values of Healthcare Consumers – The...Healthcare and Medical Sciences
 
Study design is a specific plan or protocol for the study, which allows to tr...
Study design is a specific plan or protocol for the study, which allows to tr...Study design is a specific plan or protocol for the study, which allows to tr...
Study design is a specific plan or protocol for the study, which allows to tr...MOHAhmed18
 
Appraisal Paper Educ Prim Care 2010 pp445-54
Appraisal Paper Educ Prim Care 2010 pp445-54Appraisal Paper Educ Prim Care 2010 pp445-54
Appraisal Paper Educ Prim Care 2010 pp445-54Mark Rickenbach
 

Similar to 2015_Discrete choice experiments versus rating scale exercises to evaluate the importance of attributes (20)

Reliability And Validity
Reliability And ValidityReliability And Validity
Maastricht, The Netherlands
*Author for correspondence: Tel.: +31 433 882 294; Fax: +31 433 884 162; b.wijnen@maastrichtuniversity.nl

Aim: To examine the difference between discrete-choice experiments (DCE) and rating scale exercises (RSE) in determining the most important attributes, using a case study. Methods: Undergraduate health sciences students were asked to complete a DCE and an RSE. Six potentially important attributes were identified in focus groups. Fourteen unlabelled choice tasks were constructed using a statistically efficient design. Mixed multinomial logistic regression was used to analyze the DCE data. Results: In total, 254 undergraduate students filled out the questionnaire. In the DCE, only four attributes were statistically significant, whereas in the RSE, all attributes except one were rated four or higher. Conclusion: Attribute importance differs between DCE and RSE. The DCE had a differentiating effect on the relative importance of the attributes; however, determining relative importance using a DCE should be done with caution, as the lack of a statistically significant difference between levels does not necessarily imply that the attribute is unimportant.

KEYWORDS: discrete choice experiment . Likert scale . preferences . rating scale . relative importance of attributes

Eliciting preferences has become increasingly important in healthcare [1]. Understanding preferences can be very informative for both policy and clinical decisions. Given the excessive and increasing demand for healthcare and the limited resources available, decision makers have to make choices on the allocation of scarce resources among competing alternatives. Over the years, weighing public opinion in these decisions has become more prominent [2]. Understanding the preferences of patients and incorporating them in clinical decisions could also lead to improved adherence and outcomes [2].

The most common way to measure preferences in healthcare is through stated preference (SP) methods. SP methods elicit responses to predefined alternatives in which people hypothetically state their preferences, as opposed to revealed preference methods, in which preferences are observed in real life. Three broad categories of SP methods have been distinguished: ranking, rating scale exercises (RSE), and choice-based approaches [3]. In ranking exercises, respondents are given a pre-specified set of alternatives, which they are asked to rank from 'most preferable' to 'least preferable'. In RSE, respondents are typically asked to rate each alternative on a Likert scale with a pre-specified range (e.g., 1–7).
The third category is choice-based approaches, including discrete choice experiments (DCE), which present a series of pairwise choice tasks describing hypothetical scenarios, in each of which respondents are asked to select their preferred scenario. Rankings are popular due to their relative ease of administration and analysis [1], but fail to provide a measure of the strength of preferences [4]. Hence, our study was designed to compare RSE with DCE.

Both RSE and DCE can be used to evaluate the importance of different aspects of health and healthcare. The DCE is based on well-tested theories of choice behavior, namely random utility theory (RUT) [5] and economic theory [3], whereas the RSE has little theoretical basis. RUT proposes that there is a latent (unobservable) construct called 'utility', which is present in individuals for each choice alternative. According to RUT, this utility can be divided into two components: an explainable component and an unexplainable, random component. The explainable component consists of the attributes of the actual choice task and covariates of the individual, whereas the random component consists of all unidentified factors ('error terms'). Hence, in DCEs, a utility function is used to determine individuals' preferences.
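Although the article leaves it implicit, the standard random-utility formalization behind the preceding paragraph can be written down compactly. The logit form below assumes independent, identically distributed type-I extreme-value error terms, which is the usual assumption in DCE analysis:

```latex
% Random utility: total utility of alternative j for individual i is an
% explainable part V_{ij} plus a random error term \varepsilon_{ij}.
U_{ij} = V_{ij} + \varepsilon_{ij}
% Under i.i.d. type-I extreme-value errors, the probability that
% individual i chooses alternative j from choice set C_i is the logit:
P_{ij} = \frac{\exp(V_{ij})}{\sum_{k \in C_i} \exp(V_{ik})}
```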
In addition, economic theory proposes that utilities are stable over time and that each individual will eventually try to maximize his or her utility. In DCEs, individuals are expected to make trade-offs within a resource constraint, and these decision-making processes are assumed to conform to the assumptions of these theories and of rational choice [6]. Although, in RSE, respondents are not typically asked to make choices within a resource constraint, individuals are expected to value each step on the scale equally (e.g., an increase from 1 to 2 points is equal to an increase from 3 to 4 points). Moreover, in contrast to DCEs, RSE more typically use a holistic approach in which respondents evaluate attributes as a discrete whole [6]. Hence, a well-known problem with the RSE is that it does not explicitly capture the trade-off between attributes [7].

The DCE is considered to better reflect actual decision making, allows for the estimation of overall preferences for any combination of attributes, and has been shown to be one of the most sensitive methods for eliciting preferences [3,6]. An increased use of DCEs has been observed in the past few years [8]. However, as a DCE is a complex method requiring more cognitive effort, it remains convenient for decision makers and investigators to use other methods, such as an RSE. An RSE is less cognitively demanding and easier to conduct, but has demonstrated limitations in accounting for the strength of preferences [3]. Few studies have compared these techniques for preference elicitation, with mixed results. Although Bridges et al. [9] emphasized a high level of concordance between DCE and RSE, other studies have suggested that different elicitation methods could lead to different results [6,10,11]. For example, Pignone et al. [11] reported that a DCE produced somewhat different patterns of attribute importance than a rating task.

In this study, we assessed the preferences of undergraduate students in choosing a study orientation/speciality. Given the prominence of assessing the relative importance of attributes within and outside the healthcare sector, and the need for more comparisons of methods, our primary aim was to examine the difference between a DCE and an RSE in determining the (relative) importance of attributes regarding the choice of orientation in the second or third year of the undergraduate program. A secondary aim was to assess possible ordering effects of the questionnaire on both the DCE and the RSE.

Methods
In this study, a comparison was made between preference elicitation using an RSE and a DCE in a study examining the preferences of undergraduate health sciences students regarding the selection of a bachelor orientation.

Case description
The Health Sciences program at Maastricht University is a 3-year bachelor program covering a wide range of disciplines within healthcare. The introductory year is broad and multidisciplinary, covering the entire field of health and healthcare, including behavioral, environmental, social and biological aspects.
In the second year, students have to specialize by choosing one of four tracks: Policy, Management and Evaluation of Health Care; Biology and Health; Mental Health Sciences; and Prevention and Health. Maastricht University is renowned for its use of Problem-Based Learning, in which students work in small tutorial groups (~10–12 students), looking for practical solutions to real-world problems guided by a tutor (i.e., a member of the academic staff) [12,13].

Attributes & levels
Several sources were combined to identify the relevant attributes used for both the RSE and the DCE. First, mandatory online reflection forms on the choice of bachelor orientation from a random selection of 30 first-year students were studied to identify important aspects of the decision process. Second, two focus groups were organized with second-year students (n = 8 and n = 11), guided by semi-structured questions based on the student reflections. Third, a meeting with educational experts (n = 3) was organized to review all identified attributes, check whether any important attributes were missing, and reach consensus on the final selection of attributes.

To reduce the cognitive burden resulting from too many attributes, six final attributes on the hypothetical choice of a bachelor orientation were determined: 'possible acquainted masters', specified as the number of master programs that are strongly related to the bachelor orientation, which results in enhanced eligibility for those master programs; 'job opportunity', specified as the percentage of graduated students who found a job in the field of the bachelor orientation within 12 months of graduation; 'scope of orientation', specified as the bachelor orientation being more or less multidisciplinary; 'quality of education', specified as the overall quality score given to education within the bachelor orientation by former students; 'hours of self-study', specified as the self-reported hours of self-study within the bachelor orientation by former students; and 'correspondence with personal interests', specified as the extent to which the bachelor orientation corresponds to the personal interests of a participant. Other attributes were considered less relevant or were left out because of overlap between attributes (i.e., 'Information provision' and 'Relation between theory and practice'). For each attribute, several levels were defined using expert opinion (for 'personal interests' and 'possible acquainted masters'), evaluations by graduated students (for 'hours of self-study', 'scope of orientation', and 'quality of education'), and follow-up data on graduated students (for 'job opportunity'). The final list of attributes and their corresponding levels was constructed in agreement with experts in the field of education.
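To make the attribute–level structure concrete, the sketch below (the dictionary keys and encodings are illustrative abbreviations of the levels described above; the exact wording is given in Table 1) enumerates the grid and counts its full factorial, anticipating the design-space figure quoted in the next section:

```python
# Sketch: the attribute-level grid used in this study (labels abbreviated;
# the exact level wording is given in Table 1).
from itertools import product

levels = {
    "possible_acquainted_masters": ["limited", "some", "many"],
    "job_opportunity_pct": [40, 60, 80],
    "scope_of_orientation": ["multidisciplinary", "specific"],
    "quality_of_education": [6, 7, 8],
    "hours_of_self_study": [12, 15, 18],
    "personal_interests": ["low", "some", "high"],
}

# A full factorial enumerates every combination of levels:
full_factorial = list(product(*levels.values()))
print(len(full_factorial))  # 3**5 * 2 = 486 hypothetical scenarios
```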
Design of DCE & RSE
The questionnaire was constructed using the software Ngene (version 1.1.1 [14]). A fractional factorial (Bayesian efficient) design was used, as a full factorial design would result in 486 hypothetical scenarios (five attributes with three levels and one attribute with two levels: 3^5 × 2^1 = 486).

To generate the Bayesian efficient design, a pilot questionnaire was distributed among 10 first-year undergraduate students to obtain prior distributions of likely parameter values (i.e., the beta coefficients in the regression analysis). Furthermore, these students were asked to reflect on the questionnaire in terms of comprehension, completeness, and the description of attributes and levels. Based on their comments, the final questionnaire was adjusted. Using the prior distributions of likely parameters, a statistically efficient design minimizing the D-error was generated. Bayesian efficient experimental designs maximize the precision of the estimated parameters for a given number of choice sets [15].

In this study, 14 choice sets were generated. Within each choice task, students were asked to choose between two scenarios. No opt-out option was included, to force students to make a choice and hence a trade-off between attributes. To measure the (in)consistency of students' decisions, a 'dominant pair' comparison was added, in which three attributes of one scenario were assumed to be preferred (high [80%] vs low [40%] job opportunity, high [8] vs low [6] quality of education, and high vs low correspondence with personal interests) while the other levels were identical across the alternatives; in addition, one choice set was repeated at the end of the questionnaire (test–retest exercise; a scoring sketch follows the questionnaire outline below). Hence, the total questionnaire included 16 choice sets, which is in line with other DCEs and has been shown to be cognitively acceptable [8,16]. In the RSE, participants were asked to rate the importance of each attribute on a Likert scale ranging from 1 = 'Attribute is not important at all' to 7 = 'Attribute is very important'.

Outline of questionnaire
The questionnaire started with a short introduction to the nature of the study, followed by a thorough description of the attributes and levels. To clarify the DCE, an example of a completed choice set was provided. To account for possible ordering effects, two versions were constructed. In version 1, participants were given a short introduction to the items and levels, followed by the DCE, with questions regarding background information and the RSE at the end. In version 2, participants were given a short introduction to the items and levels, followed by questions regarding background information and the RSE, with the DCE at the end of the questionnaire.
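The two consistency checks described above (the dominant-pair task and the test–retest repeat) can be scored mechanically. A minimal sketch, assuming each respondent's answers are stored in a dict keyed by task number with the chosen option coded 'A'/'B'; the task numbers and coding here are hypothetical:

```python
# Sketch: scoring the two consistency checks for one respondent.
def passes_dominance(answers, dominant_task=8, dominant_option="A"):
    # The dominant-pair task has one scenario that is better on three
    # attributes and identical on the rest; choosing it counts as a pass.
    return answers.get(dominant_task) == dominant_option

def passes_test_retest(answers, original_task=3, repeated_task=16):
    # The repeated choice set at the end should receive the same answer
    # as its earlier occurrence.
    return answers.get(original_task) == answers.get(repeated_task)

answers = {3: "B", 8: "A", 16: "B"}
print(passes_dominance(answers), passes_test_retest(answers))  # True True
```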
Data collection & participants
The study was conducted at Maastricht University, the Netherlands, among all first-year health sciences students (n = 267). No sample size calculation was performed, as such calculations are particularly difficult for DCEs [17]. However, our sample is in line with the findings of Marshall et al. [18], who reported that the mean sample size of conjoint analysis studies in healthcare published between 2005 and 2008 was 259, with nearly 40% of sample sizes in the range of 100–300 respondents.

Data were collected in May 2014. Questionnaires were distributed during tutorial sessions, in which the tutor had been informed about the procedures described in the questionnaire. Both versions of the questionnaire were distributed equally among tutorial groups (version 1 in groups 1–12 and version 2 in groups 13–24). Students are randomly assigned to tutorial groups by the education office. Undergraduate students were asked to complete the questionnaire and return it to the tutor. The average time to complete the questionnaire was 10–15 min. An example of a DCE choice set and an RSE question is given in Figure 1.

[Figure 1. Example of a choice set and of a rating scale exercise question: two scenarios (Method A and Method B) described on all six attributes, with the prompt 'Which option do you prefer?', and a 7-point importance rating ('not important' to 'very important') for each characteristic.]

Data analyses
DCE data were analyzed using Nlogit version 5 (Econometric Software, Inc.). Data from undergraduate students who completed fewer than five choice sets or RSE questions were excluded. Remaining missing values were handled using list-wise deletion. In addition, students who failed the dominance test were excluded from the analyses. RSE data were analyzed using SPSS 21 (IBM, Inc.).
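These exclusion rules are straightforward to express in code. A minimal sketch, assuming a wide-format table with one row per respondent and hypothetical column names dce_1..dce_16 and rse_1..rse_6, with NaN marking skipped items:

```python
# Sketch: applying the exclusion rules described above.
import pandas as pd

def apply_exclusions(df: pd.DataFrame, failed_dominance: pd.Series) -> pd.DataFrame:
    dce_cols = [c for c in df.columns if c.startswith("dce_")]
    rse_cols = [c for c in df.columns if c.startswith("rse_")]
    # Exclude respondents who completed fewer than five choice sets or
    # fewer than five RSE questions, and those failing the dominance test.
    enough_dce = df[dce_cols].notna().sum(axis=1) >= 5
    enough_rse = df[rse_cols].notna().sum(axis=1) >= 5
    kept = df[enough_dce & enough_rse & ~failed_dominance]
    # Remaining missing values are handled by list-wise deletion:
    return kept.dropna()

# Tiny demo: respondent 0 is complete, 1 skipped all choice sets,
# 2 failed the dominance test.
df = pd.DataFrame({**{f"dce_{k}": ["A", None, "A"] for k in range(1, 17)},
                   **{f"rse_{k}": [6, 6, 6] for k in range(1, 7)}})
failed = pd.Series([False, False, True])
print(apply_exclusions(df, failed).shape[0])  # 1
```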
For the DCE, a panel mixed multinomial logit model was used to determine the effect of the attribute levels on students' preferences. The mixed multinomial logit model allows for possible heterogeneity across respondents and accounts for the panel nature of the data [15,19]. This model is based on the assumption that parameters are randomly distributed in the population, and heterogeneity is captured by estimating the standard deviation of the parameters [20]. The following model was estimated:

V_ij = (b1 + h1i)·SOME_PAM + (b2 + h2i)·MANY_PAM + (b3 + h3i)·JO + (b4 + h4i)·SO + (b5 + h5i)·QE + (b6 + h6i)·HSS + (b7 + h7i)·SOME_PI + (b8 + h8i)·HIGH_PI + ε_ij,

where V_ij represents the observable relative utility of student i for scenario j; b1–b8 are coefficients of the attributes indicating the relative weight placed on each attribute; h1i–h8i are the individual-specific random deviations from these mean coefficients, whose standard deviations are estimated; and ε_ij captures the remaining individual-specific unexplained variance around the mean.

Dummy coding was used for all categorical variables; the base-case levels can be found in Table 1. As a sensitivity analysis, effects coding was used to examine the impact of the coding on the results. All parameters were included as random parameters, and each was assumed to be normally distributed. The estimation was conducted using 2000 Halton draws. Model fit was assessed using the log-likelihood and McFadden's pseudo-R². Interactions between attributes were tested, and a subgroup analysis between the two versions of the questionnaire was performed to check for ordering effects.

To determine the relative importance of the attributes valued in the DCE, relative importance weights were calculated using the method described by Malhotra and Birks [21]. In short, this method takes the range of the level coefficients within an attribute to represent the relative importance of that attribute; the resulting percentage is the percentage of the explained variation in the choice decision that is attributable to the respective attribute. In the RSE, the importance of the attributes was calculated using the mean values expressed on the Likert scale by the participants. Finally, a comparison was made between the rankings of attributes according to the DCE and according to the RSE, and we examined whether the results of both the DCE and the RSE differed between the two versions of the questionnaire. To compare the mean ratings of attributes within the RSE, paired-samples t-tests were used; to compare attributes between RSE version 1 and version 2, independent-samples t-tests were used.
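To illustrate the Malhotra and Birks calculation, the sketch below applies it to the mean coefficients reported later in Table 2. How the continuous attributes are expanded over their level ranges is our assumption, not the authors' documented procedure, so the printed percentages only approximate Table 3; the ranking they imply is the same:

```python
# Sketch: relative importance as the range of level coefficients within
# each attribute, normalized to percentages (Malhotra & Birks [21]).
coef_ranges = {
    # dummy-coded attributes: range over {0 (reference), level coefficients}
    "possible_acquainted_masters": max(0, 0.495, 0.512) - min(0, 0.495, 0.512),
    "personal_interests": max(0, 2.002, 4.747) - min(0, 2.002, 4.747),
    "scope_of_orientation": abs(0.094),
    # continuous attributes: per-unit coefficient times the span of levels
    "job_opportunity": 0.052 * (80 - 40),
    "quality_of_education": 0.596 * (8 - 6),
    "hours_of_self_study": abs(0.001) * (18 - 12),
}
total = sum(coef_ranges.values())
for name, rng in sorted(coef_ranges.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {100 * rng / total:.1f}%")
```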
Results
A total of 254 (95.1%) undergraduate students completed the questionnaire, of whom 82.6% were female. The mean age was 19.7 years, with the youngest student being 17 and the oldest 30 years old. Version 1 was completed by 122 students (48%) and version 2 by 132 students (52%). Regarding the consistency tests, participants chose the dominant scenario in the 'dominant pair' comparison 99.2% of the time, and the test–retest exercise was successfully repeated 79.9% of the time.

Table 1. Attributes and levels for bachelor orientation.
Attribute | Levels | Coefficient
Possible acquainted masters | Limited (1 or 2 acquainted master programs) (reference level) | –
 | Some (3 to 5 acquainted master programs) | b1
 | Many (more than 5 acquainted master programs) | b2
Job opportunity† | 40% / 60% / 80% of graduated students found a job within the respective field | b3
Scope of orientation | Multidisciplinary bachelor orientation but less deepening (reference level) | –
 | Specific bachelor orientation but more deepening | b4
Quality of education† | 6 / 7 / 8 out of 10 | b5
Hours of self-study† | 12 / 15 / 18 h per week | b6
Correspondence with personal interests | Low correspondence with personal interests (reference level) | –
 | Some correspondence with personal interests | b7
 | High correspondence with personal interests | b8
† Estimated as a continuous variable within the mixed multinomial logit model.

Results of the DCE
Four of the six attributes appeared to have a significant influence on the choice of a bachelor orientation (Table 2).
Although there was no significant difference between some and many acquainted master programs, a limited number of acquainted master programs negatively influenced the choice of a bachelor orientation. Furthermore, increases in job opportunity (in %), in quality of education and in correspondence with personal interests were all associated with a higher preference for the respective bachelor orientation. Finally, the scope of the orientation and the hours of self-study did not significantly influence decision making.

Looking at the relative importance of the attributes, 'correspondence with personal interests' had the largest impact on participants' preferences (51.5%; Table 3). 'Job opportunity' had the second largest impact (22.5%), followed by 'quality of education' (19.4%), 'possible acquainted masters' (5.5%), 'scope of orientation' (1.0%), and 'hours of self-study' (0.1%). The use of effects coding only marginally affected the relative importance weights and did not change the ranking of the attributes.

Results of the RSE
Based on the mean values elicited in the RSE, an importance ranking of the attributes was constructed (Table 3). 'Correspondence with personal interests' had the highest score (mean 6.5). The second highest valued attributes were 'scope of orientation' (5.1) and 'job opportunity' (5.0), followed by 'quality of education' (4.5), 'possible acquainted masters' (4.3), and 'hours of self-study' (3.3).

Comparison of DCE & RSE
A comparison of the importance rankings of the attributes based on the DCE and the RSE shows some dissimilarities (Table 3). In the DCE, four attributes were statistically important to participants when making a decision, whereas in the RSE, all attributes except 'hours of self-study' were rated 4 or higher. Although respondents expressed a clear preference for 'correspondence with personal interests' in both methods, the importance of the attribute 'scope of orientation' varied strongly between the RSE (regarded as second most important) and the DCE (regarded as second least important). According to the DCE, the difference in levels of 'scope of orientation' did not significantly affect students' choices, whereas in the RSE it was regarded as second most important. Although 'scope of orientation' was not statistically significant, statistically significant heterogeneity was observed for this attribute, meaning that, while on average no difference was observed between levels, some students preferred a multidisciplinary bachelor orientation at the cost of in-depth study materials and other students preferred a less multidisciplinary bachelor orientation with more in-depth study materials.
Table 2. Results from the mixed multinomial logit model illustrating the influence of attributes on utility.
Attribute | Coefficient (95% CI) | Standard deviation (95% CI)
Possible acquainted masters†
 SOME_PAM | 0.495 (0.290, 0.701)‡ | 0.155‡ (−0.364, 0.675)
 MANY_PAM | 0.512 (0.325, 0.699) | 0.444‡ (0.148, 0.741)
Job opportunity (per %) | 0.052 (0.044, 0.060)‡ | 0.035‡ (0.028, 0.042)
Scope of orientation | 0.094 (−0.116, 0.199) | 0.656‡ (0.453, 0.859)
Quality of education (per point) | 0.596 (0.491, 0.701)‡ | 0.293‡ (0.138, 0.447)
Hours of self-study (per hour) | 0.001 (−0.023, 0.023) | 0.024 (−0.056, 0.104)
Correspondence with personal interests
 SOME_PI | 2.002 (1.723, 2.280)‡ | 0.502§ (0.210, 0.795)
 HIGH_PI | 4.747 (4.190, 5.305)‡ | 1.317‡ (0.993, 1.642)
Log-likelihood: −1383.10; McFadden's pseudo-R²: 0.43; observations: 3528; individuals: 252.
The table reports b-coefficients from the mixed multinomial logit model; the regression coefficients represent the mean part-worth utility of the attribute (level) in the respondent sample.
† Reference level is 'Limited (1 or 2 acquainted master programs)'. ‡ Significant at the 1% level. § Significant at the 5% level.

Table 3. (Relative) importance ranking of attributes based on the discrete-choice experiment and the rating scale exercise.
Ranking based on DCE (% impact on choice) | Ranking based on RSE (mean value on a 7-point Likert scale)†
Correspondence with personal interests (51.5%) | Correspondence with personal interests (6.5)
Job opportunity (22.5%) | Scope of orientation (5.1)‡
Quality of education (19.4%) | Job opportunity (5.0)‡
Possible acquainted masters (5.5%) | Quality of education (4.5)
Scope of orientation (1.0%) | Possible acquainted masters (4.3)
Hours of self-study (0.1%) | Hours of self-study (3.3)
† Significance of differences between attributes based on paired-samples t-tests within the aggregated data of versions 1 and 2 of the RSE. ‡ No significant difference between these attributes at the 5% level (paired-samples t-test).
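For intuition about what the coefficients in Table 2 mean, one can score hypothetical orientation profiles on the logit scale using the mean estimates. The two profiles below are invented, and using the means ignores the estimated preference heterogeneity, so this describes the average respondent only:

```python
# Sketch: scoring two hypothetical orientation profiles with the mean
# coefficients from Table 2 and converting the utility difference into
# a logit choice probability (see the formula in the Methods section).
import math

def utility(job_pct, quality, hours, pam="limited", specific=False, interests="low"):
    v = 0.052 * job_pct + 0.596 * quality + 0.001 * hours
    v += {"limited": 0.0, "some": 0.495, "many": 0.512}[pam]
    v += 0.094 if specific else 0.0
    v += {"low": 0.0, "some": 2.002, "high": 4.747}[interests]
    return v

v_a = utility(40, 8, 12, pam="many", interests="high")
v_b = utility(80, 6, 15, pam="limited", interests="low")
p_a = math.exp(v_a) / (math.exp(v_a) + math.exp(v_b))
print(f"P(choose A) = {p_a:.2f}")  # interests dominate: close to 1
```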
The rankings of the other attributes were similar between the two methods.

Results of the DCE were not significantly affected by the order of the RSE and DCE (i.e., version 1 vs version 2). However, results of the RSE were significantly different between the two versions (Table 4). Valuations of 'scope of orientation' (4.7), 'possible acquainted master programs' (4.0), and 'hours of self-study' (3.1) differed significantly between the versions, leading to a different ranking of the attributes. These attributes were also the three least important attributes in the DCE.

Discussion
This study examined the difference between a DCE and an RSE in determining the most important attributes, using the choice of study orientation by undergraduate students as a case study. Our study suggests that the attribute importance resulting from a DCE and an RSE can differ. The DCE had a differentiating effect on the relative importance of the attributes, whereas in the RSE the attributes were rated more equally and, except for 'hours of self-study', were all considered important. However, as the RSE did not involve a trade-off, one should be careful when interpreting an importance ranking based on the RSE, as participants were not forced to choose between attributes. Our results most likely highlight the lack of discriminative power of an RSE. In addition, an ordering effect was observed between questionnaire versions: RSE scores were significantly different depending on whether the RSE was administered before or after the DCE, whereas the placement of the RSE did not significantly affect the results of the DCE.

Our findings are consistent with previous literature. Pignone et al. [11] also showed different patterns of attribute importance between choice-based conjoint analyses and rating tasks in the choice of colorectal cancer screening. Phillips et al. [6] found differences in how respondents valued certain attributes and showed variation in how different attribute levels were valued between the two methods in preferences for HIV tests. However, Bridges et al. [9] revealed high levels of concordance between DCE and RSE in preference elicitation for hearing care.

Given the increasing use of stated preference methods, our results provide insights for future research on preference elicitation. First, it can be concluded that RSE and DCE can result in different relative importance rankings of attributes. Second, our findings support the statement that RSE outcomes should be interpreted with caution, as respondents in an RSE tend to rate all attributes as more equally (and relatively) important. The higher discriminative power of DCEs in comparison with RSE must be taken into account when one intends to use an RSE to elicit preferences. Third, the relative importance of attributes derived from a DCE should be interpreted carefully: although one attribute ('scope of orientation') was not significant in the DCE, the RSE revealed high importance for this attribute. Fourth, the DCE was shown to be more robust to ordering effects, as the order in which the RSE and DCE were presented (RSE before or after the DCE) did not significantly affect the results of the DCE. Finally, the mixed multinomial logit model revealed heterogeneity of preferences within an attribute ('scope of orientation'), indicating large differences in preference between respondents despite no difference being observed at the group level.
The decision of which method to use in a particular circumstance is not straightforward. The assumptions underlying both RSE and DCE are not always as robust as they seem. In an RSE, individuals are supposed to value the space between each pair of response options equally; however, Johnson et al. [10] found that individuals' valuations, as measured with a conjoint analysis, did not correspond to an equally spaced, linear Likert scale. In addition, the RSE is prone to end-of-scale bias. Regarding the DCE, our study showed that 20.1% of the participants did not successfully complete the test–retest exercise, which violates some of the assumptions of economic theory (e.g., the stability of preferences over time). In addition, Phillips et al. [6] used focus groups, which demonstrated that participants often focused only on key attributes or used a 'threshold' approach in making choices (i.e., price is only relevant when it is above a certain threshold), instead of making a trade-off between all attributes. Furthermore, it became apparent that individuals were sometimes frustrated by having to make difficult trade-offs; in short, the more complex the questions were, the more they used simplifying rules.

Table 4. Importance ranking of attributes based on the rating scale exercise, for version 1 (RSE after DCE) and version 2 (RSE before DCE).
Total RSE (n = 254) | Version 1 (n = 122) | Version 2 (n = 132)
Correspondence with personal interests (6.5) | Correspondence with personal interests (6.5) | Correspondence with personal interests (6.5)
Scope of orientation (5.1) | Job opportunity (5.0) | Scope of orientation (5.4)†
Job opportunity (5.0) | Scope of orientation (4.7)† | Job opportunity (5.0)
Quality of education (4.5) | Quality of education (4.4) | Quality of education (4.6)
Possible acquainted masters (4.3) | Possible acquainted masters (4.0)† | Possible acquainted masters (4.5)†
Hours of self-study (3.3) | Hours of self-study (3.1)† | Hours of self-study (3.6)†
† Significant difference between versions at the 5% level (independent-samples t-test).
DCE: discrete-choice experiment; RSE: rating scale exercise.
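The version comparison flagged in Table 4 is a standard independent-samples t-test. A minimal sketch with simulated ratings (real scores would replace the random draws):

```python
# Sketch: testing whether an attribute's RSE rating differs between
# questionnaire versions, as in Table 4. Ratings here are simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scope_v1 = rng.integers(1, 8, size=122)  # version 1 (RSE after DCE)
scope_v2 = rng.integers(1, 8, size=132)  # version 2 (RSE before DCE)
t, p = stats.ttest_ind(scope_v1, scope_v2)
print(f"t = {t:.2f}, p = {p:.3f}")
```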
Building on Phillips et al. [6], our findings support the idea that RSEs are more likely to capture respondents' attitudes toward individual attributes (i.e., individuals might have a positive attitude toward all presented attributes, yet overlook or implicitly ignore some of them when making a decision), whereas DCEs focus on preferences between attributes when a decision is made. In addition, the DCE more realistically resembles actual decision making, as it involves trade-offs. An RSE can still be used to elicit preferences. For example, Koedoot et al. [22] examined preferences for palliative chemotherapy, asking patients to rate their preference for having chemotherapy on a seven-point Likert scale; these ratings corresponded closely to patients' actual treatment choices. In general, however, if one is interested in prioritizing multiple aspects of health or healthcare (i.e., multiple attributes or characteristics), the DCE is the preferred method. The DCE is, conversely, not suited to examining individuals' attitudes toward multiple aspects of health or healthcare.

This study has some limitations. First, ranking and other types of choice-based approaches were not taken into account. Newer methods, such as best–worst scaling, have proven useful for gaining insight into relative importance [23]; more research is needed to compare all these methods. A head-to-head comparison should be made with caution, as DCE and RSE differ in presentation, framing, and methodology; framing effects and fundamental differences between the methods (i.e., the RSE does not ask individuals to make a trade-off) are likely to induce distinct results. Second, it is reasonable to assume that the two methods serve different objectives (attitude vs preference elicitation). Third, DCEs are regarded as more cognitively burdensome than other types of SP elicitation techniques [3]. As our sample consists of relatively young (mean age 19.7 years) and highly educated participants, we do not expect to have encountered problems with the difficulty of the DCE; however, it is important to keep in mind that the results (and reliability) of a DCE may be influenced by the age and socioeconomic status of the participants and by the complexity of the DCE, owing to cognitive burden. Fourth, the method used to calculate the relative importance weights for the attributes in the DCE is not directly related to the significance of the coefficients; importance weights are derived regardless of statistical significance. However, when an attribute is not significant, it will in most cases have a small coefficient and hence a low relative importance, as was seen in this study: the attributes without a significant impact on the decision had rather low relative importance weights. Finally, this study was conducted within the field of education. When transferring our results to other topics, such as healthcare, it is important to keep in mind that those topics can have an additional dimension not captured in this study, namely that individually expressed preferences will be used for public decisions, potentially affecting not only the individual respondent but the entire society.
More research should be done within the healthcare sector to verify our findings; however, we expect our results to be robust to such extra dimensions.

In conclusion, our study suggests that attribute importance can differ between a DCE and an RSE. The DCE had a more differentiating effect on the relative importance of the attributes, but interpretation of the relative importance of attributes from a DCE should be done with caution: the absence of a significant difference between levels at the group level does not necessarily mean that the attribute is unimportant, and some respondents may still have preferences for particular levels, as indicated by the amount of heterogeneity around the parameters.

Financial & competing interests disclosure
The authors have no relevant affiliations or financial involvement with any organization or entity with a financial interest in or financial conflict with the subject matter or materials discussed in the manuscript. This includes employment, consultancies, honoraria, stock ownership or options, expert testimony, grants or patents received or pending, or royalties. No writing assistance was utilized in the production of this manuscript.

Key issues
. In decision making, one should be aware that the (relative) importance of attributes determined with rating scale exercises differs from that determined with discrete-choice experiments.
. The relative importance of attributes valued using discrete-choice experiments should be handled carefully, as the absence of a statistically significant difference between levels does not necessarily mean that the attribute is unimportant.
. Building on Phillips et al. (2002), it is reasonable to assume that rating scale exercises are more likely to capture respondents' attitudes toward attributes, whereas discrete-choice experiments focus on preferences between attributes when making a decision.
References
1. Bridges J. Stated preference methods in health care evaluation: an emerging methodological paradigm in health economics. Appl Health Econ Health Policy 2003;2(4):213-24.
2. Bridges JF, Hauber AB, Marshall D, et al. Conjoint analysis applications in health – a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health 2011;14(4):403-13.
3. Ryan M, Scott DA, Reeves C, et al. Eliciting public preferences for healthcare: a systematic review of techniques. Health Technol Assess 2001;5(5):1-186.
4. Shackley P, Ryan M. Involving consumers in health care decision making. Health Care Anal 1995;3(3):196-204.
5. Louviere JJ, Flynn TN, Carson RT. Discrete choice experiments are not conjoint analysis. J Choice Model 2010;3(3):57-72.
6. Phillips KA, Johnson FR, Maddala T. Measuring what people value: a comparison of 'attitude' and 'preference' surveys. Health Serv Res 2002;37(6):1659-79.
7. Srinivasan V, Netzer O. Adaptive self-explication of multi-attribute preferences. Research Paper 1979. Stanford University, Graduate School of Business; 2007.
8. Clark M, Determann D, Petrou S, et al. Discrete choice experiments in health economics: a review of the literature. Pharmacoeconomics 2014;32(9):883-902.
9. Bridges JF, Lataille AT, Buttorff C, et al. Consumer preferences for hearing aid attributes: a comparison of rating and conjoint analysis methods. Trends Amplif 2012;16(1):40-8.
10. Johnson FR, Hauber AB, Osoba D, et al. Are chemotherapy patients' HRQoL importance weights consistent with linear scoring rules? A stated-choice approach. Qual Life Res 2006;15(2):285-98.
11. Pignone MP, Brenner AT, Hawley S, et al. Conjoint analysis versus rating and ranking for values elicitation and clarification in colorectal cancer screening. J Gen Intern Med 2012;27(1):45-50.
12. Moust J, Bouhuijs P, Schmidt H. Introduction to problem-based learning. In: Collaborative learning in the tutorial group. Groningen, The Netherlands: Taylor & Francis; 2007.
13. Schmidt HG. Foundations of problem-based learning: some explanatory notes. Med Educ 1993;27(5):422-32.
14. ChoiceMetrics. Ngene. Available from: www.choice-metrics.com/
15. Hensher DA, Rose JM, Greene WH. Applied choice analysis: a primer. Cambridge: Cambridge University Press; 2005.
16. Bech M, Kjaer T, Lauridsen J. Does the number of choice sets matter? Results from a web survey applying a discrete choice experiment. Health Econ 2011;20(3):273-86.
17. Ryan M, Gerard K. Using discrete choice experiments to value health care programmes: current practice and future research reflections. Appl Health Econ Health Policy 2003;2(1):55-64.
18. Marshall D, Bridges JP, Hauber B, et al. Conjoint analysis applications in health – how are studies being designed and reported? Patient 2010;3(4):249-56.
19. de Bekker-Grob EW, Hol L, Donkers B, et al. Labeled versus unlabeled discrete choice experiments in health economics: an application to colorectal cancer screening. Value Health 2010;13(2):315-23.
20. Hiligsmann M, Dellaert BG, Dirksen CD, et al. Patients' preferences for osteoporosis drug treatment: a discrete-choice experiment. Arthritis Res Ther 2014;16:R36.
21. Malhotra N, Birks D. Marketing research: an applied approach. 3rd European ed. In: Multidimensional scaling and conjoint analysis. Edinburgh: Pearson Education; 2007.
22. Koedoot CG, de Haan RJ, Stiggelbout AM, et al. Palliative chemotherapy or best supportive care? A prospective study explaining patients' treatment preference and choice. Br J Cancer 2003;89(12):2219-26.
23. Flynn TN, Louviere JJ, Peters TJ, Coast J. Best–worst scaling: what it can do for health care research and how to do it. J Health Econ 2007;26(1):171-89.