Trading up from tradeoffs: heuristic questionnaire design
Closed models have drawbacks
“We got him!”
Why should static attribute models still drive research?
• Developed before automated data collection
• Designs based on team illusions, qualitative research, “limited” room
• Require introspection and abstraction outside normal life, disclaiming the “X factor”
• Create falsely dichotomous assumptions that inadequately inform decisions
• Assume an overall “trade-off” between quality and price, when stakeholders may not
When is a decision difficult to model?
• Specifiers, purchasers and users may differ
• Manufacturers cannot easily change attributes and levels, even with high need
• High stakes: decisions can disable or kill
• Benefits and risks cannot be reliably generalized or projected
• Commitment time permits re-evaluation, but the decision burden for self (and often others) is long-term: cars, colleges, real estate
When is a decision a tradeoff?
• Attributes are intrinsic to the product, and can only exist at one level at one time
• Reasonable people agree on a product’s actual attribute levels, and are aware of those salient to them
• Stakes and commitment times are significant; only infrequent re-evaluation is possible
• Product data is empirical, not proxy
• Many decisions will not meet these criteria
“Messaging for market segments” isn’t real life
• Product profiles often present an artificial “full information” context
– What stakeholders don’t care about, they are less likely to actually know
• Specifiers seldom have a complete competitive set to consider
• People make decisions, not segments, strata or audience groups
– Context meets content
– The manufacturer’s label says one thing; we do another
Human brains make bite-sized decisions
• We use “heuristics” (decision shortcuts) because time is short and our brains are too small to consider everything at once
• Heuristics can be simple (“Never pay extra for national brand peas”) or as complex as choosing a life partner; a minimal threshold rule is sketched after this list
• We always break our own rules
– Heuristics are subject to mediating factors (e.g. anchoring and adjustment, priming) as well as situational constraints
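A minimal Python sketch of such a threshold rule; the attributes, prices and ratings below are invented for illustration, not study data:

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    price: float
    rating: float  # e.g. average review stars

def first_acceptable(options, max_price, min_rating):
    """Satisficing heuristic: take the first option clearing both thresholds."""
    for opt in options:
        if opt.price <= max_price and opt.rating >= min_rating:
            return opt
    return None  # no option meets the benchmarks, so no purchase

peas = [Option("national brand", 2.49, 4.5), Option("store brand", 1.89, 4.2)]
print(first_acceptable(peas, max_price=2.00, min_rating=4.0))  # store brand

The point is the short-circuit: unlike a utility sum, the rule never weighs one attribute against another.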
Conjoint methods are product-centric
• Experimental designs sometimes select profiles based on initial preferences, but attributes/levels are still pre-fabricated
• Conjoint designs also assume (see the sketch after this list):
– Attributes represent a single construct, apart from interactions used in the experimental design
– The distance between levels represents a finite, measurable value that exists irrespective of any respondent’s reference point
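For contrast, a minimal sketch of the additive part-worth arithmetic behind these assumptions; the attributes and utility values are invented for illustration:

# Part-worth utilities per attribute level: one construct per attribute,
# with level "distances" treated as fixed, interval-scaled numbers.
part_worths = {
    "price": {"$10": 0.8, "$15": 0.3, "$20": -0.5},
    "brand": {"A": 0.4, "B": -0.1},
}

def total_utility(profile):
    # Additive model: the same part-worths apply to every respondent and
    # scenario; no reference point ever enters the sum.
    return sum(part_worths[attr][level] for attr, level in profile.items())

print(total_utility({"price": "$10", "brand": "A"}))  # ~1.2, in any context

Everything the slide questions is visible here: the level gaps are fixed numbers, and a respondent’s own anchor never changes them.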
The choice task subject is only human
• She’s focusing on a few attributes, and making assumptions about those not shown
– To complete the task in a reasonable time within her context
• But analysts assume that she considered all and only the stimuli, in a zero-sum game
• Table stakes may assume false importance, because excluded factors are the real drivers, and/or because the levels offered were not salient or even believable
Heuristic designs identify and leverage decision drivers
• Respondents’ domains, measures and thresholds populate and limit the stimuli (not just profiles) presented; a toy share calculation follows this list
– No two respondents may see the same questions or profiles
– Base sizes for simulations will differ, since those whose benchmarks are unmet will not “contribute” to projected interest or share
– Range of threshold values is defined by respondents, not a priori
• Studies are cheap, fast, transparent and thus easily integrated
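A minimal sketch of that screened simulation, with invented thresholds and scenario values:

respondents = [
    {"max_price": 200, "min_warranty_yrs": 2, "would_choose": True},
    {"max_price": 150, "min_warranty_yrs": 1, "would_choose": True},
    {"max_price": 100, "min_warranty_yrs": 3, "would_choose": False},
]

def screened_share(scenario):
    """Share among respondents whose own benchmarks the scenario meets."""
    base = [r for r in respondents
            if scenario["price"] <= r["max_price"]
            and scenario["warranty_yrs"] >= r["min_warranty_yrs"]]
    if not base:
        return 0.0, 0  # nobody qualifies: the scenario projects no interest
    return sum(r["would_choose"] for r in base) / len(base), len(base)

# Two respondents are priced out, so the base shrinks from 3 to 1
print(screened_share({"price": 160, "warranty_yrs": 2}))  # (1.0, 1)

Because the base changes with every scenario, shares from different scenarios are only comparable alongside their base sizes.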
The “voice of the customer” is an N of 1
• Traditional respondent-level conjoint outputs:
– Profile 1 preference share = XX% and so on
– Imputed importance utilities and interpolated preference shares for the scenarios not presented
• Heuristic studies (one possible record layout is sketched after this list):
– Domain/measure (“attribute”) 1 = Z, with threshold of X; attribute 2 = C, with threshold of Y; and so on
– Preference share given respondent’s thresholds (+/- X%) = XX%
– Multiple scenarios can be presented, all salient to the respondent’s benchmarks
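One way such a per-respondent record could be represented; the field names are hypothetical, not the author’s schema, and the slide’s elided values (X, Y, Z, XX%) are left unfilled:

# Hypothetical record layout for a single respondent ("N of 1") in a
# heuristic study; values mirror the slide's placeholders, not real data.
respondent_record = {
    "id": "R-001",
    "benchmarks": {
        "attribute_1": {"value": "Z", "threshold": "X"},
        "attribute_2": {"value": "C", "threshold": "Y"},
    },
    "preference_share": {"estimate": None, "tolerance": None},  # XX% +/- X%
    "scenarios_shown": [],  # each built to be salient to the benchmarks above
}

Unlike a conjoint output, the record carries the respondent’s own thresholds rather than utilities interpolated across levels she never endorsed.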
Heuristic designs help optimize decision support
• Eliciting barriers to information-seeking, consideration, selection and purchase, including communication gaps
• Developing support to facilitate use
• Validating domains of unmet need and benchmark(s), often contrasting user, retailer, distributor, funder perspectives
– In one case, arguing for a subsequently successful launch, albeit with an inferior delivery system
Thank you for listening!
Laurie Gelb
lmgelb@profitbychange.com
profitbychange.com
