MRS Advanced Analytics
Innovation Symposium
10th November 2016
#MRSlive
68 Lombard Street, London EC3V 9LJ Tel: 0870 787 4490
Modelling insurance online
buying behaviour
Presentation to
MRS ADAN Network Symposium
November 2016
Kathy Ellison & John McConnell
What we are covering
The context
 The challenge
 Translating insurance product choice into Conjoint
The testing challenge
 The conjoint task – a standard CBC or something more?
 Why Conjoint & advantage over solely testing live
What we learnt
 Conclusions to the client
 When it doesn’t match what consumer says
 Being pragmatic
The challenge
The challenge we were given
Increase buying of the ‘premium’ home insurance product once customers have clicked
through to the site from a Price Comparison Website (PCW)
Key questions were:
1. What is the right price differential? (when premium is personal)
2. What is the optimal cover combination?
3. Should premium product be shown 1st or last?
4. Should we show 2 or 3 products on the screen?
Online survey
 500 home insurance switchers who have used a PCW or are planning to do so
 In 3 parts
1. An exploration of the product in context of a PCW
2. A Trade off exercise that replicates real life purchasing
decisions
3. A deep dive into the detailed features
Translating insurance product choice into Conjoint
 12 screens each with 2 or 3 ‘randomly produced’ products
 Combination of 7 cover features with 2 or 3 levels of each
 Premium of each ‘product’ was automatically calculated using ‘real’ prices
 Respondents chose between products on each screen in turn
 Choices fed into a model which calculated:
– how decisions were being made
– influence on choice of the different features of cover
1. Compulsory Excess: £50 (increases price by x%) | £100 (no additional cost)
2. Buildings Sums Insured*: £A,000 (reduces price by 5%) | £B,000 (no additional cost) | £C,000 (increases price by 5%) | £D,000 (increases price by 10%)
3. Contents Sums Insured: £E,000 (no additional cost) | £F,000 (increases price by 5%) | £G,000 (increases price by 10%)
4. Feature A: ✗ Optional (no additional cost) | ✓ Included (increases price by £15)
5. Feature B: ✗ Optional (no additional cost) | ✓ Included (increases price by £20)
6. Feature C: ✗ Optional (no additional cost) | ✓ Included (increases price by £20)
7. Feature D: ✗ Optional (no additional cost) | ✓ Included (increases price by £35)
 It was explained that a tick (✓) meant the element was included
 It was explained that a cross (✗) meant the element was optional
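The premium shown for each concept was built up dynamically from the selected levels: percentage adjustments for the excess and sums insured, plus fixed amounts for included features. A minimal sketch of that pricing rule follows; the base premium and adjustment values are illustrative placeholders, not the client's real figures.

```python
# Sketch of the dynamic pricing rule used to price each randomly generated concept.
# Base premium and adjustment values are illustrative, not the client's real figures.

def price_concept(base_premium, percent_adjustments, fixed_adders):
    """Apply percentage adjustments (excess, sums insured) then fixed feature adders."""
    price = base_premium
    for pct in percent_adjustments:          # e.g. -0.05, 0.0, +0.05, +0.10
        price *= 1 + pct
    for fee in fixed_adders:                 # e.g. +15, +20, +35 for included features
        price += fee
    return round(price, 2)

# Example: £250 base quote, higher buildings sum insured (+5%), Features A and D included.
print(price_concept(250.00, percent_adjustments=[0.05], fixed_adders=[15, 35]))  # 312.5
```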
The testing challenge:
A standard CBC
1. Dynamic Pricing
2. Optimising the product
3. Testing Order
4. Testing 2 versus 3
choices
The testing challenge:
• We want to understand share of
preference
• 7 attributes
• 4 levels max
• 8 random tasks, 3 concepts per task
• 2 fixed tasks
• The current “Standard” product
• A current “Premium” product
• We export the design and our ops
agency programs it into the main
survey
A standard CBC
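The design itself came out of Sawtooth and was exported for the ops agency to program into the main survey. Purely as an illustration of the shape of that design (8 random tasks of 3 concepts across 7 attributes), a randomised version could be generated as below; the level counts are assumptions for the sketch.

```python
import random

# Illustrative randomised CBC design with the shape described above:
# 8 random tasks, 3 concepts per task, 7 attributes. The real design was built in
# Sawtooth; the level counts below are placeholders mirroring the attribute list.
N_TASKS, N_CONCEPTS = 8, 3
LEVELS_PER_ATTRIBUTE = [2, 4, 3, 2, 2, 2, 2]

def random_design(seed=None):
    rng = random.Random(seed)
    design = []
    for _ in range(N_TASKS):
        task, seen = [], set()
        while len(task) < N_CONCEPTS:
            concept = tuple(rng.randrange(k) for k in LEVELS_PER_ATTRIBUTE)
            if concept not in seen:          # no duplicate concepts within a task
                seen.add(concept)
                task.append(concept)
        design.append(task)
    return design

print(random_design(seed=1)[0])              # first task: 3 concepts of 7 level indices
```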
The testing challenge:
• Each respondent has a different
anchor point
• We need to calculate a total price (not
a tested attribute) from the pricing
levels of the other attributes
1. Dynamic Pricing
• To answer the question “what is the
product with the highest increase in
preference relative to the standard
product?”
• This was an additional calculation on
top of the interactive simulator
• Essentially a programmatic way of
testing all choice combinations
2. Optimising
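The optimising step can be sketched as an exhaustive search: score every level combination through a share-of-preference simulator and keep the combination with the biggest lift over the standard product. The logit share rule and part-worth utilities below are illustrative stand-ins for the estimated model, not the project's actual simulator.

```python
from itertools import product
import math

# Sketch of programmatically testing all choice combinations: find the product with the
# highest gain in share of preference over the standard product. Utilities and the
# share-of-preference rule are illustrative, not the estimated conjoint model.

def share_of_preference(test_utility, competitor_utilities):
    """Logit share for the test product against a fixed set of competitors."""
    exps = [math.exp(u) for u in [test_utility] + competitor_utilities]
    return exps[0] / sum(exps)

def best_upgrade(level_utilities, standard_levels, competitor_utilities):
    """Enumerate every level combination; return the one with the largest share gain."""
    standard_u = sum(level_utilities[a][l] for a, l in enumerate(standard_levels))
    baseline = share_of_preference(standard_u, competitor_utilities)
    best = None
    for combo in product(*(range(len(levels)) for levels in level_utilities)):
        u = sum(level_utilities[a][l] for a, l in enumerate(combo))
        gain = share_of_preference(u, competitor_utilities) - baseline
        if best is None or gain > best[1]:
            best = (combo, gain)
    return best

# Toy example: 3 attributes with 2 levels each, one competing product.
utilities = [[0.0, 0.4], [0.0, -0.2], [0.0, 0.3]]
print(best_upgrade(utilities, standard_levels=(0, 0, 0), competitor_utilities=[0.5]))
```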
The testing challenge:
• We used the fixed tasks
• Comparing “standard” and “upgrade”
products
• Presented in different orders
3. Testing Order
• Sawtooth doesn’t let us test variable
numbers of concepts per task
• So we created a second, separate project
with 2 concepts per task and randomly
assigned its tasks back into the main
project’s rotations
4. Testing 2 versus 3 choices

[Chart: “Testing Order” – results by position (Pos 1, Pos 2, Pos 3) for the Standard, Control and Premium orderings; data labels shown include 27%, 23%, 23% and 16%]
Why Conjoint & advantage over solely testing live
Here we are looking at product sales specifically through the
client’s website
Platforms like “Optimizely” offer A/B Testing or Multivariate
Testing (“MVT”) on live web sites (or other digital devices).
Based on Design of Experiments. Typically applying either
GLM, MANOVA or Taguchi methods.
So in theory we could have tested in the live environment.
Reasons not to use this approach include:
• We would impact revenue
• We might confuse customers
• Overall it is likely to be more expensive than a study
What we learnt: Conclusions
Key questions were:
1. What is the right price differential? (when premium is personal)
2. What is the optimal cover combination?
3. Should premium product be shown 1st or last?
4. Should we show 2 or 3 products on the screen?
What we learnt: Conclusions
Answers were:
1. What is the right price differential? (when premium is personal)
There is no ‘optimum’ price differential that makes people think ‘I might as well upgrade’. The
decision is based on a trade-off between the incremental price and the cover included.
2. What is the optimal cover combination?
A ‘premium’ product must differentiate from the ‘basic’ one in a popular way – the mix of
covers makes a big difference (20% more preference) – and do not offer too high a Sums Insured (SI) limit.
3. Should premium product be shown 1st or last?
If the premium product is shown first, people are more likely to choose it
4. Should we show 2 or 3 products on the screen?
Offering a choice of 3 products rather than 2 does not encourage more to upgrade,
provided that the 2nd, more comprehensive, option has the most popular cover
elements
What we learnt: Matching reality & being pragmatic!
The model tried to replicate reality, but…
 The premium product was chosen in the model in 56% of cases
 BUT it was the stated choice in only 25% of cases
 Why?
– Model simpler (7 vs 15 features) ?
– Price differential lower?
– Consumers not logical - often CHEAP wins over quality?
And some conclusions are just not practical….
 We concluded that price anchoring had an impact - the jump from the ‘basic’ (shown in the
PCW) to the ‘premium’ can be too great to encourage upgrades so they could:
– LOWER the price of premium product & reduce covers – NOT recommended
– RAISE the price of the basic product – recommended
 However they would never do the latter – PCWs must show the CHEAPEST price possible!
Willingness to Pay in the Conjoint Space
Chris Moore
GfK UK
ADAN Innovation Symposium – Conjoint Analysis
November 10th
What is WTP
What we talk about when we talk about WTP
“Price at which the consumer is indifferent between buying and not buying the product,
given the alternative(s) available”
Jedidi/Zhang (2002), p. 1352
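Expressed as an indifference condition (a minimal restatement of the definition above in utility terms, not a formula from the slides):

```latex
% WTP as the indifference price p* for consumer i
U_i(\text{product at price } p^{*})
  \;=\; \max\{\, U_i(\text{available alternatives}),\; U_i(\text{not buying}) \,\}
\quad\Rightarrow\quad \mathrm{WTP}_i = p^{*}
```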
Markets are full of alternatives
WTP – there are three different terms to distinguish:
Reservation Price = Maximum Price = Willingness-to-Pay
Breidert (2006)
Apple vs Samsung
• A famous example is the $2.5 billion lawsuit against Samsung regarding infringement of patents
• Apple commissioned 2 conjoint studies to quantify the damages (one on iPhones and the other on tablets). Both
contained 7 attributes and 16 choice tasks
• The real-life dynamics of what people are willing to pay are more complex, as they need to take into account the
supply side of the equation as well as the demand side (the equilibrium price)
• The analyst was tasked with calculating the WTP for the demand side of the equation. Other experts provided the
supply side of the equation
• Apple were awarded $1 billion (at the initial hearing)
http://www.sawtoothsoftware.com/download/apple_v_samsung_conjoint_analysis.pdf
http://www.sawtoothsoftware.com/support/technical-papers/general-conjoint-analysis/assessing-the-monetary-value-of-attribute-levels-with-conjoint-analysis-warnings-and-suggestions-2001
Gilligan’s Island
http://www.sawtoothsoftware.com/support/technical-papers/general-conjoint-analysis/assessing-the-monetary-value-of-attribute-levels-with-conjoint-analysis-warnings-and-suggestions-2001
• One of the most important aspects within WTP is the notion of
competition
• WTP will vary depending on what the competition is!
• The Sawtooth paper discusses the WTP of a person trying to
escape Gilligan’s Island when there is only one way to escape
versus when there are multiple ways to escape
Common WTP methods
Approach 1: Post-hoc monetary scaling of utilities
Individual part-worth utility structures can be converted into utility differences.

[Chart: brand part-worth utilities – Brand A: 4.2, Brand B: 0.5, Brand C: -4.7]

Utility differences (row minus column):
            Brand A   Brand B   Brand C
Brand A        –        3.7       8.9
Brand B      -3.7        –        5.2
Brand C      -8.9      -5.2        –

By using a linear price parameter, utility differences can be scaled in monetary units.
[Chart: price part-worths of 2.0 / 0.0 / -2.0 at € 10 / € 30 / € 50, i.e. roughly 10 € per util]

            Brand A   Brand B   Brand C
Brand A        –       37 €      89 €
Brand B     -37 €        –       52 €
Brand C     -89 €     -52 €        –

Orme (2001).
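A minimal sketch of this post-hoc scaling, using the illustrative figures above (brand part-worths of 4.2 / 0.5 / -4.7 and a linear price effect of about 0.1 utils per euro). It reproduces the 37 € / 89 € / 52 € differences, but it is an illustration of the calculation rather than anyone's production code.

```python
# Sketch of Approach 1 (post-hoc monetary scaling) with the slide's illustrative figures:
# brand part-worths plus a linear price effect of 0.1 utils per euro
# (price part-worths of 2.0 / 0.0 / -2.0 at 10 / 30 / 50 euros).

brand_utils = {"Brand A": 4.2, "Brand B": 0.5, "Brand C": -4.7}
utils_per_euro = (2.0 - (-2.0)) / (50 - 10)   # 0.1 utils per euro, i.e. 10 euros per util

def monetary_differences(utilities, slope):
    """Convert pairwise utility differences into euro equivalents."""
    return {(a, b): round((ua - ub) / slope, 1)
            for a, ua in utilities.items()
            for b, ub in utilities.items() if a != b}

for pair, euros in monetary_differences(brand_utils, utils_per_euro).items():
    print(pair, euros)                         # e.g. ('Brand A', 'Brand B') 37.0
```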
Approach 1: Post-hoc monetary scaling of utilities
STRENGTHS
 No special software required
 Easy to implement
WEAKNESSES
 Assumption that Price is linear
 Not linked to a specific product
 Competition not included
 Not linked to actual purchase behavior (e.g.,
none option is not considered).
 Respondents who are highly insensitive to price will bias the estimates upwards (a cleaning procedure is required)
 Does not conform to the definition of WTP
Approach 2: Market compensation approach
Automotive example: adding Air Con to the trim line
[Diagram: the simulated share is 20% at £21,000 without Air Con; adding Air Con lifts it to 25%; raising the price to £21,600 brings the share back to 20%, i.e. a price premium of £600 for Air Con]
Orme (2001).
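A minimal sketch of the compensation search: add the feature to the product, then raise its price until its simulated share falls back to the baseline share. The logit share model, utilities and price sensitivity below are illustrative assumptions rather than the estimated model; with these toy numbers the search lands on a £600 premium, echoing the Air Con example above.

```python
import math

# Sketch of Approach 2 (market compensation): raise the price of the upgraded product
# until its simulated share returns to the original product's share. The share model,
# utilities and price sensitivity are illustrative assumptions.

def logit_shares(utilities):
    exps = [math.exp(u) for u in utilities]
    total = sum(exps)
    return [e / total for e in exps]

def compensating_premium(base_u, feature_u, price_u_per_gbp, competitor_us,
                         step=10.0, max_premium=5000.0):
    """Smallest premium (searched in 'step' increments) that restores the baseline share."""
    baseline = logit_shares([base_u] + competitor_us)[0]
    premium = 0.0
    while premium <= max_premium:
        test_u = base_u + feature_u - price_u_per_gbp * premium
        if logit_shares([test_u] + competitor_us)[0] <= baseline:
            return premium
        premium += step
    return None                                # share never returns within the range

# Toy example: the added feature is worth 0.3 utils; price sensitivity 0.0005 utils per £.
print(compensating_premium(base_u=0.0, feature_u=0.3, price_u_per_gbp=0.0005,
                           competitor_us=[0.2, -0.1]))      # -> 600.0
```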
Approach 2: Market compensation approach
STRENGTHS
 Linked to a specific product
 Realistic competitive environment
 Easy to interpret
 No special software required
WEAKNESSES
 WTP estimation relates only to the specific
product tested
 Not based on the idea of an indifference point,
but on the point at which the share of a product
returns to its original value
 Trial and error to find exact price difference
 Not guaranteed that change in share can be
compensated within the given price range
 While taking individual respondents into
account, the analysis is typically carried out at
the aggregate level
Approach 3: Individual point of indifference analysis
e.g. Miller et al. (2011)
[Diagram: for Respondent 1, the utility of the TEST product falls as its price rises, while competitors A, B, C and the None option are held fixed; the price at which the TEST product’s utility crosses that of the best alternative is the point of indifference, i.e. that respondent’s WTP]
Approach 3: Individual point of indifference analysis
How to arrive at WTP on a level basis for a given individual:
Run point of indifference analysis for
every combination of attribute levels
Contrast the WTP for test products with a
certain feature
 WTP_i = £20,000
 WTP_i(blue) = £22,000
 WTP_i(grey) = £18,000
 Premium_i(blue – grey) = +£4,000
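A minimal sketch of the indifference search for one respondent: interpolate that respondent's price part-worths, then find the price at which the test product's total utility drops to the utility of the best alternative (or the none option). All part-worths below are illustrative, and the answer is capped at the tested price range so WTP cannot run off to infinity. Repeating the search with different features switched in gives the level-based premiums described above.

```python
# Sketch of Approach 3 (individual point of indifference) for a single respondent.
# Part-worths are illustrative; price utility is interpolated linearly between the
# tested price points and capped at the tested range.

def wtp_indifference(base_utility, price_points, price_utils,
                     competitor_utilities, none_utility):
    """Price at which the test product's utility falls to the best alternative's utility."""
    target = max(competitor_utilities + [none_utility])
    prev_p, prev_u = price_points[0], base_utility + price_utils[0]
    if prev_u <= target:
        return prev_p                          # already indifferent at the lowest price
    for p, pu in zip(price_points[1:], price_utils[1:]):
        u = base_utility + pu
        if u <= target:                        # the crossing happens in this interval
            frac = (prev_u - target) / (prev_u - u)
            return prev_p + frac * (p - prev_p)
        prev_p, prev_u = p, u
    return price_points[-1]                    # constrained to the tested price range

# Toy respondent: product worth 3.0 utils before price, price part-worths at
# £16k / £20k / £24k, best competitor at 1.0 utils, none option at 0.5 utils.
print(wtp_indifference(3.0, [16000, 20000, 24000], [0.0, -1.0, -2.5],
                       [1.0], 0.5))            # ~£22,667
```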
Approach 3: Individual point of indifference analysis
STRENGTHS
 Takes into account a pre-defined competitive
environment
 WTP estimates are not linked to a specific product
 Based on the definition of WTP (indifference point)
 Less prone to outliers: WTP estimates are restricted to
a pre-defined range
 Not prone to human error
 Can fix attribute levels to be static and can respect
alternative specific rules
WEAKNESSES
 Clients need to be educated on how to interpret
the resulting estimates correctly
 Problems may arise if many respondents are
insensitive to price or the competitive scenario
contains dominating or dominated alternatives.
 Specialist routines need to be developed (in R
typically)
Other approaches
Sonnier, G., Ainslie, A., Otter, T. (2007), Heterogeneity Distributions of Willingness-to-Pay
in Choice Models, Quantitative Marketing & Economics, 5, 3, 313–331.
Allenby, Greg M., Jeff D. Brazell, John R. Howell, Peter E. Rossi, 2014. “Economic
Valuation of Product Features” Quantitative Marketing and Economics 12:421-456
Allenby, Greg M., Jeff Brazell, John R. Howell, Peter E. Rossi, 2014. “Valuation of
Patented Product Features” Journal of Law and Economics 3:629-663
Sawtooth Software (2012)
Observations
 WTP is not an objective measurement concept. The outcome is strongly based on assumptions.
Make these assumptions transparent for your client and help them to interpret the delivered figures
correctly
 The monetary scaling approach should not be used as it is likely to produce extreme results and
lacks theoretical foundation
 The individual point of indifference approach tends to lead to more conservative estimates but is
the only approach that measures WTP according to its true definition
 The price attribute should be constrained to avoid scenarios of infinite WTP
 Generally advised to use the Median WTP rather than average WTP
 Do not just give one WTP figure. Obtain WTP results under different competitive contexts and
give a range of WTP outputs
 WTP results can be heavily skewed by dominated and/or dominating competitors
 Keep in mind that you are calculating WTP from stated preferences. There is always a
hypothetical bias which distorts the measured WTPs*
 Has the conjoint produced an accurate measure of price sensitivity?
 Have the relevant attributes been included?
 Have the relevant competitors been included?
Observations
*Orme (2001).
References
Breidert, C. (2006), Estimation of Willingness-to-Pay, Theory, Measurement, Application, 1st ed.
Jedidi, K., Zhang, Z. J. (2002), Augmenting Conjoint Analysis to Estimate Consumer Reservation
Price, Management Science, 48, 10, 1350–1368.
Miller, K. M., Hofstetter, R., Krohmer, H., Zhang, Z. J. (2011), How Should Consumers' Willingness
to Pay Be Measured? An Empirical Comparison of State-of-the-Art Approaches, Journal of
Marketing Research, 48, 1, 172–184.
Orme, B. K. (2001), Assessing the Monetary Value of Attribute Levels with Conjoint Analysis:
Warnings and Suggestions, Sawtooth Software Research Paper Series.
When the marketplace seems
too big: Using evoked sets to
model how shoppers buy
Kees van der Wagt | Senior Director Methodology &
Innovation
November 2016
Conjoint analysis used
to understand tradeoffs
• Many shopper decisions involve
tradeoffs
• Conjoint analysis can be used to
understand and predict how shoppers
will make tradeoffs
Some tradeoffs occur in
a large competitive space
• A Grocery Store may have hundreds of SKUs relevant
to your category
• We can program realistic shelf sets where we vary
prices and products to understand tradeoffs
• But a computer screen is not a store
What if we have too many products
to show on a computer screen?
Evoked sets can help
when you have a large
market space
• Most shoppers make tradeoffs
between a smaller set of products in
their consideration set
• For each respondent, we can customize
the conjoint screen to show only those
products that are relevant to them
Additional Reasons to Use Evoked Set
Easier for
respondent to
focus
Respondent more
engaged
Survey seems
more relevant
Better data quality
How do we customize the products shown?
Ask respondents to tell us what products are relevant to them
Past behavior | Future behavior | Required features / Unacceptable features
Multiple screening criteria to avoid eliminating items hastily
Custom shelf sets require
programming expertise
Customized but Structured and Meaningful Shelf Set
Evoked from
multiple
screening
criteria
Random
non-evoked
products
Rules
apply
Disadvantages of Evoked Set
We may be eliminating
some products the
respondent would buy
Introduces “Selection Bias” – we must do more
complex modeling to account for this
Evoked Sets Require Analytical Expertise
1) Selection Bias
Most mathematical models assume
this missing data is missing at random
Raw conjoint data only shows that a
respondent has not seen certain items
Need to inform our predictive model that missing means “undesirable”
A. Add Synthetic Data
1. Add non-evoked items to model (not picked)
2. Define Threshold
> Evoked products beat a threshold
> Other products lose to threshold
B. Respondent Level Penalized
Regression
> Individual level constraints
> Can set predictions at 0
Explanation of threshold
[Diagram: evoked products sit above the threshold (+); non-evoked products sit below it (–). A sketch of this synthetic-data approach follows.]
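A minimal sketch of option A above: append synthetic threshold tasks in which every evoked item beats a synthetic threshold alternative and every non-evoked item loses to it, so the estimation learns that "not shown" means "undesirable" rather than missing at random. The task layout is a simplified stand-in for real CBC data, not SKIM's implementation.

```python
# Sketch of adding synthetic threshold tasks (option A above). Each task is a dict
# {"alternatives": [...], "chosen": index}; "THRESHOLD" is a synthetic reference
# alternative. The data layout is a simplified stand-in for real CBC data.

THRESHOLD = "THRESHOLD"

def synthetic_threshold_tasks(evoked, non_evoked):
    """Evoked items beat the threshold; non-evoked items lose to it."""
    tasks = [{"alternatives": [item, THRESHOLD], "chosen": 0} for item in evoked]
    tasks += [{"alternatives": [item, THRESHOLD], "chosen": 1} for item in non_evoked]
    return tasks

def augment_respondent(observed_tasks, evoked, all_items):
    """Append the synthetic tasks to a respondent's observed choice tasks."""
    non_evoked = [item for item in all_items if item not in evoked]
    return observed_tasks + synthetic_threshold_tasks(evoked, non_evoked)

# Example: a respondent evoked 3 of 6 SKUs; 6 synthetic tasks are appended.
evoked = ["SKU_A", "SKU_B", "SKU_C"]
all_items = evoked + ["SKU_D", "SKU_E", "SKU_F"]
print(len(augment_respondent(observed_tasks=[], evoked=evoked, all_items=all_items)))  # 6
```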
Evoked Sets Require Analytical Expertise
2) Large Marketplace Means Sparsity of Data
Sparsity → easy to overfit the data → calibrate/tune the model for sparsity
Evoked Sets Require Analytical Expertise
3) Large Marketplace Typically Has Nesting Structure
Some items are grouped together as more similar to each other – shoppers are more likely to choose between these
[Diagram: e.g. Brand A splits into Diet / Not-Diet; Brand B splits into Size 1 / Size 2 / Size 3]
Use Nested Logit or similar approach
Ensembles of Different Nests
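To show what "Nested Logit or similar" means in practice, here is a minimal two-level nested logit probability calculation; the nests follow the brand/variant grouping sketched above, and the utilities and per-nest lambda parameters are illustrative assumptions rather than estimated values.

```python
import math

# Sketch of a two-level nested logit: items within a nest share unobserved similarity,
# controlled by a per-nest parameter lambda in (0, 1]. Nests follow the brand/variant
# grouping sketched above; utilities and lambdas are illustrative.

def nested_logit_probs(nests):
    """nests: {nest: {"lambda": float, "items": {item: utility}}} -> item probabilities."""
    inclusive = {}
    for name, nest in nests.items():
        lam = nest["lambda"]
        inclusive[name] = lam * math.log(sum(math.exp(u / lam)
                                             for u in nest["items"].values()))
    nest_denom = sum(math.exp(v) for v in inclusive.values())
    probs = {}
    for name, nest in nests.items():
        p_nest = math.exp(inclusive[name]) / nest_denom
        lam = nest["lambda"]
        within_denom = sum(math.exp(u / lam) for u in nest["items"].values())
        for item, u in nest["items"].items():
            probs[item] = p_nest * math.exp(u / lam) / within_denom
    return probs

example = {
    "Brand A": {"lambda": 0.5, "items": {"A Diet": 0.4, "A Not-Diet": 0.2}},
    "Brand B": {"lambda": 0.7, "items": {"B Size1": 0.1, "B Size2": 0.0, "B Size3": -0.2}},
}
print(nested_logit_probs(example))             # probabilities sum to 1 across all items
```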
Conclusion
Evoked Sets Enable Us to Study
a Large Marketplace of Products
> Survey customized to respondent
> More engaged respondents
> Requires programming expertise
Evoked Sets Require Careful Screening
> Adding other products to evoked set is recommended
Evoked Sets Require Analytical Expertise
> Solutions to Selection Bias
> Calibrate for Data Sparsity
> Model Natural Groupings or Nests
Contact me
Kees van der Wagt
Tel: +31 10 282 3500
email: k.vanderwagt@skimgroup.com
www.skimgroup.com
@skimgroup
The impact of
choice environment
on choice behavior
Studio GerART
Gerard Loosschilder
ONCE UPON A TIME …
ONE DAY ….
Animal Welfare Conference
Call to action
Hotel Chain, Type
& Style
Distance to city
center
Placement of the 50 hotels on the results page – top to bottom
Review score on cleanliness, staff and facilities, resulting in a mean score and a label
Including room price per
night
MEANWHILE @ THE BOOKING SITE
CHARLOTTE, INTERACTION DESIGNER
Introducing
Filter functions
on price and ratings
Sort functions on
price and rating
BACK TO ASTRID
One year later
THE RESULTS ARE IN!
Meanwhile, back in the office …
Of those having the functions available:
67% use sort and/or filter functions at least once across the four tasks
47% use the filter function at least once (42% filter on price, 27% filter on rating)
40% use the sort function at least once (34% sort on price, 11% sort on rating)
33% do not use the functions, not even once
Ideal situation
[Chart: likelihood of choosing a room (0–12%) by position on the results page (positions 1–50) – the ideal situation]
Situation before redesign
[Chart: likelihood of choosing a room by position on the results page – sort & filter not available]
Sort & filter made available
[Chart: likelihood of choosing a room by position on the results page – sort & filter available]
If sort & filter are used
[Chart: likelihood of choosing a room by position on the results page – sort & filter used]
If sort & filter are not used
[Chart: likelihood of choosing a room by position on the results page – sort & filter not used]
All compared
[Chart: likelihood of choosing a room by position on the results page – all four conditions overlaid: S&F not available, S&F available, S&F used, S&F not used]
Top box satisfaction if sort and filter are…
Not available: 56%
Available: 62%
OSCAR STRIKES BACK
But then,
Average room price if filter is…
Not available: €123
Available: €116
Not used: €126
Used: €102
Use of the none option, if filter was…
Not available: 12%
Not used: 12%
Used: 17%
WHO DID WIN?
So what?
 Flatter distribution of choices
 Higher task satisfaction
 Lower room prices
 Higher drop-out rates
WHAT’S IN IT FOR YOU?
Dear Colleagues
Dude it’s just a story
CHANGE AHEAD
Stakeholder: the market
Stakeholder: the protagonist
Stakeholder: the antagonist
Experiment with us
Gerard Loosschilder, Paolo Cordella,
Jean-Pierre van der Rest and Zvi Schwartz
gerard.loosschilder@gmail.com
www.studiogerart.com