DIY Max-Diff webinar slides

Slides used in a webinar on Max-Diff, presented by Tim Bock on 30 May 2017.

  1. TIM BOCK PRESENTS: DIY Max-Diff. If you have any questions, enter them into the Questions field. Questions will be answered at the end. If we do not have time to get to your question, we will email you. We will email you a link to the video, slides, and data. Get a free one-month trial of Q from www.q-researchsoftware.com.
  2. AGENDA | DIY MAX-DIFF: When to use max-diff · Experimental design · Counting analysis (bad) · Latent class analysis · Computing the preference share for each respondent
  3. Thinking about the type of person you would like to have as the President of the USA, how appealing are these characteristics to you? Decent/ethical · Good in a crisis · Concerned about global warming · Entertaining · Plain-speaking · Experienced in government · Concerned about poverty · Male · Healthy · Focuses on minorities · Has served in the military · From a traditional American background · Successful in business · Understands economics · Multilingual · Christian
  4. (Image-only slide: screenshot, no recoverable text)
  5. (Image-only slide: screenshot, no recoverable text)
  6. A max-diff question (one of 10 questions, each asked with a different subset of the alternatives). An experimental design indicates which alternatives are shown in which question.
  7. (Image-only slide: screenshot, no recoverable text)
  8. Use max-diff when:
     1. Ratings are likely to get too many ties
     2. There are too many items to rank
     3. Respondents are going to provide noisy data, such as when they are tired, are lazy, or change their mind
  9. Typical applications
     • Understanding preferences. E.g.:
       • Preferences between new products ("should we launch concept A, B, C, etc.")
       • Preferences for existing brands
       • Message testing
     • Segmentation: identify groups of people that differ in the importance they assign to different attributes, traits, values, characteristics, etc.
     • General-purpose measurement: collecting data that can be used in lots of different ways. For example:
       • As one of multiple different types of data in segmentation
       • As general profiling data, used to contextualize other variables in a study, in much the same way as is done with demographics
  10. AGENDA | DIY MAX-DIFF: When to use max-diff · Experimental design · Counting analysis (bad) · Latent class analysis · Computing the preference share for each respondent
  11. Worked example: the appeal of 10 technology brands (Apple, Microsoft, IBM, Google, Intel, Samsung, Sony, Dell, Yahoo, Nokia)
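The slides do not show how such a design is constructed (Q and Displayr generate it for you), but a minimal Python sketch may make the idea concrete. It builds a design with the structure this worked example uses later in the deck: 6 questions of 5 alternatives, with each of the 10 brands appearing exactly 3 times. The function name and the rejection-sampling approach are mine, not the deck's or Q's.

```python
import random

BRANDS = ["Apple", "Microsoft", "IBM", "Google", "Intel",
          "Samsung", "Sony", "Dell", "Yahoo", "Nokia"]

def make_design(brands, n_questions=6, per_question=5, seed=None):
    """Randomly build a max-diff design: each brand appears the same
    number of times overall and never twice within one question."""
    rng = random.Random(seed)
    repeats = n_questions * per_question // len(brands)  # 3 appearances each
    while True:  # rejection sampling: reshuffle until no within-question duplicates
        slots = brands * repeats
        rng.shuffle(slots)
        design = [slots[i * per_question:(i + 1) * per_question]
                  for i in range(n_questions)]
        if all(len(set(q)) == per_question for q in design):
            return design

for i, question in enumerate(make_design(BRANDS, seed=1), start=1):
    print(f"Q{i}: {', '.join(question)}")
```

A production design tool would also balance how often each pair of brands appears together; this sketch enforces only equal appearance counts and no repeats within a question.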
  12. Typical applications and their implications for experimental design:
      • Understanding preferences (e.g., preferences between new products, preferences for existing brands, message testing):
        • A separate design is best for each person
        • The design can be "poor" for each person, so long as it is good in aggregate
        • A large number of alternatives can be included in the study
      • Segmentation (identify groups of people that differ in the importance they assign to different attributes, traits, values, characteristics, etc.):
        • The design needs to be "good" for each person
        • Each person should see the same design
        • A smaller number of alternatives should be included in the study (e.g., less than 20)
      • General-purpose measurement (collecting data that can be used in lots of different ways, e.g., as one of multiple types of data in segmentation, or as general profiling data):
        • Same requirements as segmentation: a "good" design for each person, the same design for everyone, and a smaller number of alternatives (e.g., less than 20)
  13. Randomization of the order of alternatives
      • Randomize the order in which alternatives appear, with one order per respondent (for example, Apple at the top for the first respondent, in the middle for the next respondent, etc.)
      • Randomize the order in which the questions appear (for example, the first respondent sees questions 4, 3, 1, 2, 6, and 5; the next sees 6, 3, 2, 1, 5, 4; etc.)
      If the randomization has an effect, it will be a source of variance in any between-respondent comparisons, reducing the validity of any resulting segmentation. Tip: have the data collection software do the randomization, and remove it from the data prior to doing any analysis. (A sketch of this bookkeeping follows.)
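As a sketch of the bookkeeping the tip describes, the hypothetical helper below (not part of Q) presents each respondent with their own random question order and alternative order, while carrying the canonical question index so the randomization can be stripped out before analysis:

```python
import random

def randomize_for_respondent(design, respondent_id):
    """Return the questions in a per-respondent random order, with the
    alternatives shuffled within each question. The canonical question
    index is kept so the randomization can be removed before analysis."""
    rng = random.Random(respondent_id)  # reproducible per respondent
    question_order = list(range(len(design)))
    rng.shuffle(question_order)
    presented = []
    for q_index in question_order:
        alternatives = design[q_index][:]  # copy, leaving the design intact
        rng.shuffle(alternatives)
        presented.append({"canonical_question": q_index + 1,
                          "alternatives": alternatives})
    return presented

# Example with a toy two-question design:
design = [["Apple", "Google", "Sony"], ["Dell", "Nokia", "IBM"]]
for q in randomize_for_respondent(design, respondent_id=42):
    print(q["canonical_question"], q["alternatives"])
```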
  14. More advanced experimental design issues
      • Too many alternatives for each person to evaluate all of them
      • Prohibitions: sets of alternatives that should not be shown together
      • Anchored max-diff: combination of max-diff with other data
      See http://docs.displayr.com/wiki/Creating_Max-Diff_Experimental_Designs#Advanced_designs_for_max-diff
  15. AGENDA | DIY MAX-DIFF: When to use max-diff · Experimental design · Counting analysis (bad) · Latent class analysis · Computing the preference share for each respondent
  16. Counting analysis gives the wrong answers, as:
      • It ignores the experimental design
      • It does not deal with inconsistent preferences
      • It ignores differences between people
  17. Problems with counting analysis: Example 1
      Brand        Best   Worst   Best - Worst
      Apple         464     155            309
      Google        348      40            308
      Samsung       333     103            230
      Sony          227      69            158
      Microsoft     187      87            100
      Dell           82     255           -173
      Nokia          64     282           -218
      Intel          48     176           -128
      IBM            32     314           -282
      Yahoo          27     331           -304
      • If we look at the number of times Apple is chosen as Best, it is the clear winner
      • But, if we look at the Best - Worst scores, Apple and Google are tied
      • Which of these analyses is correct? Both seem plausible at first look, but they cannot both be valid. Neither is valid…
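A counting analysis like this table is trivially easy to compute, which is much of its appeal. A sketch over toy long-format data (the tuple layout is an assumption, not the webinar's file format):

```python
from collections import Counter

# Toy long-format data standing in for the survey file:
# (respondent, question, brand chosen as Best, brand chosen as Worst).
responses = [
    (1, 1, "Apple", "IBM"),
    (1, 2, "Apple", "Nokia"),
    (2, 1, "Google", "Nokia"),
    (2, 2, "Apple", "Yahoo"),
]

best = Counter(r[2] for r in responses)
worst = Counter(r[3] for r in responses)
brands = sorted(set(best) | set(worst),
                key=lambda b: best[b] - worst[b], reverse=True)

print(f"{'Brand':<10}{'Best':>6}{'Worst':>7}{'B-W':>6}")
for b in brands:
    print(f"{b:<10}{best[b]:>6}{worst[b]:>7}{best[b] - worst[b]:>6}")
```

As the following examples show, easy to compute does not mean valid.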
  18. Problems with counting analysis: Example 2
      Times chosen as Best (% of respondents):
      Brand       Never   Once   Twice   3 times
      Apple          35     13      16        36
      Microsoft      58     28       8         6
      IBM            90      9       1         0
      Google         32     32      23        12
      Intel          88     10       2         1
      Samsung        46     18      17        20
      Sony           53     26      13         8
      Dell           80     15       3         2
      Yahoo          94      3       2         0
      Nokia          86      9       4         2
      • In this experiment, each alternative appeared three times. So, if a brand is chosen as best three times, we know it is most preferred.
      • Samsung is clearly the second most preferred brand.
      • However, the counting analysis on the previous slide suggested that Google was second most preferred.
      • Counting analysis confuses breadth of popularity with strength of preference.
  19. Problems with counting analysis: Example 3 (respondent ID 1)
      Alternatives shown (Best and Worst choices recorded in each question):
      Q1: Apple, Microsoft, IBM, Google, Nokia
      Q2: Apple, Sony, Dell, Yahoo, Nokia
      Q3: Microsoft, Intel, Samsung, Sony, Nokia
      Q4: IBM, Google, Intel, Sony, Dell
      Q5: Microsoft, Google, Samsung, Dell, Yahoo
      Q6: Apple, IBM, Intel, Samsung, Yahoo
      Counting analysis (Best - Worst score): Microsoft 3; Dell, Google, Samsung 1; Sony, Intel, Apple 0; Yahoo -1; Nokia -2; IBM -3
      • The counting analysis shows that Yahoo is the 8th most popular of the brands for this respondent (i.e., the 3rd worst score, at -1, of any of the 10 brands).
      • This is based on Yahoo being chosen as Worst in Question 5.
      • However, in Question 5 Yahoo was up against the four most popular brands, so the actual data provides no evidence that Yahoo is any less popular than Sony, Intel, and Apple.
  20. Problems with counting analysis: Example 4 (same respondent, design, and counting scores as Example 3)
      • The counting analysis suggests that Microsoft is 3 times as popular as Google.
      • The data shows us that in the two questions where the person had a choice between Microsoft and Google (Questions 1 and 5), they chose Microsoft. It contains no information to suggest that Microsoft is three times as appealing as Google.
  21. Problems with counting analysis: Example 5 (respondent ID 13, shown the same design as in Example 3)
      • Question 1 tells us that Apple is preferred to Google
      • Question 5 tells us that Google is preferred to Samsung
      • Therefore, Apple is preferred to Samsung
      • But, Question 6 tells us that Samsung is preferred to Apple
      • Such inconsistencies are typical in survey data
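Inconsistencies of this kind can be detected mechanically. The sketch below builds the pairwise "preferred to" relation implied by Best/Worst choices and searches for three-way cycles; the choice data for respondent 13 is hypothetical, reconstructed to match the slide's reading of Questions 1, 5, and 6:

```python
from itertools import permutations

def pairwise_wins(questions):
    """Pairwise preferences implied by max-diff choices: the Best
    alternative beats everything shown; everything shown beats the Worst."""
    wins = set()
    for shown, best, worst in questions:
        for alt in shown:
            if alt != best:
                wins.add((best, alt))
            if alt != worst:
                wins.add((alt, worst))
    return wins

# Hypothetical Best/Worst choices consistent with the slide's inferences:
questions = [
    (["Apple", "Microsoft", "IBM", "Google", "Nokia"], "Apple", "Nokia"),      # Q1
    (["Microsoft", "Google", "Samsung", "Dell", "Yahoo"], "Google", "Yahoo"),  # Q5
    (["Apple", "IBM", "Intel", "Samsung", "Yahoo"], "Samsung", "Yahoo"),       # Q6
]

wins = pairwise_wins(questions)
items = {alt for pair in wins for alt in pair}
for a, b, c in permutations(sorted(items), 3):
    # Report each 3-cycle once (a is the alphabetically first member).
    if a < b and a < c and (a, b) in wins and (b, c) in wins and (c, a) in wins:
        print(f"Inconsistent: {a} > {b} > {c} > {a}")
```

On this data it prints the cycle the slide describes: Apple > Google > Samsung > Apple.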
  22. More advanced methods, such as latent class analysis, mitigate or solve all of these problems
  23. AGENDA | DIY MAX-DIFF: When to use max-diff · Experimental design · Counting analysis (bad) · Latent class analysis · Computing the preference share for each respondent
  24. Process for latent class analysis of max-diff
      • Create > Marketing > Max-Diff > Latent Class Analysis
      • Set Questions left out for cross validation to about 20% of your max-diff data (e.g., to 1 if you have 6 questions per respondent)
      • Repeat for 1 through 10 segments
      • Select the number of segments based on (see the BIC sketch after this list):
        • Bayesian Information Criterion (BIC): smaller is better
        • Prediction accuracy (only if using cross-validation)
        • Stability of preference shares (i.e., similar with one fewer and one more class)
        • Good discrimination within the segments (a small number of preferred alternatives)
        • Ease of explaining to clients
      • Re-run with the chosen number of segments and Questions left out for cross validation set to 0
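For the BIC criterion above, it may help to see the arithmetic. A minimal sketch with illustrative numbers only (the log-likelihoods are made up, and the parameter count assumes a standard latent class logit: one utility per alternative minus one, per class, plus the class sizes):

```python
import math

def bic(log_likelihood, n_params, n_obs):
    """Bayesian Information Criterion: smaller is better."""
    return n_params * math.log(n_obs) - 2 * log_likelihood

# Illustrative numbers only (not the webinar's data): 300 respondents
# answering 6 questions gives 1,800 observations; a c-class model of
# 10 alternatives has 9 utility parameters per class plus c - 1
# class-size parameters.
n_obs = 300 * 6
for c, loglik in [(4, -3050.0), (5, -2980.0), (6, -2965.0)]:
    k = 9 * c + (c - 1)
    print(f"{c} segments: {k} parameters, BIC = {bic(loglik, k, n_obs):.0f}")
```

Note how the penalty term grows with each added class, so a larger model must buy a real improvement in log-likelihood to win.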
  25. How many segments?
      • Predictive accuracy (cross-validation) is maximized at 5 segments
      • The BIC is lowest at 9 segments, but the differences are very small from 6 onwards
      • The average preference shares for the different brands stabilize after 6 segments
      • I would probably choose 5 segments in this example, as I really like predictive accuracy as a criterion
  26. Advanced Latent Class Analysis in Q
      • Two applications:
        • Latent class analysis of anchored max-diff
        • Latent class analysis using multiple different types of data (e.g., max-diff + choice model + ratings)
      • These are done in Q by:
        • Setting up the max-diff as an Experiment question: http://wiki.q-researchsoftware.com/wiki/Marketing_-_Max-Diff_-_Max-Diff_Setup_from_an_Experimental_Design
        • Using the standard latent class analysis option (Create > Segments > Latent Class Analysis)
  27. AGENDA | DIY MAX-DIFF: When to use max-diff · Experimental design · Counting analysis (bad) · Latent class analysis · Computing the preference share for each respondent
  28. Typical applications, with implications for experimental design and analysis:
      • Understanding preferences (e.g., preferences between new products, preferences for existing brands, message testing):
        • Design: a separate design is best for each person; the design can be "poor" for each person, so long as it is good in aggregate; a large number of alternatives can be included in the study
        • Analysis: (1) compute the preference share for each person (e.g., using latent class analysis, hierarchical Bayes, varying coefficients); (2) compute the average preference share
      • Segmentation (identify groups of people that differ in the importance they assign to different attributes, traits, values, characteristics, etc.):
        • Design: needs to be "good" for each person; each person should see the same design; a smaller number of alternatives should be included (e.g., less than 20)
        • Analysis: use latent class analysis to identify preference shares within segments
      • General-purpose measurement (collecting data that can be used in lots of different ways, e.g., as one of multiple types of data in segmentation, or as general profiling data):
        • Design: same as for segmentation
        • Analysis: (1) compute the preference share for each person (e.g., using latent class analysis, hierarchical Bayes, varying coefficients); (2) use these preference shares in other analyses (e.g., comparing averages by other groups)
  29. Computing the preference share for each respondent
      • Method 1: Using a standard max-diff latent class model
        • Select the output in Q
        • Create > Marketing > Max-Diff > Save variable(s) > Compute Preference Shares
      • Method 2: Same as Method 1, but with many more classes (e.g., 20). If this is too slow, use the Advanced Latent Class Analysis method instead, as it is faster and has no time limit set on it.
      • Method 3: Normal mixing distribution (aka hierarchical Bayes)
        • On the Data tab, set the Case IDs (top-left of the screen)
        • Set up as Advanced Latent Class Analysis (see the earlier slide)
        • Set Number of segments to 1
        • Latent Class Analysis > Advanced, and set Distribution to Multivariate Normal – Full Covariance
        • Press OK twice
        • Right-click on the tree and select Save Individual-Level Parameter Means and Standard Deviations
        • Create > Marketing > Max-Diff > Compute Preference Shares from Individual-Level Parameter Means (All Alternatives)
      • Method 4: Mixtures of normal mixing distributions. Same as Method 3, except:
        • Set Number of segments to Automatic or some number
        • In Advanced, untick Pooled (to the right of Multivariate Normal)
      • Method 5: Varying coefficients
        • Create > Marketing > Max-Diff > Varying Coefficients
        • Set up as for latent class analysis
        • Select additional predictor variables as Varying Coefficients
      • Method 6: Ensemble
        • Use all the methods above
        • Ignore the results of any method that performs poorly (e.g., based on BIC or, if you can compute it, cross-validation)
        • For each respondent, compute their preference share as the average of the preference shares from the different methods
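Whichever method produces the individual-level parameters, turning them into preference shares is the logit (softmax) transformation over all alternatives. A minimal sketch, with made-up utilities for one respondent:

```python
import math

def preference_shares(utilities):
    """Turn one respondent's utilities (one per alternative, log scale)
    into shares summing to 1 via the logit (softmax) transformation."""
    peak = max(utilities.values())  # subtract the max for numerical stability
    exp_u = {alt: math.exp(u - peak) for alt, u in utilities.items()}
    total = sum(exp_u.values())
    return {alt: e / total for alt, e in exp_u.items()}

# Made-up individual-level parameter means for one respondent:
utilities = {"Apple": 1.9, "Google": 1.7, "Samsung": 0.8,
             "Sony": 0.2, "Microsoft": 0.0}
for alt, share in sorted(preference_shares(utilities).items(),
                         key=lambda pair: -pair[1]):
    print(f"{alt:<10}{share:.1%}")
```

Method 6 would then average each respondent's shares across whichever of the methods survive the quality screen.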
  30. TIM BOCK PRESENTS: Q&A Session. Type questions into the Questions field in GoToWebinar. If we do not get to your question during the webinar, we will write back via email. We will email you a link to the slides and data. Get a free one-month trial of Q from www.q-researchsoftware.com.
