
- 1. An Introduction to MaxDiff
  Webinar: Tuesday, August 24th, 2010
- 2. Agenda
  What is MaxDiff?
  Why all the fuss?
  Problems with ratings scales, and why MaxDiff is better
  Fielding and analyzing MaxDiff questions
  Getting started
- 3. What is MaxDiff?
  Maximum Difference Scaling (MaxDiff) is a way of evaluating the importance (or preference) of a number of alternatives.
  It is a discrete choice technique: respondents are asked to make simple best/worst choices.
  MaxDiff has the advantage that it is very simple for the respondent, but gives extremely rich information to the researcher.
- 4. But Wait! I Can Do That Without MaxDiff!
  Traditionally, marketing research has used ratings scales to determine importance or preference.
  There seems to be so much more information in a ratings grid. So why not just use ratings scales?
- 5. The Dark Secrets of Importance Ratings Scales
  Despite their popularity, ratings scales have several significant flaws that researchers are often unaware of, including:
  Scale-use bias
  Poor discrimination and comparison
  Poor predictive capability
  Cultural variance
  Let's look at some of these issues.
  For a good overview, see "Testing Alternatives to Importance Ratings", Chrzan and Golovashkina, 2007.
- 6. Scale-Use Bias
  Are these respondents really different in their preferences?
- 7. Ratings Scales and the Lack of Constraints
  A big problem with traditional ratings scales is that they do not force the respondent to make a choice. Often this can mean that our data is meaningless.
  Consider the following question:
  Believe it or not, we once saw almost exactly this question fielded in a study. Please don't do this!
- 8. The Advantages of MaxDiff Methods
  OK! I am convinced that traditional ratings scales have many problems. But what about MaxDiff?
  A well-designed MaxDiff exercise has the following properties:
  It is easy for respondents.
  It is culturally invariant (as "most" and "least" are easily translated and leave little room for interpretation).
  It forces a trade-off: respondents cannot just say everything is "very important".
  There is no scale bias.
  The data can be very powerful for predictive and clustering purposes.
  It is robust for testing a "laundry list" of unalike things.
- 9. MaxDiff Questions
  In its simplest form, a MaxDiff question is just a list of alternatives, with the respondent asked to identify the Most/Least (or Best/Worst) pair: i.e. the ones with the "Maximum Difference".
  For a small number of alternatives, asking a single question can suffice.
- 10. MaxDiff with Multiple Questions
  When there are more alternatives, we will want to ask each respondent multiple MaxDiff questions, with different combinations of alternatives in each "task".
  The number of tasks shown depends on the total number of alternatives and the number shown per task. Specialist software is typically used to design the tasks, e.g. SPSS ORTHOPLAN or Sawtooth.
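The design idea above can be sketched with a naive greedy allocator that balances how often each item appears across tasks. This is a hypothetical stand-in for specialist design software (which also balances item pairings), and the item names are invented for illustration:

```python
import random

def design_tasks(items, items_per_task, n_tasks, seed=0):
    """Greedy balanced design: each task shows the items that have
    appeared least so far, breaking ties randomly. A naive stand-in
    for dedicated design software such as SPSS ORTHOPLAN or Sawtooth,
    which also balance co-occurrence of item pairs."""
    rng = random.Random(seed)
    counts = {item: 0 for item in items}
    tasks = []
    for _ in range(n_tasks):
        # Sort items by how rarely they have been shown, then randomly.
        order = sorted(items, key=lambda item: (counts[item], rng.random()))
        task = order[:items_per_task]
        for item in task:
            counts[item] += 1
        tasks.append(task)
    return tasks

items = ["Price", "Quality", "Brand", "Support", "Warranty", "Design"]
# 4 tasks x 3 slots = 12 appearances, so each of the 6 items shows twice.
tasks = design_tasks(items, items_per_task=3, n_tasks=4)
```

Because the greedy rule always picks the least-shown items, appearance counts stay within one of each other, which is the basic balance property a real design tool would also enforce.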
- 11. Analyzing the Results
  There are three major options for analyzing the results of a MaxDiff exercise, each with its own strengths and weaknesses:
  Count Analysis: Simply tallying the number of times each alternative is chosen as "most" or "least" important by the population.
    Pros: Very simple; provides population-level preferences.
    Cons: Limited usefulness for analysis beyond the basics.
  Logit Modeling: Using a logit model to calculate "utilities" for each alternative.
    Pros: Fast; scaled utilities can be compared, and "share of utility" can be calculated for the population and for segments.
    Cons: More complex than simple counts; aggregate-level results only.
  Hierarchical Bayes (HB) or Latent Class: More advanced mathematical techniques.
    Pros: Robust techniques that produce respondent-level utilities; results can be used in simulators or for segmentation.
    Cons: Require advanced software and practitioner expertise.
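The simplest of the three options, count analysis, can be sketched in a few lines. The response records and item names below are hypothetical; each record notes which items were shown in a task and which were picked as most and least important:

```python
from collections import Counter

# Hypothetical MaxDiff responses: items shown, plus best/worst picks.
responses = [
    {"shown": ["Price", "Quality", "Brand"], "best": "Price", "worst": "Brand"},
    {"shown": ["Price", "Support", "Design"], "best": "Price", "worst": "Design"},
    {"shown": ["Quality", "Brand", "Support"], "best": "Quality", "worst": "Brand"},
]

best = Counter(r["best"] for r in responses)
worst = Counter(r["worst"] for r in responses)
shown = Counter(item for r in responses for item in r["shown"])

# Best-minus-worst count, normalized by how often each item was shown,
# so items shown in more tasks are not unfairly advantaged.
scores = {item: (best[item] - worst[item]) / shown[item] for item in shown}
```

Sorting `scores` from highest to lowest gives the population-level preference ordering described above.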
- 12. Interpreting the Numbers
  Counts: The simple counts can be expressed as percentages of the number of times an item was shown that it was chosen as "best" or "worst". These can be ordered and reported. Some analysts like to show the differences (i.e. the number of times an item was identified as "best" minus the number of times it was identified as "worst").
  Utilities: Logit and HB methods give each attribute tested a "utility". Broadly speaking, this is a measure of how strongly the attribute contributes to a decision.
  Share of Utility: The utilities can be rescaled and converted into probabilities (using a logit transform). These can then be compared to give the relative weight of each attribute.
  Simulation and Segmentation: Respondent-level utilities can be further analyzed through segmentation or simulation.
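The "share of utility" step above is a standard logit transform: exponentiate each utility and normalize so the shares sum to one. A minimal sketch, using hypothetical utility values for invented attribute names:

```python
import math

# Hypothetical aggregate utilities from a logit model (zero-centered scale).
utilities = {"Price": 1.2, "Quality": 0.8, "Brand": -0.5, "Support": -1.5}

# Logit (softmax) transform: share = exp(u) / sum of exp(u) over all items.
denom = sum(math.exp(u) for u in utilities.values())
shares = {attr: math.exp(u) / denom for attr, u in utilities.items()}
```

Because the transform preserves ordering, the attribute with the highest utility also gets the largest share, and the shares can be read as relative weights that sum to 100%.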
- 13. Example: A Custom MaxDiff Simulator
- 14. Some Tips for Effective MaxDiff
  Start simple: Don't try to cram too much into your MaxDiff.
  Keep it short: Even though MaxDiff is a very easy task for respondents, having too many items in a task, or too many tasks, will cause respondents to answer with less care.
  Make the scenario specific: A good scenario leaves the respondent with no doubt how to answer, and makes your analysis more effective.
- 15. Getting Started
  MaxDiff is an exciting technique that is growing in popularity, but researchers still need to move away from traditional importance ratings and toward these newer techniques.
  While the more advanced uses of MaxDiff are incredibly powerful, don't be intimidated by their complexity or cost: even a simple MaxDiff with count analysis can be better at extracting true user importance than ratings scales.
  Single-question MaxDiff with logit utilities analysis is available now from Survey Analytics; see the following demonstration.
- 16. Parametric Marketing LLC
  Parametric® provides research and analytics services to business professionals and policy makers.
  We specialize in analyzing consumer behavior and advising companies on the financial implications of product, pricing, promotional, and brand decisions. With our expertise in advanced research, data mining, and business and financial modeling, we are uniquely equipped to help our clients address their most demanding problems. Our innovative, interactive tools bring advanced techniques, such as conjoint, discrete choice modeling, and customer valuation, within the reach of real business people.
  We also license our technology and provide analytics consulting to marketing research industry partners.
  Parametric was founded in 2003 by Chris Robson PRC and Scott Laing PRC.
  400 E Evergreen Boulevard, Suite 303
  Vancouver WA 98660
  Phone: +1 360.696.2929
  info@paramktg.com
  www.paramktg.com
