
What your customers really think about you (parts 1 & 2)

Lori Gauthier, Ph.D., Director of Marketing Research, Zendesk

In this two-session workshop, you’ll learn how to create survey questions that deliver insightful responses and inspire measurable actions. Each attendee will leave the workshop with two surveys that quickly and accurately measure customer satisfaction (CSAT) and customer effort (CES) -- two surveys that can be used by any organization whether they serve customers, employees, students, volunteers, vendors, or the general public. During the first session, Lori will review the key do's and don'ts of designing methodologically sound surveys. You'll learn how to accurately define and measure what you really want to measure while avoiding data-destroying random error and bias.


  1. #RelateLive | What Your Customers Really Think About You. Part 1: Do's and Don'ts of Survey Design
  2. Lori Gauthier, Ph.D., Zendesk Director of Marketing Research, @datadocgauthier
  3. Know What You Need from Your Data: Destination → Information → Construct → Question
  4. What Are You Measuring? Are You Sure?
  5. "I know you think you understand what you thought I said, but I'm not sure you realize that what you heard is not what I meant." - Unknown
  6. Define What You Need to Measure. Words mean things: search definitions, synonyms, and antonyms.
  7. Source: snappywords.com
  8. Define What You Need to Measure. Words mean things: search definitions, synonyms, and antonyms. Use the language and tone appropriate for your population. Result: respondents answer the question you think you're asking.
  9. What Questions Should You Ask? What Response Options Should You Provide? Understanding construct polarity and scale sensitivity.
  10. Which Way Do We Go? Construct polarity:
      Unipolar construct: very common; typically specific; often descriptive. Measures absence to maximum (e.g., not at all likely → extremely likely); the midpoint represents half of the construct. A 5-point scale is ideal. Example question: "How likely are you to vote in a primary this year?" Typical constructs: likelihood, frequency, duration, intensity. Common labels: not at all, slightly, moderately, very, extremely; or none, a little, a moderate amount, a lot, a great deal.
      Bipolar construct: very rare; typically global; occasionally comparative. Measures maximum negative to maximum positive (e.g., disapprove a great deal → approve a great deal); the midpoint represents ambiguity or neutrality. A 7- or 9-point scale is ideal. Example question: "Do you approve or disapprove of negative campaigning?" Typical constructs: bad/good, dis/satisfied, dis/like, worse/better. Common labels (mirrored on both sides): extremely, very, moderately, slightly, neither/nor; or a great deal, a lot, a moderate amount, a little, neither/nor.
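The unipolar/bipolar distinction above can be sketched as labeled scale data with numeric codes. A minimal Python sketch; the names (`UNIPOLAR_5`, `BIPOLAR_7`, `code`) are hypothetical illustrations, not from the slides:

```python
# Hypothetical label sets illustrating the two construct types.
# Unipolar: absence -> maximum, coded 0..4 (midpoint = half the construct).
UNIPOLAR_5 = ["not at all", "slightly", "moderately", "very", "extremely"]

# Bipolar: maximum negative -> maximum positive, coded -3..+3
# (midpoint 0 = "neither/nor", i.e., ambivalence or neutrality).
BIPOLAR_7 = [
    "extremely dissatisfied", "moderately dissatisfied", "slightly dissatisfied",
    "neither satisfied nor dissatisfied",
    "slightly satisfied", "moderately satisfied", "extremely satisfied",
]

def code(scale, label, bipolar=False):
    """Map a verbal label to its numeric code."""
    i = scale.index(label)
    return i - len(scale) // 2 if bipolar else i

print(code(UNIPOLAR_5, "moderately"))  # 2
print(code(BIPOLAR_7, "neither satisfied nor dissatisfied", bipolar=True))  # 0
```

Coding the bipolar midpoint as 0 makes the "ambiguity or neutrality" reading explicit: negative codes mean dissatisfaction, positive codes satisfaction.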
  11. How Many Scale Points Should You Use? Ideal scale sensitivity, example 1. [Chart: response distributions on a unipolar 0-100 scale and a bipolar -100 to +100 scale]
  12. How Many Scale Points Should You Use? Ideal scale sensitivity, example 2. [Chart: response distributions on a unipolar 0-100 scale and a bipolar -100 to +100 scale]
  13. How Many Scale Points Should You Use? Sensitivity is reduced as scale points are removed. Unipolar, 5 points: not at all likely, slightly likely, moderately likely, very likely, extremely likely → collapsed to 2 points: not likely, likely (????)
  14. How Many Scale Points Should You Use? Sensitivity is reduced as scale points are removed. Bipolar, 9 points: dislike a great deal, dislike a lot, dislike a moderate amount, dislike a little, neither like nor dislike, like a little, like a moderate amount, like a lot, like a great deal
  15. How Many Scale Points Should You Use? Sensitivity is reduced as scale points are removed. Bipolar, 7 points: dislike a great deal, dislike a moderate amount, dislike a little, neither like nor dislike, like a little, like a moderate amount, like a great deal
  16. How Many Scale Points Should You Use? Sensitivity is reduced as scale points are removed. Bipolar, 5 points: dislike a great deal, dislike a moderate amount, neither like nor dislike, like a moderate amount, like a great deal
  17. How Many Scale Points Should You Use? Sensitivity is reduced as scale points are removed. Bipolar, 3 points: dislike a great deal, neither like nor dislike, like a great deal
  18. What Have We Learned So Far? A step-by-step approach to designing sound surveys: start at your destination → define your construct → scale your construct → draft your question
  19. Done with the Do's. Let's get to the Don'ts. Is Measurement Error Destroying Your Data?
  20. Stewie Data: look at him go!
  21. Stewie Data: look at him go! Random error: bad survey design can introduce data-destroying random error, making your data (and decisions) bounce all over the place.
  22. Rooting Out Random Error. So long, Stewie! Common sources: double-barreled question; unexpected scale direction; insensitive scale; overly sensitive scale; scale without midpoint; scale without verbal labels; overlapping scale labels; non-construct-specific scale; confusing question or scale; true/false, yes/no, or agree/disagree scale.
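A quick way to see what random error does to data: symmetric noise leaves the average roughly where it was but makes individual readings bounce. A toy simulation; the noise model is an illustrative assumption, not from the slides:

```python
import random

random.seed(42)

def observe(true_score, noise):
    """One response: true score plus symmetric random error, clipped to 1..7."""
    return min(7, max(1, true_score + random.randint(-noise, noise)))

true_scores = [4] * 1000          # everyone truly feels "4" on a 1..7 scale
clean = [observe(s, 0) for s in true_scores]
noisy = [observe(s, 2) for s in true_scores]  # confusing question adds +/-2 noise

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# The mean barely moves, but the variance balloons: "Stewie data".
print(round(mean(clean), 2), round(variance(clean), 2))   # 4.0 0.0
print(round(mean(noisy), 2), round(variance(noisy), 2))
```

The averages agree, so a single summary number hides the problem; the inflated variance is what makes period-over-period comparisons bounce all over the place.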
  23. Tower of Pisa Data: one way or another, it's gonna getcha! Systematic error: bad survey design can introduce data-destroying systematic error, leading you to make biased decisions.
  24. Banishing Bias. Arrivederci, Pisa! Common sources: unbalanced scale; leading question; true/false, yes/no, or agree/disagree scale; missing extreme endpoints; bipolar scale without midpoint; order effects; context effects; unbalanced question; question formatted as a statement.
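Systematic error behaves differently: it pushes every answer the same way, so the average itself moves. A toy sketch, assuming a simple upward "acquiescence lift" (an illustrative assumption, not a model from the slides):

```python
import random

random.seed(0)
true_scores = [random.randint(1, 7) for _ in range(1000)]

def observe_biased(true_score, lift=1):
    """Acquiescence-style bias pushes every answer upward by `lift`, capped at 7."""
    return min(7, true_score + lift)

biased = [observe_biased(s) for s in true_scores]

def mean(xs):
    return sum(xs) / len(xs)

# Like the leaning tower, the whole distribution tilts one way: the mean shifts up.
print(round(mean(true_scores), 2), round(mean(biased), 2))
```

Unlike random error, this cannot be averaged away with more responses; the inflated ratings look stable and precise, which is exactly what makes the resulting decisions biased.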
  25. Phew! That's a Lot of Stuff to Remember. Let's Recap.
  26. What Have We Learned So Far? A step-by-step approach to designing sound surveys: start at your destination → define your construct → scale your construct → draft your question → check for random error → check for systematic error → collect good data
  27. Q&A
  28. Let Me Know What YOU Think! Share your thoughts about Part 1 of today's workshop. Two minutes, a few taps in your Relate Live app, and I'll know what you think. Thank you! (Your finger here!)
  29. #RelateLive
  30. #RelateLive | What Your Customers Really Think About You. Part 2: Critique and Create Survey Questions
  31. Measuring Customer Satisfaction: What's wrong with this question? "How satisfied are you with Acme's customer support?" (scale: 1 2 3 4). Problems to find: leading/unbalanced question; unbalanced scale; no construct-specific verbal labels; missing low-end scale point; scale missing midpoint. (Columns: Problem | RE/SE | Response Effect)
  32. Measuring Customer Satisfaction: What's wrong with this question? (continued). No construct-specific verbal labels → RE: semantic confusion ups volatility.
  33. Measuring Customer Effort: What's wrong with this question? "To what extent do you agree or disagree with the following statement? The company made it easy for me to handle my issue." Scale: strongly disagree, disagree, somewhat disagree, neither agree nor disagree, somewhat agree, agree, strongly agree. (Columns: Problem | RE/SE | Response Effect)
  34. Measuring Customer Effort: What's wrong with this question? (continued). Statement as question → SE: acquiescence bias inflates ratings.
  35. Group Work: critique two questions in EIGHT minutes.
  36. Group Work: review question critiques.
  37. Measuring Customer Satisfaction: What's wrong with this question? "How satisfied are you with Acme's customer support?" (scale: 1 2 3 4)
      - Leading/unbalanced question → SE: STM bias inflates ratings
      - Unbalanced scale → SE: DS/NN Rs pick 1, inflating ratings
      - No construct-specific verbal labels → RE: semantic confusion ups volatility
      - Missing low-end scale point → SE: zero-sat Rs pick 1, inflating ratings
      - Scale missing midpoint → RE: midpoint Rs pick ?, upping volatility
  38. Measuring Customer Satisfaction: What's wrong with this question? "What do you think about Acme's customer support? Are you happy with it?" Scale: no; no, most of the time; no, some of the time; yes, some of the time; yes, most of the time; yes.
      - Incorrectly defined construct → (n/a): won't measure CSAT
      - Leading/unbalanced question → SE: STM bias inflates ratings
      - Confusing scale → RE: misinterpretations up volatility
      - Scale missing N/N midpoint → RE: ambig Rs pick ?, upping volatility
      - Missing scale extremes → SE: "all the time" Rs pushed inward
  39. Measuring Customer Effort: What's wrong with this question? "How much effort did you personally have to put forth to get your issue resolved?" Scale: very low effort, low effort, neutral, high effort, very high effort.
      - Incorrectly defined construct → (n/a): won't measure org-created effort
      - Awkward question → RE: misinterpretations up volatility
      - Confusing scale → RE: "neutral" misinterps up volatility
      - Missing low-end scale point → SE: zero Rs pick low, inflating ratings
      - Scale missing actual midpoint → RE: mod Rs pick ?, upping volatility
  40. Measuring Customer Effort: What's wrong with this question? "To what extent do you agree or disagree with the following statement? The company made it easy for me to handle my issue." Scale: strongly disagree, disagree, somewhat disagree, neither agree nor disagree, somewhat agree, agree, strongly agree.
      - Statement as question → SE: acquiescence bias inflates ratings
      - A/DA scale → SE: acquiescence bias inflates ratings
      - Non-construct-specific scale → RE: mismapping ups volatility
      - A/DA scale → RE: misinterpretations up volatility
      - Confusing scale → RE: moderately A/DA Rs pick ?
  41. Group Work: create one new question in FOUR minutes.
  42. Group Work: review new questions.
  43. Measuring Customer Satisfaction: a methodologically sound question. "Overall, how satisfied or dissatisfied are you with Acme's customer support?" Scale: extremely dissatisfied, moderately dissatisfied, slightly dissatisfied, neither satisfied nor dissatisfied, slightly satisfied, moderately satisfied, extremely satisfied. Why it works: balanced question; measures what we want to measure (satisfaction with customer support); "overall" appropriate for a global-level measure; 7-point, fully labeled, construct-specific, bipolar scale; ambivalent midpoint.
  44. Measuring Customer Effort: a methodologically sound question. "How easy was it to get the help you needed from us today?" Scale: not at all easy, slightly easy, moderately easy, very easy, extremely easy. Why it works: measures what we want to measure (effort needed to get the company's help); "today" appropriate for a transaction-level measure; 5-point, fully labeled, construct-specific, unipolar scale.
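Once responses come in on the 7-point bipolar CSAT scale above, they need summarizing. One common convention is a top-2-box score; this sketch assumes that convention (the slides don't prescribe a scoring rule, and `top2box` is a hypothetical helper name):

```python
# The 7-point bipolar CSAT scale from the slide, ordered low to high.
CSAT_LABELS = [
    "extremely dissatisfied", "moderately dissatisfied", "slightly dissatisfied",
    "neither satisfied nor dissatisfied",
    "slightly satisfied", "moderately satisfied", "extremely satisfied",
]

def top2box(responses, labels=CSAT_LABELS):
    """Share of respondents choosing one of the top two scale points."""
    top = set(labels[-2:])
    return sum(r in top for r in responses) / len(responses)

sample = ["moderately satisfied", "slightly satisfied",
          "extremely satisfied", "neither satisfied nor dissatisfied"]
print(top2box(sample))  # 0.5
```

Because every point carries a verbal label, the scoring rule can be stated in the respondents' own words ("moderately or extremely satisfied") rather than in bare numbers.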
  45. Measuring Customer Effort: what is driving customer effort? "How did we make it difficult? (Check all that apply)" Options: You didn't solve the problem; I had to contact the company multiple times; I felt like I was talking to a robot; I had to repeat myself; I had to use a channel I don't like (phone, web form, chat, email, FAQ); I was transferred from person to person; Some other reason (please specify). Design notes: don't assume resolution; a pick-list question measures the frequency of known responses; the open-ended option captures unknown responses; limit the list to 7-9 options; randomly rotate the pick list. Content source for drivers of effort: The Effortless Experience.
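The "randomly rotate the pick list" note above can be sketched as: shuffle the known drivers on each presentation to counter order effects, while keeping the open-ended catch-all anchored last. A minimal Python sketch (the function name is hypothetical):

```python
import random

# Known drivers of effort, from the slide; the catch-all stays separate.
OPTIONS = [
    "You didn't solve the problem",
    "I had to contact the company multiple times",
    "I felt like I was talking to a robot",
    "I had to repeat myself",
    "I had to use a channel I don't like (phone, web form, chat, email, FAQ)",
    "I was transferred from person to person",
]
OTHER = "Some other reason (please specify)"

def rotated_pick_list():
    """Randomly rotate the known options to counter order effects;
    keep the open-ended option anchored at the bottom."""
    opts = OPTIONS[:]           # copy so the master list stays in canonical order
    random.shuffle(opts)
    return opts + [OTHER]

print(rotated_pick_list())
```

Shuffling a copy keeps the canonical order intact for reporting, so response frequencies can always be tallied against the same master list regardless of the order each respondent saw.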
  46. Workshop Recap: What Your Customers Really Think About You
  47. Remember! Use this step-by-step approach for designing sound surveys: start at your destination → define your construct → scale your construct → draft your question → check for random error → check for systematic error → collect good data
  48. Thank You! Questions? Contact me at lgauthier@zendesk.com or @datadocgauthier.
  49. Let Me Know What YOU Think! Share your thoughts about Parts 1 + 2 of today's workshop. Two minutes, a few taps in your Relate Live app, and I'll know what you think. Thank you! (Your finger here!)
  50. #RelateLive
