
10 Diagnostic Techniques to Help Optimize Mobile Surveys

Implementing Mobile First best practices when you design surveys, and properly identifying and diagnosing problem areas, will minimize the risk of respondents misreporting and help deliver strong insights. Whether you’re new to writing surveys for the modern respondent or you’re looking to strengthen existing survey designs, here are 10 diagnostic techniques to help optimize your mobile surveys.


Slide 1: 10 Diagnostic Techniques to Help Optimize Mobile Surveys
Steve Wigmore + Alex Wheatley

Slide 2: What are they?
1. Dropouts
2. Respondent Satisfaction
3. Redundant Questions
4. Click Counts
5. Text Analysis
6. Timing
7. Answer Balance
8. Straight-lining
9. Effective Sample
10. Respondent Workload

Slide 3: 1. Dropouts

Why we care:
• Sample representation
• Cost
• Longevity of panel research

Watch out for:
• More than 1% of respondents dropping out at an individual question: a sign of a problem question
• More than 20% of respondents not completing a survey: a sign of a problem survey
• The hump: the spike in dropouts where the survey’s main content is introduced
• More smartphone dropouts at the end of a survey: a sign of frustration or lack of time

The fix:
• Reduce the length of interview: some dropouts are inevitable, but dropouts increase with longer surveys
• Get over the hump faster:
  • Eliminate demographic questions and append data instead
  • Avoid repetitive starts
  • Spend time perfecting the introduction

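These thresholds are mechanical enough to check in a script. Below is a minimal sketch (not from the deck), assuming you can export, for each respondent who started, the last question they answered before abandoning, or None if they finished; the question IDs are hypothetical and the 1% and 20% cut-offs come from the slide.

```python
from collections import Counter

QUESTIONS = ["intro", "q1", "q2", "q3", "q4"]  # hypothetical survey order

def dropout_report(last_answered, questions=QUESTIONS):
    """last_answered: one entry per starter, holding the last question
    answered before abandoning, or None for completed interviews."""
    starters = len(last_answered)
    drop_at = Counter(q for q in last_answered if q is not None)
    for q in questions:
        rate = drop_at[q] / starters
        if rate > 0.01:  # slide: >1% at one question = problem question
            print(f"{q}: {rate:.1%} drop out here -> problem question?")
    incomplete = sum(q is not None for q in last_answered) / starters
    if incomplete > 0.20:  # slide: >20% never finish = problem survey
        print(f"{incomplete:.1%} never finish -> problem survey?")

# 200 starters: 3 abandon at q2, 1 at q4, the rest complete
dropout_report([None] * 196 + ["q2"] * 3 + ["q4"])
```
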
Slide 4: 2. Respondent Satisfaction

Why we care:
• Engaged respondents = better data

Watch out for:
• Lower Survey Health Scores for smartphones and longer surveys
• Critical respondent feedback: the easiest way to measure satisfaction is to ask the respondent
• Poor Survey Health Scores: projects can be compared against country, device and survey norms

The fix:
• Reduce the length of interview
• Use narrative
• Use direct respondent feedback when making changes
• Open-end engagement questions help troubleshoot problems

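The deck doesn’t say how a Survey Health Score is computed, so the sketch below only illustrates the comparison step: given a project’s score and the mean and SD of the relevant country, device or survey norm (all hypothetical inputs), flag projects that sit well below the norm.

```python
def below_norm(project_score, norm_mean, norm_sd, z_cutoff=-1.0):
    """Flag a project whose health score sits more than one SD below
    the comparable norm (the one-SD cut-off is an assumption)."""
    z = (project_score - norm_mean) / norm_sd
    return z < z_cutoff

# e.g. a smartphone norm of 7.2 +/- 0.8 (illustrative numbers)
print(below_norm(6.1, norm_mean=7.2, norm_sd=0.8))  # True: investigate
```
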
Slide 5: 3. Redundant Questions

Why we care:
• Redundancy can cause disengagement and respondent fatigue

Watch out for:
• Overlap between questions: repetitive scale batteries often re-ask the same thing
• A correlation of 0.65 or higher between two questions is worth investigating

The fix:
• Use correlations for analysis; they can indicate which questions to remove
• Use historic or pilot data to create the correlations

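A minimal sketch of the correlation screen, assuming pilot or historic scale answers keyed by question; pairs at or above the slide’s 0.65 cut-off become candidates for removal.

```python
import itertools
import statistics

def pearson(xs, ys):
    """Pearson correlation of two equal-length answer lists."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def redundant_pairs(answers, threshold=0.65):
    """answers: {question_id: [scale answers, one per respondent]}.
    Returns (question, question, r) pairs worth investigating."""
    return [
        (a, b, r)
        for a, b in itertools.combinations(answers, 2)
        if abs(r := pearson(answers[a], answers[b])) >= threshold
    ]

pilot = {"q1": [1, 2, 4, 5, 5], "q2": [1, 3, 4, 4, 5], "q3": [5, 1, 4, 2, 3]}
print(redundant_pairs(pilot))  # q1/q2 move together -> likely redundant
```
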
Slide 6: 4. Click Counts

Why we care:
• Over-reporting

Watch out for:
• Extensive answer choices: long, greedy option lists reduce the chances a question is actually being read

The fix:
• Implement hard option limits: 15 max
• Use rules, e.g. “Top 3” or “most important”

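As a diagnostic, over-reporting on a multi-select shows up as respondents ticking an implausibly large share of the options. A minimal sketch, assuming exported selections per respondent; the 80% share cut-off is an assumption, while the 15-option cap comes from the slide.

```python
MAX_OPTIONS = 15  # slide: hard option limit

def greedy_respondents(selections, n_options, max_share=0.8):
    """selections: {respondent_id: set of ticked options}.
    Flags respondents ticking >= max_share of the option list,
    a hint that the question wasn't really read."""
    return [rid for rid, picks in selections.items()
            if len(picks) / n_options >= max_share]

answers = {"r1": {1, 2, 3}, "r2": set(range(1, 13))}
print(greedy_respondents(answers, n_options=12))  # ['r2']
```
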
Slide 7: 5. Text Analysis

Why we care:
• The volume and quality of text is a clear indication of respondent engagement
• Open-ends (O.E.) are often a panelist’s only opportunity to express themselves

Watch out for:
• Nonsense O.E. answers don’t mean the rest of a respondent’s data is bad; O.E. questions can be hard work, especially on smartphones
• Repetitive O.E. questions will lead to a reduction in text entered

The fix:
• Use O.E. questions sparingly and with care
• Keep them clear
• Give constraints
• If you aren’t going to analyze it, don’t ask it!

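A first pass over open-end quality can be automated. The rules below (a minimum word count, a single mashed character, one word repeated) are illustrative assumptions rather than the deck’s method.

```python
import re

def weak_open_end(text, min_words=3):
    """Heuristic check that an open-end answer carries real content."""
    words = text.split()
    if len(words) < min_words:
        return True                       # too short to analyze
    if re.fullmatch(r"(.)\1*", text.strip()):
        return True                       # one character mashed repeatedly
    if len(set(w.lower() for w in words)) == 1:
        return True                       # the same word over and over
    return False

for answer in ["asdasd", "good good good", "The checkout kept timing out"]:
    print(answer, "->", weak_open_end(answer))
```
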
Slide 8: 6. Timing

Why we care:
• “Speedsters” show where people switch off

Watch out for:
• Respondents completing in less than 40% of the median completion time
• Account for device differences and valid “shortcuts”
• Answer times that indicate pinch points and waning attention

The fix:
• Re-order and narrate the survey
• Address troublesome points
• Use time to your advantage:
  • Timed challenges
  • Implicit tests
  • Speed traps

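The 40%-of-median speedster rule is straightforward to apply. A minimal sketch, assuming completion times are exported per device so that device differences are accounted for, as the slide advises.

```python
import statistics

def speedsters(times_by_device, fraction=0.40):
    """times_by_device: {device: {respondent_id: seconds to complete}}.
    Flags respondents finishing in under 40% of their device's median."""
    flagged = []
    for device, times in times_by_device.items():
        cutoff = fraction * statistics.median(times.values())
        flagged += [rid for rid, t in times.items() if t < cutoff]
    return flagged

times = {"smartphone": {"r1": 410, "r2": 130, "r3": 395},
         "desktop": {"r4": 300, "r5": 290}}
print(speedsters(times))  # ['r2']
```
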
Slide 9: 7. Answer Balance

Why we care:
• Reduce irrelevant questions
• Collect actionable data

Watch out for:
• Midpoint spikes: can indicate questions that respondents are unable to answer honestly, often caused by asking about irrelevant things
• Endpoint skews: can indicate self-evident questions that at best reinforce what you already knew
• Any single scale choice attracting over 1/3 of the sample

The fix:
• Find the “marmite” questions (the ones that polarize opinion)
• Change the question and force a trade-off

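A sketch of the one-third screen described above: count each scale point’s share of the sample and label an over-heavy point as a midpoint spike or an endpoint skew.

```python
from collections import Counter

def balance_flags(answers, scale):
    """answers: one scale answer per respondent; scale: ordered points.
    Flags any single point holding more than 1/3 of the sample."""
    counts, n, flags = Counter(answers), len(answers), []
    for i, point in enumerate(scale):
        share = counts[point] / n
        if share > 1 / 3:
            kind = ("midpoint spike" if i == len(scale) // 2 else
                    "endpoint skew" if i in (0, len(scale) - 1) else
                    "skew")
            flags.append((point, round(share, 2), kind))
    return flags

# A 5-point scale where half the sample parks on the midpoint
print(balance_flags([3] * 50 + [1, 2, 4, 5] * 12, scale=[1, 2, 3, 4, 5]))
```
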
Slide 10: 8. Straight-lining

Why we care:
• Data quality
• Misleading results

Watch out for:
• Repetitive questions
• Low standard deviations (SD): individual straight-lining respondents can be removed, but a low SD across the sample can indicate an issue with the questions as a whole

The fix:
• Custom scales and answer choices
• Reduce options and avoid scrolling
• Pilot to identify issues
• Avoid repetition and standard formats

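A minimal sketch of the SD checks, assuming grid answers are exported per respondent. An SD of zero means identical answers on every row; the question-level check mirrors the slide’s note that a low SD across the whole sample may implicate the grid itself, with an assumed cut-off.

```python
import statistics

def straightliners(grid_answers):
    """grid_answers: {respondent_id: [answer per grid row]}.
    Returns respondents giving the identical answer on every row."""
    return [rid for rid, row in grid_answers.items()
            if statistics.pstdev(row) == 0]

def questions_suspect(grid_answers, sd_cutoff=0.5):
    """Low SD across all respondents can point at the grid itself;
    the 0.5 cut-off is an assumption, not from the deck."""
    all_values = [v for row in grid_answers.values() for v in row]
    return statistics.pstdev(all_values) < sd_cutoff

grid = {"r1": [4, 4, 4, 4], "r2": [2, 5, 3, 4], "r3": [1, 2, 4, 5]}
print(straightliners(grid))      # ['r1']
print(questions_suspect(grid))   # False: healthy spread overall
```
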
Slide 11: 9. Effective Sample

Why we care:
• Sample needs vary by question, not by survey

Watch out for:
• Only measure what you need. Rough error boundaries:
  • Sampling 100 = ±6%
  • Sampling 200 = ±5%
  • Sampling 400 = ±4%
  • Sampling 1,000+ = ±2%

The fix:
• Group questions by their sample needs, putting the most demanding first and rotating the rest

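For comparison with the rough boundaries above, the textbook 95% worst-case margin of error for a proportion is simple to compute. Note that the deck’s figures are looser rules of thumb, so the two sets of numbers won’t match exactly.

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion; p=0.5 is the worst case."""
    return z * sqrt(p * (1 - p) / n)

for n in (100, 200, 400, 1000):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
# n=100: +/-9.8%  n=200: +/-6.9%  n=400: +/-4.9%  n=1000: +/-3.1%
```
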
Slide 12: 10. Respondent Workload

Why we care:
• Overworked respondents lead to poor data quality

Watch out for:
• Review how much work it takes to answer the questions:
  • Questionnaire word counts
  • Number of options and amount of scrolling
  • Wording and scales
  • Length of interview
  • Flow of sections

The fix:
• Read the questions as a respondent would

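Word counts are the easiest workload proxy to automate. A minimal sketch, assuming the questionnaire can be exported as question text plus option texts; the structure and example content are hypothetical.

```python
def workload_report(questionnaire):
    """questionnaire: {question_id: (question text, [option texts])}.
    Totals the words a respondent has to read, question by question."""
    total = 0
    for qid, (text, options) in questionnaire.items():
        words = len(text.split()) + sum(len(o.split()) for o in options)
        total += words
        print(f"{qid}: {words} words to read, {len(options)} options")
    print(f"whole questionnaire: {total} words")

workload_report({
    "q1": ("How satisfied are you with your phone?",
           ["Very satisfied", "Satisfied", "Neutral",
            "Dissatisfied", "Very dissatisfied"]),
})
```
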
Slide 13: Contact Us!

www.lightspeedresearch.com
info@lightspeedresearch.com
Steve.Wigmore@lightspeedresearch.com
Alex.Wheatley@lightspeedresearch.com
