Steps to Design a Better Survey (Jean Fox & Scott Fricker)

Given at UXPA-DC's User Focus Conference, Oct. 19, 2012

  1. Steps to Design a Better Survey
     Jean E. Fox and Scott S. Fricker
     Office of Survey Methods Research, Bureau of Labor Statistics
     October 19, 2012
  2. Introduction
     • Our backgrounds: usability and survey methodology
     • Goal of the presentation: combine what we know from our fields to improve usability surveys
  3. Types of Usability Surveys
     • Usability tests
       – Post-task
       – Post-test (e.g., the SUS; its standard scoring is sketched below)
     • Ethnographic work
       – Learn how people do their work
       – Solicit input from users
     • Administration
       – Self-administered (online, paper)
       – By interviewer (oral)
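The slides mention the SUS only by name. For context, the System Usability Scale is scored by a fixed convention: ten items answered on a 1-5 scale, odd (positively worded) items scored as response minus 1, even (negatively worded) items as 5 minus response, and the sum multiplied by 2.5 to give a 0-100 score. A minimal sketch, not part of the original deck:

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 responses -> a 0-100 score.
    Odd-numbered items are positively worded (score r - 1);
    even-numbered items are negatively worded (score 5 - r)."""
    if len(responses) != 10:
        raise ValueError("SUS has exactly 10 items")
    total = sum((r - 1) if i % 2 == 1 else (5 - r)
                for i, r in enumerate(responses, start=1))
    return total * 2.5

print(sus_score([4, 2, 5, 1, 4, 2, 5, 1, 4, 2]))  # 85.0
```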
  4. Introduction
     • Three steps we'll discuss:
       1. Decide what you really need to know
       2. Write the questions following best practices
       3. Test the survey
  5. Step 1: Decide what you really need to know
  6. Decide What You Really Need to Know
     • Are you asking for data you really need?
       – Will you really use it?
       – Can you get the data somewhere else?
  7. Decide What You Really Need to Know
     • Are you asking questions respondents can answer?
     • Can you include "screeners"?
       – Questions that allow respondents to skip irrelevant questions (a routing sketch follows below)
     • Do you need separate surveys?
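A minimal sketch of what screener-driven skip logic can look like; the question ids and the routing rule here are hypothetical, not from the deck:

```python
# Hypothetical question order; ids are illustrative.
ORDER = ["uses_mobile_app", "mobile_q1", "mobile_q2", "general_q1"]

def next_question(current_id, answer):
    """Route past the mobile-specific questions when the screener
    says they don't apply to this respondent."""
    if current_id == "uses_mobile_app" and answer == "No":
        return "general_q1"  # skip the mobile questions entirely
    i = ORDER.index(current_id)
    return ORDER[i + 1] if i + 1 < len(ORDER) else None

print(next_question("uses_mobile_app", "No"))   # general_q1
print(next_question("uses_mobile_app", "Yes"))  # mobile_q1
```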
  8. Decide What You Really Need to Know
     • Are you asking for data in a format you can analyze?
       – Open-ended vs. multiple choice (see the tally sketch below)
       – Are you really going to analyze it?
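One reason the format matters: closed responses tally directly, while open-ended text has to be hand-coded first. A small illustration with made-up answers:

```python
from collections import Counter

# Made-up multiple-choice answers; these tally with no extra work.
answers = ["Daily", "Weekly", "Daily", "Monthly", "Daily"]
print(Counter(answers))  # Counter({'Daily': 3, 'Weekly': 1, 'Monthly': 1})
```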
  9. Step 2: Write the questions following best practices
  10. Best Practices
      • Rating scales
      • Rankings
      • Double-barreled questions
      • Agree/disagree items
      • Satisficing
  11. Types of Scales
      • Likert-type item
      • Semantic differential
  12. Types of Scales
      • Bi-polar (the previous examples)
      • Uni-polar
  13. Rating Scales
      • How many response options do you usually use in a rating scale? 3… 5… 7… 10… or something else?
      • Number of options
        – Generally, scales with 5-7 options are the most reliable.
        – The optimum size depends on the issue being rated (Alwin, 1997; Garner, 1960); bi-polar scales can support more options.
  14. Scales
      • Do you usually have a neutral midpoint (an odd or even number of options)?
      • Without a midpoint, respondents tend to choose randomly between the two middle options.
      • For usability, generally include a midpoint.
  15. Rating Scales
      • Do you label the endpoints, a few options, or all of them?
      • Labels
        – Use text labels for each option.
        – Avoid numbers unless they are meaningful; especially avoid negative numbers, since respondents do not like to select negative options.
  16. Rating Scales
      • Be sure the scale is balanced. The example scale shown on the slide had three "satisfied" options but only one "dissatisfied" option; a balanced 5-point alternative pairs them: Very dissatisfied, Dissatisfied, Neither satisfied nor dissatisfied, Satisfied, Very satisfied.
  17. Ranking
      • Definitions
        – Rating: select a value for each individual item from a scale.
        – Ranking: select an order for the items, comparing each against all the others.
  18. Ranking
      • Consider other options before using ranking.
        – Ranking is difficult and less enjoyable than other evaluation methods (Elig & Frieze, 1979).
        – You don't get any interval-level data.
  19. Ranking
      • Recommendations
        – Use ratings instead if you can, and determine ranks from the average ratings (see the sketch below).
        – Use rankings if you need respondents to prioritize options.
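A minimal sketch of the rating-based alternative: collect ratings, then derive ranks from the averages. The option names and ratings are made up for illustration:

```python
# Made-up data: each row is one respondent's 1-7 ratings of three options.
ratings = [
    [6, 3, 5],
    [7, 4, 4],
    [5, 2, 6],
]
options = ["Design A", "Design B", "Design C"]

# Average each option's ratings across respondents.
averages = [sum(col) / len(col) for col in zip(*ratings)]

# Rank options from the highest average rating to the lowest.
for rank, (name, avg) in enumerate(
        sorted(zip(options, averages), key=lambda p: p[1], reverse=True), 1):
    print(f"{rank}. {name}: {avg:.2f}")
# 1. Design A: 6.00
# 2. Design C: 5.00
# 3. Design B: 3.00
```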
  20. Question Wording
  21. Double-Barreled Questions
      • Avoid double-barreled questions (e.g., "Was the site fast and easy to use?").
        – They force respondents to make a single response to multiple questions.
        – They assume that respondents logically group the topics together, which may or may not be true.
      • Recommendations
        – Watch for the use of "and" in questions (a quick screen is sketched below).
        – Eliminate all double-barreled questions.
        – Divide them into multiple questions.
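As a rough aid for the "watch for 'and'" advice, a sketch that flags questions containing "and" for manual review; the example questions are made up, and of course not every "and" signals a double-barreled question:

```python
# Made-up question texts; flag any containing " and " for review.
questions = [
    "Was the site fast and easy to use?",
    "How satisfied are you with the search results?",
]
for q in questions:
    if " and " in q.lower():
        print("Possible double-barreled question:", q)
# Possible double-barreled question: Was the site fast and easy to use?
```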
  22. Agree/Disagree Items
      • Who uses agree/disagree items? Why?
        – They are fairly easy to write.
        – You can cover lots of topics with one scale.
        – It's a fairly standard scale.
        – It's familiar to respondents.
  23. Agree/Disagree Items
      • Unfortunately, they can be problematic.
        – They are prone to acquiescence bias: the tendency to agree with a statement.
        – They require an additional level of processing: respondents must translate their response onto the agree/disagree scale.
  24. Agree/Disagree Items
      • Recommendation
        – Avoid agree/disagree items if possible.
        – Use "construct-specific" responses instead (e.g., replace "I found the site easy to use: agree/disagree" with "How easy or difficult was the site to use?" rated from very difficult to very easy).
  25. Other Issues
      • Be sure the responses match the question.
      • Speak the respondent's language; avoid jargon unless it is appropriate for the audience.
      • Remember that responses can be affected by:
        – Question order
        – The size of the text field
        – Graphics, even seemingly innocuous ones
  26. Broader Issue: Satisficing
      • Responding to surveys often requires considerable effort.
      • Rather than finding the "optimal" answer, people may take shortcuts and choose the first minimally acceptable answer: "satisficing" (Krosnick, 1991).
      • Satisficing depends on task difficulty, respondent ability, and motivation.
  27. Satisficing: Remedies
      • Minimize task difficulty
        – Minimize the number of words in questions.
        – Avoid double-barreled questions.
        – Decompose questions when needed: instead of asking how much someone spent on clothing overall, ask about the different types of clothing separately.
        – Use ratings, not rankings.
        – Label response options.
  28. Satisficing: Remedies, cont'd
      • Maximize motivation
        – Describe the purpose and value of the study.
        – Provide instructions to think carefully.
        – Include random probes ("Why do you say that?").
        – Keep surveys short.
        – Put important questions early.
  29. Satisficing: Remedies, cont'd
      • Minimize "response effects"
        – Avoid blocks of ratings on the same scale; this prevents "straight-lining" (a check for it is sketched below).
        – Do not offer "no opinion" response options.
        – Avoid agree/disagree, yes/no, and true/false questions.
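A minimal sketch of screening collected grid responses for straight-lining, i.e., a respondent giving the identical rating to every item in a block; the respondent ids and data are made up:

```python
def straight_lined(block):
    """True if every rating in the block is identical --
    a common sign of satisficing on a grid of items."""
    return len(set(block)) == 1

# Made-up grid responses keyed by respondent id.
grid = {"r1": [4, 4, 4, 4, 4], "r2": [5, 3, 4, 2, 4]}
flagged = [rid for rid, block in grid.items() if straight_lined(block)]
print(flagged)  # ['r1']
```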
  30. Step 3: Test the survey
  31. Testing Surveys
      • Be sure your questions work.
      • Consider an expert review (you need access to an expert).
      • For usability testing, be sure to include the survey in your pilot test.
      • A common technique for evaluating surveys is cognitive interviewing (see Willis, 2005).
  32. Cognitive Interviewing
      • Cognitive interviewing basics
        – Have the participant complete the survey.
        – Afterwards, ask participants questions such as:
          - In your own words, what was the question asking?
          - What did you consider in determining your response?
          - Was there anything difficult about this question?
  33. Cognitive Interviewing
      • Cognitive interviewing basics, cont'd
        – Review the qualitative data you get to identify potential problems and solutions.
        – Like usability testing, there are different approaches (e.g., think-aloud).
  34. Summary
      • Decide what you really need to know.
      • Write the questions following best practices.
      • Test the survey.
  35. Contact Information
      Jean E. Fox: Fox.Jean@bls.gov, 202-691-7370
      Scott S. Fricker: Fricker.Scott@bls.gov, 202-691-7390
  36. References
      Alwin, D.F. (1997). Feeling thermometers versus 7-point scales: Which are better? Sociological Methods and Research, 25(3), 318-340.
      Elig, T.W., & Frieze, I.H. (1979). Measuring causal attributions for success and failure. Journal of Personality and Social Psychology, 37(4), 621-634.
      Garner, W.R. (1960). Rating scales, discriminability, and information transmission. The Psychological Review, 67(6), 343-352.
      Krosnick, J.A. (1991). Response strategies for coping with the cognitive demands of attitude strength in surveys. In J.M. Tanur (Ed.), Questions About Questions: Inquiries into the Cognitive Bases of Surveys (pp. 177-203). New York: Russell Sage Foundation.
      Krosnick, J.A., & Presser, S. (2010). Question and questionnaire design. In P.V. Marsden & J.D. Wright (Eds.), Handbook of Survey Research (2nd ed.). Bingley, UK: Emerald Group Publishing.
      Willis, G. (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications.