Steps to Design a Better Survey (Jean Fox & Scott Fricker)

Given at UXPA-DC's User Focus Conference, Oct. 19, 2012

Presentation Transcript

    • Steps to Design a Better Survey. Jean E. Fox and Scott S. Fricker, Office of Survey Methods Research, Bureau of Labor Statistics. October 19, 2012.
    • Introduction. Our backgrounds: usability and survey methodology. Goal of the presentation: combine what we know from our fields to improve usability surveys.
    • Types of Usability Surveys. For usability tests: post-task and post-test questionnaires (e.g., the SUS, whose standard scoring is sketched below). For ethnographic work: to learn how people do their work. To solicit input from users. Surveys can be self-administered (online, paper) or administered by an interviewer (orally).
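
The slides do not cover how the SUS is scored, but the standard formula is fixed: ten items rated 1-5, where odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5 to give a 0-100 score. A minimal Python sketch:

```python
def sus_score(responses):
    """Standard SUS scoring: ten items rated 1-5; odd-numbered items
    contribute (response - 1), even-numbered items (5 - response);
    the sum is scaled by 2.5 onto a 0-100 range."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs exactly ten responses on a 1-5 scale")
    # enumerate() is 0-based, so even indexes hold the odd-numbered items
    total = sum(r - 1 if i % 2 == 0 else 5 - r
                for i, r in enumerate(responses))
    return total * 2.5

print(sus_score([4, 2, 4, 2, 5, 1, 4, 2, 4, 2]))  # -> 80.0
```
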
    • Introduction. Three steps we'll discuss: 1. Decide what you really need to know. 2. Write the questions following best practices. 3. Test the survey.
    • Step 1: Decide what you really need to know.
    • Decide What You Really Need to Know. Are you asking for data you really need? Will you really use it? Can you get the data somewhere else?
    • Decide What You Really Need to Know. Are you asking questions respondents can answer? Can you include "screeners," questions that allow respondents to skip irrelevant questions (see the sketch below)? Do you need separate surveys?
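
To make the screener idea concrete, here is a minimal sketch of skip logic; the question IDs, wording, and data structure are all hypothetical, invented for illustration:

```python
# Hypothetical two-question survey: a screener plus a follow-up that
# only applies to respondents who used the feature.
survey = [
    {"id": "used_search", "text": "Did you use the search feature?",
     "options": ["Yes", "No"],
     "skip_if": {"answer": "No", "skip": ["search_satisfaction"]}},
    {"id": "search_satisfaction",
     "text": "How satisfied were you with the search results?",
     "options": ["Very dissatisfied", "Somewhat dissatisfied",
                 "Neither satisfied nor dissatisfied",
                 "Somewhat satisfied", "Very satisfied"]},
]

def questions_to_ask(survey, answers):
    """Yield each question unless a screener answer ruled it out."""
    skipped = set()
    for q in survey:
        if q["id"] in skipped:
            continue
        yield q
        rule = q.get("skip_if")
        if rule and answers.get(q["id"]) == rule["answer"]:
            skipped.update(rule["skip"])

# A respondent who answered "No" to the screener sees only one question.
for q in questions_to_ask(survey, {"used_search": "No"}):
    print(q["text"])
```
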
    • Decide What You Really Need to Know. Are you asking for data in a format you can analyze (open-ended vs. multiple choice)? Are you really going to analyze it? (See the tabulation sketch below.)
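
A small illustration of why format matters for analysis: closed-ended answers tabulate directly, while open-ended text must be read and coded before you can count anything. The responses below are made up:

```python
from collections import Counter

# Multiple-choice answers can be counted immediately...
closed = ["Satisfied", "Satisfied", "Neutral", "Dissatisfied", "Satisfied"]
print(Counter(closed))
# -> Counter({'Satisfied': 3, 'Neutral': 1, 'Dissatisfied': 1})

# ...whereas open-ended answers like these need manual coding before
# any counting or statistics are possible.
open_ended = ["it was ok i guess", "couldn't find the cart", "loved it!!"]
```
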
    • Step 2: Write the questions following best practices.
    • Best Practices. Topics covered: rating scales, rankings, double-barreled questions, agree/disagree items, and satisficing.
    • Types of Scales. Likert-type items and semantic differential scales.
    • Types of Scales. Bi-polar scales (the previous examples) and uni-polar scales.
    • Rating Scales How many response options do you usually use in a rating scale? 3…5…7…10… or something else? Number of options Generally, scales with 5-7 options are the most reliable. The optimum size depends on the issue being rated (Alwin, 1997; Garner, 1960) – More options for bi-polar scales
    • Scales Do you usually have a neutral midpoint? Odd or Even number of options Without a midpoint, respondents tend to choose randomly between two middle options. For usability, generally include a mid-point.
    • Rating Scales Do you label the endpoints, a few options, or all of them? Labels Use text labels for each option Avoid numbers, unless they are meaningful – Especially avoid using negative numbers. Respondents do not like to select negative options.
    • Rating Scales Be sure the scale is balanced. This scale has 3 “satisfied” options, but only one “dissatisfied” option.
    • Ranking Definitions Rating: Select a value for individual items from a scale Ranking: Select an order for the items, comparing each against all the others.
    • Ranking Consider other options before using ranking Ranking is difficult and less enjoyable than other evaluation methods (Elig and Frieze, 1979). You don‟t get any interval level data
    • Ranking Recommendations Use ratings instead if you can. – Determine ranks from average ratings. Use rankings if you need respondents to prioritize options.
    • Question Wording
    • Double-Barreled Questions. Avoid double-barreled questions: they force respondents to make a single response to multiple questions, and they assume that respondents logically group the topics together, which may or may not be true. Recommendations: watch for the use of "and" in questions, and eliminate all double-barreled questions by dividing them into multiple questions.
    • Agree / Disagree Items Who uses agree / disagree items? Why? They are fairly easy to write You can cover lots of topics with one scale It‟s a fairly standard scale It‟s familiar to respondents
    • Agree / Disagree Items Unfortunately, they can be problematic They are prone to acquiescence bias – The tendency to agree with a statement They require an additional level of processing for the respondent – Respondents need to translate their response to the agree/disagree scale.
    • Agree / Disagree Items Recommendation Avoid agree / disagree items if possible Use “construct specific” responses
    • Other Issues. Be sure the responses match the question. Speak the respondent's language, and avoid jargon unless it is appropriate. Remember that responses can be affected by question order, the size of the text field, and graphics, even seemingly innocuous ones.
    • Broader Issue: Satisficing. Responding to surveys often requires considerable effort. Rather than finding the "optimal" answer, people may take shortcuts and choose the first minimally acceptable answer, a behavior called "satisficing" (Krosnick, 1991). It depends on task difficulty, respondent ability, and motivation.
    • Satisficing: Remedies. Minimize task difficulty: minimize the number of words in questions, avoid double-barreled questions, and decompose questions when needed (instead of asking how much someone spent on clothing, ask about different types of clothing separately). Use ratings, not rankings, and label response options.
    • Satisficing: Remedies, cont. Maximize motivation: describe the purpose and value of the study, provide instructions to think carefully, include random probes ("Why do you say that?"), keep surveys short, and put important questions early.
    • Satisficing: Remedies, cont. Minimize "response effects": avoid blocks of ratings on the same scale (this prevents "straight-lining"; a post-hoc check is sketched below), do not offer "no opinion" response options, and avoid agree/disagree, yes/no, and true/false questions.
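
The slide's advice is preventive (vary the scales so straight-lining is less attractive). On the analysis side, a simple post-hoc check, not from the slides, can flag respondents who gave identical answers across a same-scale block. The data here is invented:

```python
def straight_liners(responses, block):
    """Return IDs of respondents who answered identically across a
    block of questions that share one rating scale.

    responses: {respondent_id: {question_id: answer}}
    block:     list of question IDs making up the rating block
    """
    flagged = []
    for rid, answers in responses.items():
        if len({answers.get(q) for q in block}) == 1:
            flagged.append(rid)
    return flagged

data = {
    "r1": {"q1": 4, "q2": 4, "q3": 4, "q4": 4},   # possible straight-liner
    "r2": {"q1": 5, "q2": 3, "q3": 4, "q4": 2},
}
print(straight_liners(data, ["q1", "q2", "q3", "q4"]))  # -> ['r1']
```
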
    • Step 3: Test the survey.
    • Testing Surveys. Be sure your questions work. Consider an expert review (you will need an expert). For usability testing, be sure to include the survey in your pilot test. A common technique for evaluating surveys is cognitive interviewing (see Willis, 2005).
    • Cognitive Interviewing Cognitive interviewing basics Have participant complete the survey Afterwards, ask participants questions, such as – In your own words, what was the question asking? – What did you consider in determining your response? – Was there anything difficult about this question?
    • Cognitive Interviewing Cognitive interviewing basics (con‟t) Review the qualitative data you get to identify potential problems and solutions Like usability testing, there are different approaches (e.g., think aloud)
    • Summary. 1. Decide what you really need to know. 2. Write the questions following best practices. 3. Test the survey.
    • Contact Information. Jean E. Fox: Fox.Jean@bls.gov, 202-691-7370. Scott S. Fricker: Fricker.Scott@bls.gov, 202-691-7390.
    • References.
      Alwin, D. F. (1997). Feeling thermometers versus 7-point scales: Which are better? Sociological Methods and Research, 25(3), 318-340.
      Elig, T. W., & Frieze, I. H. (1979). Measuring causal attributions for success and failure. Journal of Personality and Social Psychology, 37(4), 621-634.
      Garner, W. R. (1960). Rating scales, discriminability, and information transmission. The Psychological Review, 67(6), 343-352.
      Krosnick, J. A. (1991). Response strategies for coping with the cognitive demands of attitude strength in surveys. In J. M. Tanur (Ed.), Questions About Questions: Inquiries into the Cognitive Bases of Surveys (pp. 177-203). New York: Russell Sage Foundation.
      Krosnick, J. A., & Presser, S. (2010). Question and questionnaire design. In P. V. Marsden & J. D. Wright (Eds.), Handbook of Survey Research (2nd ed.). Bingley, UK: Emerald Group Publishing Ltd.
      Willis, G. (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA: Sage Publications, Inc.