Questionnaire design


These slides were produced by Emma Angell (SAPPHIRE group, University of Leicester) for a presentation to the University's Bioscience Pedagogical Research meeting in November 2011.



  1. Some take-home messages (Biol Sci PedR meeting, 15th November 2011, Emma Angell)
  2. 25 years of research
     • Grounded in the literature and primary research
     • A psychologist's perspective
     • Still evolving…
  3. 1. Interpret the meaning of the question
     2. Search memory for relevant information
     3. Integrate that information into a summary judgment
     4. Respond in a way that conveys the judgment's meaning
  4. Satisficing: compromising on one or more of these steps
     • A cross between "satisfy" and "suffice" (Herbert Simon, 1957)
  5. Weak satisficing: execute all four steps, but less than thoroughly
     • Less thoughtful about the meaning of the question
     • Search memory less thoroughly
     • Integrate information more carelessly
     • Select a response more haphazardly
       ▪ The first answer a respondent considers acceptable is the one he/she offers
  6. Strong satisficing: omit the retrieval and judgment steps
     • Interpret the question superficially
     • Select an answer they believe will appear reasonable
       ▪ Use cues in the question to identify a response that seems easily defensible with little thought
  7. 1. Selecting the first acceptable alternative in a closed question (yielding response order effects): weak satisficing
     2. Acquiescence (agreement with assertions offered in agree/disagree, true/false, yes/no questions): weak satisficing
     3. Selecting the status quo (easy to defend keeping things the way they are now): strong satisficing
     4. Selecting "no opinion" (easy to claim ignorance): strong satisficing
  8. 1. Difficulty of the task
     2. Ability to perform the required task
     3. Motivation to perform the task
     The greater the task difficulty, and the lower the respondent's ability and motivation to optimise, the more likely satisficing is to occur.
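One way to read the slide's summary sentence is as a simple monotone relation: risk rises with difficulty and falls with ability and motivation. Below is a toy sketch of that reading; the ratio form, the squashing step, and the function name are illustrative assumptions, not a formula from the presentation.

```python
def satisficing_risk(difficulty: float, ability: float, motivation: float) -> float:
    """Toy index of satisficing risk, squashed to the (0, 1) interval.

    Inputs are assumed to lie in (0, 1]. The ratio of difficulty to
    ability x motivation is one illustrative reading of the slide's
    claim, not a formula from the slides themselves.
    """
    raw = difficulty / (ability * motivation)
    return raw / (1.0 + raw)  # map (0, inf) onto (0, 1)
```

Any function that increases in difficulty and decreases in ability and motivation would capture the same qualitative claim; the specific form here is only for illustration.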
  9. Sources of task difficulty:
     • Complexity of, and familiarity with, the language and concepts
     • Extent of the retrieval process
     • Complexity of the information to be integrated into a summary judgment
     • Ease with which judgments can be expressed with the response alternatives
     • Speed of asking questions
     • Significant distraction
     • Length
  10. Ability:
      • Optimising is easier for respondents adept at retrieving information and forming a summary judgment
      • Easier for individuals who have had practice in thinking about the topic
      • Easier for people who have stored in memory a preconsolidated answer to precisely the question asked
  11. Motivation:
      • Need for cognition (some people like thinking)
      • Personal importance of the topic
      • Perceived value of the survey
      • Interviewer encouraging optimising
      • Accountability of the respondent
      • Motivation reduces over the length of the interview (fatigue)
  12. Aims:
      • To prevent (or reduce) satisficing
      • To encourage optimising
      • To develop strategies to adjust for satisficing
  13. 1. Conversational norms, rules and conventions
      2. Open vs closed questions
      3. Scales, options, labels
      4. Response order effects
  14. Follow conversational norms and rules
      • Give background information first, more important information second
      • Violations cause misunderstandings
  15. Follow conversational conventions
      • e.g. "every man, woman and child", "now and then", "sooner or later"
      • Positives first: "Are you going to buy it or not?"
      • Violations distract: when expectations are violated, people are surprised and distracted, so responses are made more slowly and with more error
      • These effects are most apparent among respondents with the least cognitive skills, those with low grades or little formal education
  16. Say what you need to say, when you need to say it, given the purpose of the conversation (the cooperative principle). Example violations arise when these norms are broken:
      • Don't ask the same question twice
      • All information provided is relevant and necessary
      • Offered response options are comprehensive and appropriate
      • All assertions are true
  17. Open questions: more reliable and valid than closed questions for numerical data, or for categorical data with an unlimited universe
      • Problems?
        ▪ Articulation? No
        ▪ Salience? No
        ▪ Frame of reference? Sometimes
  18. Problems with closed questions:
      • Non-attitudes
      • Response alternatives suggest normal/expected answers (most choose the middle options)
      • Incomplete response alternatives
      • Closed questions make respondents work harder (answer the question, then code it into a category)
  19. When you cannot be sure of the universe of possible answers to a categorical question, "other: specify" does NOT work
      • The only way to be sure of the universe of possible answers is to pre-test
        ▪ To do the pre-test you need to ask open-ended questions with a full representative sample, i.e. do the study
      • Also ask open-ended questions when you seek a numerical answer
  20. Continuum line: people place themselves on 7 points
      • 7 points are better for bipolar dimensions; keep the middle option
      • The more points, the more moderate the ratings
      • Magnitude scaling (an infinite number of scale points relative to an arbitrary point, giving a ratio scale) is no more reliable or valid
  21. Instead of giving a 7-point scale, it is better to use branching, which works really well on the internet
  22. Q1. Are you Republican, Democrat, independent, or something else?
      • If Republican: are you a strongly or not very strongly Republican?
      • If Democrat: are you a strongly or not very strongly Democrat?
      • If independent or something else: do you consider yourself to be closer to Republican or Democrat?
      The answers combine into a 7-point scale: extremely strongly Rep / quite strongly Rep / slightly Rep / neither Rep nor Dem / slightly Dem / quite strongly Dem / extremely strongly Dem
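The branching question above can be scored by combining the two answers into a single 7-point code. A minimal sketch follows; the numeric coding (1 = extremely strongly Republican, 4 = neither, 7 = extremely strongly Democrat), the function name, and the answer strings are illustrative assumptions, not part of the slides.

```python
def party_id_7pt(first, followup=None):
    """Map the two branching answers to a 1..7 party-identification code.

    1 = extremely strongly Republican, 4 = neither, 7 = extremely
    strongly Democrat. Leaners from the independent branch land on the
    "slightly" points (3 and 5).
    """
    if first == "republican":
        return 1 if followup == "strongly" else 2
    if first == "democrat":
        return 7 if followup == "strongly" else 6
    # independent or "something else": follow-up asks which party is closer
    if followup == "republican":
        return 3
    if followup == "democrat":
        return 5
    return 4  # leans to neither side
```

The point of branching is that each respondent answers at most two easy questions, yet analysis still gets the full 7-point scale.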
  23. Branching works on paper too
      • e.g. first ask like / neither or neutral / dislike; then a lot / a little (and, for "neither", which way do you lean towards?)
      • Branching is quicker
      • Unipolar: five points are better than seven
      • Midpoints are GOOD
  24. Dimensions with no natural metric
      • Liking, importance, certainty, friendliness, etc.
      Dimensions with natural metrics
      • Frequency, probability, etc.
  25. Labelling scale points improves:
      • Satisfaction
      • Reliability (especially for respondents with low/medium education)
      • Validity
      Choose labels that give widely-spread end points and equal spacing
      • With 5-7 point scales there is very little overlap between points (you can control for it with software)
      • People are drawn towards labels
        ▪ Label all the points
  26. Vagueness is attractive
      • People prefer vague quantifiers over numbers; test-retest reliability is the same
      • But descriptions of physical characteristics are better with numbers, e.g. "how tall…"
      • Phrases are affected by the previous context
      • When reviewing information, people prefer numbers
      • Vague frequency quantifiers are biased
  27. Use the natural metric if there is one
      • "How good was the quality?" and "How good does it need to be for you to be satisfied?" instead of "How satisfied were you?"
  28. Recommended unipolar label sets:
      • Extremely, very, moderately, slightly, not at all
      • Definitely will, probably will, might or might not, probably won't, definitely won't
      • A great deal, a lot, moderate amount, a little, none at all
      • Always, most of the time, about half of the time, sometimes, never
  29. Recommended bipolar label sets:
      • Extremely good, moderately good, slightly good, neither good nor bad, slightly bad, moderately bad, extremely bad
      • Like a great deal, moderately like, like a little, neither like nor dislike, dislike a little, moderately dislike, dislike a great deal
      • 7-point is better, but a 5-point scale can be used
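For analysis, fully labelled scales like those on the last two slides are usually coded to numbers. A minimal sketch, using the slides' own label sets; the numeric codes (1..5 for unipolar, -3..+3 with the midpoint at 0 for bipolar) and the helper name are illustrative assumptions.

```python
# 5-point unipolar "amount" scale from the slides, coded 1..5
UNIPOLAR_AMOUNT = {
    "none at all": 1,
    "a little": 2,
    "moderate amount": 3,
    "a lot": 4,
    "a great deal": 5,
}

# 7-point bipolar good/bad scale from the slides, midpoint coded 0
BIPOLAR_GOOD_BAD = {
    "extremely bad": -3,
    "moderately bad": -2,
    "slightly bad": -1,
    "neither good nor bad": 0,
    "slightly good": 1,
    "moderately good": 2,
    "extremely good": 3,
}

def code_response(scale, answer):
    """Look up a verbal answer, ignoring case and surrounding whitespace."""
    return scale[answer.strip().lower()]
```

Coding the bipolar midpoint as 0 keeps the sign of the score aligned with the direction of the judgment, which can make regression output easier to read.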
  30. Three efficient but flawed formats:
      • Agree/disagree (or scale)
      • Yes/no
      • True/false (or scale)
      People tend to agree, and say yes more often if the opposing view is omitted. USE CONSTRUCT-SPECIFIC LABELS.
  31. Response order effects:
      • Primacy effects
      • Recency effects
  32. Factors associated with response order effects:
      • Education, cognitive skills, grades
      • More sentences, more words per sentence, more letters per word
      • Longer response options
      • Response options that are not mutually exclusive
      • Later question placement
      • Priming knowledge from previous questions (the more questions, the less the effect)
      • Completion time
      • Preconsolidated opinions
  33. Recency effects: more likely with verbal questions
      • Delayed processing: respondents typically cannot start thinking about the response alternatives until all have been read, so they more fully process the alternatives read last
      • Seemingly open-ended questions and seemingly yes/no questions all show recency effects to some extent (seemingly open-ended questions less so)
  34. Primacy effects: more likely with rating scales
      • People choose the first option within their latitude of acceptance (weak satisficing)
      • Occur with both oral and visual presentation
      • Switching the order requires more cognitive power if it violates conversational norms, and requires eliminating systematic errors in both directions (not equal due to regulators)
  35. Just go with one order?
      • Yes, when respondents expect it
      • Yes, when you are predicting a survey behaviour, e.g. polls
  36. Rotate order across respondents?
      • Costly
      • Individual responses are still distorted; rotation creates systematic error
      • Can control for order in the analysis: represent interactions of order with respondent characteristics, e.g. education
      • Instead: increase motivation and lessen cognitive demand
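If you do rotate, each respondent should see an independent permutation and the presented order should be recorded so it can be entered into the analysis, as the slide suggests. A minimal sketch; the option list, function name, and record format are illustrative assumptions.

```python
import random

# Canonical option list; each respondent sees a random permutation of it
OPTIONS = ["strongly agree", "agree", "disagree", "strongly disagree"]

def present_options(options, rng=random):
    """Return a per-respondent permutation plus a record of positions.

    The positions dict (0 = shown first) is what you would store
    alongside the answer, so order can be controlled for in analysis.
    """
    shown = list(options)   # copy so the canonical list is untouched
    rng.shuffle(shown)
    positions = {opt: shown.index(opt) for opt in options}
    return shown, positions

shown, positions = present_options(OPTIONS)
```

Storing the position of each option, rather than just a rotation ID, makes it straightforward to model order by respondent-characteristic interactions later.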
  37. SUMMARY
      1. Conversational norms, rules and conventions: follow them when expected
      2. Open vs closed questions: use open-ended when you can
      3. Scales, options, labels: use a 7-point scale for bipolar or a 5-point scale for unipolar dimensions; label all options; use construct-specific labels
      4. Response order effects: be(a)ware of primacy and recency effects