CARMA Internet Research Module: N-BIAS


  1. N-BIAS: Nonresponse Bias Assessment Techniques
     CARMA Internet Research Module
     Jeffrey Stanton
  2. N-BIAS Methods
     Internet Data Collection Methods, May 15-17, 2008
     Rogelberg, S. G., & Stanton, J. M. (2007). Understanding and dealing with organizational survey nonresponse. Organizational Research Methods, 10, 195-209.
  3. N-BIAS Methods
     - Wave analysis
     - Archival analysis
     - Follow-up survey of nonrespondents
     - Passive nonresponse analysis (time-to-respond vs. substantive)
     - Active nonrespondent pre-study analysis
     - Interest level analysis
     - Worst case resistance
     - Benchmarking
     - Triangulation
     These methods detect, estimate, and optionally compensate for the presence, direction, and magnitude of nonresponse bias. They give a clearer picture of the extent of bias but offer few good options for eliminating it.
  4. N-BIAS Methods
     N-BIAS comprises nine techniques:
     1) Archival Analysis
     2) Follow-up Approach
     3) Wave Analysis
     4) Passive Nonresponse Analysis
     5) Interest Level Analysis
     6) Active Nonresponse Analysis
     7) Worst Case Resistance
     8) Benchmarking
     9) Demonstrate Generalizability
  5. Technique 1: Archival Analysis
     Most common technique.
     The researcher identifies an archival database that contains all members of the survey sample (e.g., personnel records).
     That data set, which usually contains demographic data, can be described: 50% female, 40% supervisors, etc.
     After data collection, code numbers on the returned surveys (or access passwords) can be used to identify respondents, and by extension nonrespondents. Using this information, the archival database can be partitioned into two segments: 1) data concerning respondents; and 2) data concerning nonrespondents.
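     A minimal sketch of the partition-and-compare step in Python, assuming a hypothetical personnel_records.csv with demographic columns and a responded flag; the file and column names are illustrative, not from the module.

         import pandas as pd
         from scipy.stats import chi2_contingency

         # Hypothetical archival file: one row per sampled employee, with a
         # "responded" flag (1 = survey returned) added after data collection.
         frame = pd.read_csv("personnel_records.csv")

         for trait in ["gender", "supervisor"]:
             # Cross-tabulate each archival trait against response status.
             table = pd.crosstab(frame[trait], frame["responded"])
             chi2, p, dof, _ = chi2_contingency(table)
             print(f"{trait}: chi2={chi2:.2f}, df={dof}, p={p:.3f}")
             # A small p-value means respondents and nonrespondents differ
             # on this trait, a warning sign of nonresponse bias.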
  6. [Slide shows a comparison of respondent and nonrespondent demographic profiles; the table itself is not preserved in this transcript.]
     So, if you found differences like the above, do you have nonresponse bias?
  7. Technique 2: Follow-up Approach
     Using identifiers attached to returned surveys (or access passwords), respondents, and by extension nonrespondents, can be identified.
     The follow-up approach involves randomly selecting and resurveying a small segment of nonrespondents, often by phone. The full or an abridged survey is then administered.
     In the absence of identifiers, telephone a small random sample, ask whether or not they responded to the initial survey, and follow up with survey-relevant questions.
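     A minimal sketch of the comparison that follows the resurvey, using Welch's t-test on a substantive variable; the score arrays are made-up placeholders for real data.

         from scipy.stats import ttest_ind

         # Hypothetical job-satisfaction scores (1-5 scale).
         respondent_scores = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3]
         followup_scores = [3.2, 3.5, 3.0, 3.6, 3.4]  # resurveyed nonrespondents

         # Welch's t-test: do resurveyed nonrespondents differ from respondents?
         t, p = ttest_ind(respondent_scores, followup_scores, equal_var=False)
         print(f"t={t:.2f}, p={p:.3f}")
         # A significant difference suggests the original respondent sample
         # misstates the population mean, i.e., nonresponse bias.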
  8. Technique 3: Wave Analysis
     By noting in the data set whether each survey was returned before the deadline, after an initial reminder note, after the deadline, and so on, the researcher can compare pre-deadline responses with those of late responders on actual survey variables (e.g., compare job satisfaction levels).
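     A minimal sketch of a wave comparison in Python, assuming a hypothetical returned_surveys.csv with a wave label and a job_satisfaction score for each returned survey; the column names and wave codes are illustrative.

         import pandas as pd
         from scipy.stats import ttest_ind

         surveys = pd.read_csv("returned_surveys.csv")

         # Split returns into the waves recorded at data entry.
         early = surveys.loc[surveys["wave"] == "pre_deadline", "job_satisfaction"]
         late = surveys.loc[surveys["wave"] == "post_reminder", "job_satisfaction"]

         t, p = ttest_ind(early, late, equal_var=False)
         print(f"early mean={early.mean():.2f}, late mean={late.mean():.2f}, p={p:.3f}")
         # Late responders stand in for nonrespondents: if they differ from
         # early responders, by extrapolation nonrespondents may differ more.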
  9. Technique 4: Passive Nonresponse Analysis
     Rogelberg et al. (2003) found that the vast majority of nonresponse (approx. 85%) can be classified as passive in nature.
     Passive nonresponse does not appear to be planned.
     When asked (upon receipt of the survey), these individuals indicate a general willingness to complete the survey, if they have the time. Given this, it is not surprising that they generally do not differ from respondents with regard to job satisfaction or related variables.
  10. Technique 5: Interest Level Analysis
      Researchers have repeatedly found that interest in the survey topic is one of the best predictors of whether a person will complete the survey.
      As a result, if interest level is related to attitudinal standing on the topics making up the survey, the survey results are susceptible to bias.
      E.g., if low-interest individuals tend to be more dissatisfied on the survey constructs in question, results will be biased "high".
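      A minimal sketch of the relationship being checked, using a Pearson correlation between a self-reported interest item and a substantive construct; the numbers are fabricated for illustration.

          from scipy.stats import pearsonr

          # Hypothetical per-respondent data from the returned surveys.
          interest = [1, 2, 2, 3, 3, 4, 4, 5, 5, 5]  # topic interest (1-5)
          satisfaction = [2.8, 3.0, 3.4, 3.3, 3.6, 3.9, 4.0, 4.1, 4.4, 4.6]

          r, p = pearsonr(interest, satisfaction)
          print(f"r={r:.2f}, p={p:.3f}")
          # A positive r means low-interest people (who are also the least
          # likely to respond) are less satisfied, so the observed sample
          # mean is likely biased "high".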
  11. Technique 6: Active Nonresponse Analysis
      Active nonrespondents, in contrast to passive nonrespondents, are those who overtly choose not to respond to a survey effort. The nonresponse is volitional and a priori (i.e., it occurs when the person is first confronted with the survey solicitation).
      Active nonrespondents tend to differ from respondents on a number of dimensions typically relevant to the organizational survey researcher (e.g., job satisfaction).
  12. Technique 7: Worst Case Resistance
      Given the data collected from respondents in an actual study, one can empirically answer the question of what proportion of nonrespondents would have to exhibit the opposite pattern of responding to adversely influence the sample results.
      The philosophy is similar to the "file-drawer problem" analysis in meta-analysis.
      By adding simulated data to an existing data set, one can explore how resistant the data set is to worst-case responses from nonrespondents.
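      A minimal sketch of this simulation in NumPy, assuming a 1-5 response scale, an illustrative "favorable" cutoff of 3.0, and made-up respondent scores and pool sizes.

          import numpy as np

          observed = np.array([4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3, 3.6, 4.5])
          n_nonrespondents = 40
          scale_minimum = 1.0  # worst case: every simulated nonrespondent answers 1
          cutoff = 3.0         # conclusion holds while the mean stays above this

          # Append worst-case scores for a growing share of the nonrespondent
          # pool and find the share at which the substantive conclusion flips.
          for share in np.arange(0.05, 1.01, 0.05):
              k = int(round(share * n_nonrespondents))
              combined = np.concatenate([observed, np.full(k, scale_minimum)])
              if combined.mean() <= cutoff:
                  print(f"Conclusion flips once {share:.0%} of nonrespondents "
                        f"answer at the scale minimum (mean={combined.mean():.2f}).")
                  break
          else:
              print("Conclusion survives even 100% worst-case nonresponse.")

      The larger the share required to flip the result, the more resistant the data set is to worst-case nonresponse.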
  13. Technique 8: Benchmarking
      Using measures with norms for the population under examination, compare the means and standard deviations of the collected sample to the norms.
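      A minimal sketch of a norm comparison, assuming the instrument's manual publishes a norm mean and standard deviation; all numbers are placeholders.

          import math

          sample_mean, sample_sd, n = 3.62, 0.71, 148   # collected sample
          norm_mean, norm_sd = 3.90, 0.75               # published norms

          # z-test of the sample mean against the normed population mean.
          z = (sample_mean - norm_mean) / (norm_sd / math.sqrt(n))
          print(f"z = {z:.2f}")
          # |z| well beyond ~1.96 indicates the sample departs from the normed
          # population; also compare sample_sd with norm_sd for restriction of
          # range. Either pattern may reflect nonresponse bias (or a genuinely
          # different population, so interpret with care).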
  14. Technique 9: Demonstrate Generalizability
      By definition, nonresponse bias is peculiar to a given sample under particular study conditions.
      Triangulating with a sample collected using a different method, or varying the conditions under which the study is conducted, should therefore change the composition of the nonrespondent group; if results hold across these variations, nonresponse bias becomes a less plausible explanation.
  15. Exercise 2: Item Writing
      Write three Likert-scaled items (each) to tap into the following response-related constructs:
      - Busyness (how busy the respondent feels)
      - Topical interest (how interested the respondent is in the topic of the survey)
      - Satisfaction with the sponsoring entity (the group who is running the survey)
      - Response modality preference (e.g., got a paper survey but preferred web)
  16. Demographic Profile: 1) Archival analysis
      Attitudinal Profile: 1) Follow-up approach, 2) Wave analysis
      Warning Signs: 1) Interest level analysis, 2) Passive NR analysis, 3) Active NR analysis, 4) Worst-case analysis
      Establish Generalizability
  17. Important Footnote
      In many of the N-BIAS techniques described above, one amasses evidence for the absence of nonresponse bias by making comparisons and tests that preferably lead to non-significant statistical results.
      What is the problem with this?
