Research design fw 2011



A brief overview of epidemiological research designs as well as survey construction.

Published in: Education, Technology, Business
Slide notes:
  • Before showing the slides, have the class brainstorm possible pros and cons of a cohort study. After going through them in class, have students pair up and try to think of ways to avoid some of the cons.
  • Steps in assembling the instrument:
    1. Identify the purpose and focus of the study: Why are you creating the survey? Review the literature on the topic and review other measures. Obtain feedback from stakeholders and experts to clarify the purpose and focus.
    2. Identify the research methodology and type of instrument to use for data collection: self-report or observational? What type of instrument is best?
    3. Begin to formulate questions or items: create survey items (there are many ways to do this).
    4. Pretest items and a preliminary draft: give preliminary items to experts and/or the population of interest, then revise the instrument based on feedback.
    5. Pilot test and revise: give the revised instrument to the population of interest, revise based on feedback, and conduct analyses (PCA, Cronbach's alpha, etc.) to create the final instrument.
    6. Administer the final instrument, then analyze and report results: collect data using the final instrument and assess the reliability and validity of the survey.
  • Types of validity:
    - Content validity: the representativeness of the questions in the instrument.
    - Face validity: whether the instrument appears, on its face, to measure what it is intended to measure.
    - Criterion-based validity: determined by established criteria; looks at the degree of relationship between two measures of the same phenomenon.
      - Concurrent validity: two measurements focused on the present (e.g., exercise self-efficacy and current exercise participation).
      - Predictive validity: the measurement is correlated with a future measurement of the same phenomenon (e.g., MCAT score predicting medical school GPA).
    - Construct validity: the extent to which a scale measures the construct, or theoretical framework, it is designed to measure.
      - Convergent validity: the extent to which an instrument's output is associated with that of other instruments intended to measure the same exposure of interest.
      - Discriminant/divergent validity: two instruments should not correlate highly if they measure different concepts.
  • Parts of a survey:
    - Title: should convey to the user what the survey sets out to measure. Avoid a detailed title if giving the survey to participants (it biases responses).
    - Introduction: should convey the purpose of the instrument, how it is used, and what information the survey collects from participants.
    - Directions or instructions: should be detailed for participants; explain how to answer questions and define any constructs and/or terms used in the survey. Directions are needed for both self-report and observational surveys.
    - Items: selection items give choices (e.g., a rating scale) to participants (closed-ended questions); supply items allow participants to write their own responses (open-ended questions).
    - Demographics: include some demographics, but only those necessary for the study. Used to see whether your sample is representative and to make comparisons between groups of participants.
    - Closing section: not always necessary. Thank participants and provide information on where to go for more information, or contact info for the data collectors.
  • How to create survey items:
    - Literature review: review the published literature for other measures of the construct of interest. Don't reinvent the wheel; use existing surveys to give you ideas for your own. There should be a deficiency in the literature, which is why you are creating a new survey.
    - Use of existing processes: sometimes an organization's policies and procedures dictate how a survey can be created (e.g., the curriculum of a teenage pregnancy prevention program dictates the knowledge questions for the survey you create).
    - Brainstorming: usually a group activity. Think about the topic and jot down ideas individually, then discuss as a group; organize the ideas and review new iterations for group feedback (e.g., a round-robin activity in which each participant puts forth an idea and everyone discusses it).
    - Delphi technique: use experts in the area to generate items. Send the lists of generated items to each expert, then review recommendations and revise the list. Experts are less likely to be influenced by others than in a group setting, and this can easily be done over the internet.
  • Strategies for writing good items:
    - Avoid double-barreled questions
    - Appropriate readability: sentence length, simple language, clear and specific terminology
    - Exhaustive response sets
    - Provide instructions for questions
    - Culturally appropriate
    - Handle sensitive items carefully (social desirability)
    - Avoid bias
    - Limit negatively worded items
    - Don't include superfluous items
  • Ask the class: Why don't people respond? Why is nonresponse a problem?

    1. Comparing Research Designs. Tiffany Smith, Eric Heidel, & Patrick Barlow, Research and Statistical Design Consultants
    2. Comparing Research Designs
       - Cohort Studies
       - Case Control Studies
       - Cross-Sectional Studies
       - Developing Survey Instruments
    3. Cohort Studies
       - A "cohort" is a group of individuals who are followed or traced over a period of time.
       - A cohort study analyzes an exposure/disease relationship within the entire cohort.
    4. Cohort Design
    5. Prospective versus Retrospective Cohort Studies
       - Prospective: exposure assessed at the beginning of the study; participants are followed into the future for the outcome.
       - Retrospective: exposure assessed at some point in the past; the outcome has already occurred.
    6. Advantages and Disadvantages of Cohort Studies
       Advantages:
       - Establish population-based incidence
       - Accurate relative risk
       - Examine rare exposures
       - Temporal relationship inferred
       - Time-to-event analysis possible
       - Used when randomization is not possible
       - Quantifiable risk magnitude
       - Reduces biases (selection, information)
       - Can study multiple outcomes
       Disadvantages:
       - Lengthy and costly
       - May require very large samples
       - Not suitable for rare or long-latency diseases
       - Unexpected environmental changes
       - Nonresponse, migration, and loss to follow-up
       - Sampling, ascertainment, and observer biases
       - Changes over time in staff/methods
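The slide lists accurate relative risk as a cohort-study advantage. As a minimal sketch (all counts below are hypothetical, chosen only for illustration), relative risk compares the incidence of the outcome between the exposed and unexposed groups:

```python
# Relative risk from a cohort study's 2x2 table.
# All counts here are made up for illustration, not real data.

def relative_risk(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Ratio of outcome incidence in the exposed group to the unexposed group."""
    incidence_exposed = exposed_cases / exposed_total        # risk among exposed
    incidence_unexposed = unexposed_cases / unexposed_total  # risk among unexposed
    return incidence_exposed / incidence_unexposed

# Suppose 30 of 1,000 exposed and 10 of 1,000 unexposed develop the outcome:
rr = relative_risk(30, 1000, 10, 1000)
print(round(rr, 2))  # 3.0 -- the exposed group has three times the risk
```

This is only possible because a cohort design follows defined exposed and unexposed groups forward, so true incidence in each group is known.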
    7. Case-Control Studies
       - A sample with the disease is selected from a population (cases).
       - A sample without the disease is selected from a population (controls).
       - Various predictor variables are selected.
    8. Strategies for Sampling Controls
       - Hospital- or clinic-based controls
       - Matching
       - Using a population-based sample of cases
       - Using two or more control groups
    9. Advantages and Disadvantages of Case-Control Studies
       Advantages:
       - High information yield with few participants
       - Useful for rare outcomes
       Disadvantages:
       - Cannot estimate incidence/prevalence of disease
       - Limited outcomes can be studied
       - Highly susceptible to biases
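Because a case-control study samples on outcome status, it cannot estimate incidence directly; the usual effect measure is instead the odds ratio, which for rare outcomes approximates the relative risk. A minimal sketch, using made-up counts:

```python
# Odds ratio from a case-control 2x2 table (hypothetical counts).
# Sampling on disease status means incidence is unknown, but the
# cross-product ratio of exposure odds is still estimable.

def odds_ratio(cases_exposed, cases_unexposed, controls_exposed, controls_unexposed):
    """Cross-product ratio (a*d)/(b*c) for the standard 2x2 table."""
    return (cases_exposed * controls_unexposed) / (cases_unexposed * controls_exposed)

# Suppose 40 of 100 cases were exposed, versus 20 of 100 controls:
or_estimate = odds_ratio(40, 60, 20, 80)
print(round(or_estimate, 2))  # 2.67 -- cases had higher odds of exposure
```

A useful classroom contrast with the cohort sketch: here the totals per exposure group are fixed by the investigator, so dividing by them would not give a meaningful risk.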
    10. For Discussion
        "How much does a family history of lung cancer increase the risk of lung cancer?" The PI plans a case-control study to answer this question.
        - How should she pick the cases?
        - How should she pick the controls?
        - What are some potential sources of bias in the sampling of cases and controls?
    11. Cross-Sectional Studies
        - A "snapshot" of a population.
        - Cross-sectional studies include surveys.
        - People are studied at a "point" in time, without follow-up.
        - Data are collected all at the same time (or within a short time frame).
        - Can measure attitudes, beliefs, behaviors, personal or family history, genetic factors, existing or past health conditions, or anything else that does not require follow-up to assess.
        - The source of most of what we know about the population.
    12. Advantages and Disadvantages of Cross-Sectional Studies
        Advantages:
        - Fast and inexpensive
        - No loss to follow-up
        - Springboard to expand/inform the research question
        - Can target a larger sample size
        Disadvantages:
        - Can't determine a causal relationship
        - Impractical for rare diseases
        - Risk of nonresponse
    13. Instrumentation and Results: Junk In = Junk Out
    14. Steps in Assembling the Instruments for the Study
        - Identify the purpose and focus of the study
        - Obtain feedback from experts to clarify the purpose and focus
        - Identify the research methodology and type of instrument to use for data collection
        - Begin to formulate questions or items
        - Pretest items and a preliminary draft
        - Revise the instrument based on feedback
        - Pilot test and revise
        - Administer the final instrument, then analyze and report results
    15. Reliability & Validity (Colton & Covert, 2007)
        - Validity: the extent to which we measure what we purport to measure (synonym: accuracy).
          - Types: Face, Concurrent, Predictive, Convergent, Discriminant
        - Reliability: the extent to which an instrument produces the same information at a given time or over a period of time (synonyms: stable, dependable, repeatable, consistent, constant, regular).
          - If the instrument is reliable, we would expect a patient who receives a high score the first time he or she completes the instrument to receive a high score the next time (all things being equal).
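The speaker notes name Cronbach's alpha among the analyses used when finalizing an instrument; it is a standard index of internal-consistency reliability. A minimal sketch of the usual formula, alpha = k/(k-1) * (1 - sum of item variances / variance of total scores), using made-up Likert-style scores:

```python
# Cronbach's alpha for internal-consistency reliability.
# The item scores below are hypothetical, for illustration only.

def sample_variance(xs):
    mean = sum(xs) / len(xs)
    return sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per survey item, aligned by respondent."""
    k = len(items)
    item_variance_sum = sum(sample_variance(item) for item in items)
    total_scores = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return (k / (k - 1)) * (1 - item_variance_sum / sample_variance(total_scores))

# Three items answered by four respondents:
items = [[2, 4, 3, 5],
         [1, 3, 2, 4],
         [3, 5, 4, 5]]
print(round(cronbach_alpha(items), 2))  # 0.98 -- these items move together
```

High alpha here reflects that each respondent scores consistently high or low across all three items, the pattern a reliable scale should show.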
    16. Parts of a Survey
        - Title
        - Introduction
        - Directions or instructions
        - Items
        - Demographics
        - Closing section
    17. How to Create Survey Items (Colton & Covert, 2007)
        - Literature review
        - Use of existing processes
        - Brainstorming
        - Snowballing or pyramiding
        - Delphi technique
    18. Strategies for Writing Good Items
        - Avoid double-barreled questions
        - Appropriate readability
          - Sentence length
          - Simple language
          - Clear, specific terminology
        - Exhaustive response sets
        - Provide instructions for questions
        - Culturally appropriate
        - Sensitive items
          - Social desirability
        - Avoid bias
        - Limit negatively worded items
        - Don't include superfluous items
    19. Modes of Administration
        - Postal mail
        - Internet/email
        - Telephone
        - Group administration
        - One-on-one interview
        A response rate of 50-60% is often considered acceptable for survey research.
    20. Sources of Error
        - Sampling error: a result of measuring a characteristic in some, but not all, of the units or people in the population of interest. Reduced by larger samples.
        - Coverage error: the sample drawn fails to contain all the subjects in the population of interest.
        - Measurement error: error exists in the instrument itself (i.e., it is not valid and/or reliable).
        - Nonresponse error: the inability to obtain data for all questionnaire items from a person in the sample population.
          - Unit/total nonresponse
          - Item nonresponse
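The slide notes that sampling error is reduced by larger samples. One way to make that concrete is the approximate 95% margin of error for a sample proportion, z * sqrt(p(1-p)/n), which shrinks as n grows (a sketch with hypothetical numbers):

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# The margin is widest at p = 0.5; quadrupling n halves the margin:
for n in (100, 400, 1600):
    print(n, round(margin_of_error(0.5, n), 3))
```

Note this addresses sampling error only; coverage, measurement, and nonresponse error are not reduced by a larger sample and must be handled through design.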
    21. How to Increase Response Rate
        - Know your population of interest
        - Pre-incentives (preferably monetary)
        - Post-incentives (raffle/random giveaway)
        - Cognitive dissonance
        - Electronic format
        - Cover letters (from attending physicians rather than residents/fellows)
        - Clear informed consent (not coercive, not forced)
        - Follow-up emails (and pre-survey emails)
    22. Material Learned
        - Cohort Studies
        - Case Control Studies
        - Cross-Sectional Studies
        - Developing Survey Instruments
        Questions?