Comparing Research Designs

This is the handout version of a lecture I give to medical residents and fellows on the basics of clinical research designs and the inherent issues that go along with each one. I give this lecture as part of a multi-module lecture series on research design and statistical analysis.

Speaker notes
  • Most of this presentation will address the observational research designs because those are the designs students will most likely encounter in their own work. I briefly touch on the common problems in research design as well as a couple of different RCT designs.
  • The focus is not to give an in-depth description of confounding because you will be covering it in a separate lecture. I just need to introduce a couple of these concepts so that the strengths and weaknesses of the various designs can be described more clearly.
  • Using one of the example findings from this week’s article to illustrate confounding.
  • Very general introduction to bias. I will be asking the class to think of possible sources of bias before moving to the next slide.
  • Would love any examples you may have for these biases based on your area of research.
  • I’m using a social science version of these constructs, but from what I see Epi/Med lit uses “precision” and “accuracy” in place of reliability and validity.
  • Since the Olympics are likely still fresh on everyone’s minds, and I coached/played the sport for many years, I use the judging of Olympic gymnastics as my example of inter-rater/inter-observer reliability. My more professional examples are: multiple judges for professional presentations (we have to calculate it every year for our annual residents’ research day); multiple physicians looking at the same x-ray or CT (I have several stories about those); multiple scorers on the same essay (e.g., the ACT or GRE writing portion); and doctoral/master’s committees (while they don’t calculate it, the example still makes sense).
  • Archery is a classic example of validity, and again I went with the Olympic theme for fun. While there are numerous types of validity, I chose to briefly hit on internal and external because they are the most related to experiments/studies themselves.
  • Several of these have great Public Health implications such as History.
  • Cross-sectional studies are considered level 2.3 by the OBGYN journal I looked at, but they are not technically “clinical” studies. Double-check whether this is the same pyramid of evidence used in Epi.
  • Can measure attitudes, beliefs, behaviors, personal or family history, genetic factors, existing or past health conditions, or anything else that does not require follow-up to assess. These designs are the source of most of what we know about the population. Before moving to the next slide I will ask some students to discuss what they think the drawbacks of these designs could be.
  • For each of these study types I like to lay out the advantages and disadvantages so students can see them easily.
  • I ran into an incidence vs. prevalence situation with two students on Wednesday, so hitting it here will be good. The largest yet simplest factor differentiating case-control from cohort studies is the way the groups are selected; that has been the best way for me to describe it to students in the past.
  • Discuss advantages/disadvantages of each. I have examples of how each would be done.
  • I would be VERY open to suggestions as to what a more relevant example would be for these students.
  • Another Olympic example for fun.
  • I find that people often get confused about how these are distinguished. I usually say that in a retrospective study you start in the past and work toward the present, while in a prospective study you start in the present and work toward the future.
  • Before showing the slides, have the class brainstorm what some of the possible Pros and Cons to a cohort study could be. After we go through them in class, have them get with a partner to try and think of ways to avoid some of the cons.
  • This part of the presentation is meant to introduce RCTs and some of the language associated with them. It is by no means meant to be an all-encompassing presentation on the topic.
  • Therapeutic misconception: control patients believe they are getting the best treatment. Regarding the external validity comment: I plan to acknowledge that both types of validity are high in RCT designs; however, the more like a laboratory you make an experiment (i.e., the more controlled it is), the less it reflects how things work in the real world.
  • Transcript

    • 1. Comparing Research Designs. Patrick Barlow, Statistical and Research Design Consultant, Graduate School of Medicine, UTK; PhD Student in Evaluation, Statistics, and Measurement, UTK.
    • 2. On the Agenda: Common problems in research design (confounding, bias, reliability, validity); Observational research designs (cross-sectional study, case-control study, cohort study); Experimental research designs (randomized controlled trial).
    • 3. Common Problems in Research Design: Confounding, Bias, Reliability, Validity.
    • 4. Confounding: A confounder is a variable that is causally associated with the outcome (DV) and may or may not be causally associated with the exposure (IV). It causes spurious conclusions and inferences to be made about a set of variables. Reduced through randomization, matching, or statistically controlling (covariates).
    • 5. Confounding [diagram: obesity as a potential confounder of the relationship between PMH use and colorectal adenomas].
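A minimal sketch of the “statistically controlling (covariates)” option from slide 4, using invented data loosely modeled on the obesity / PMH use / colorectal adenoma diagram above. The variable names, effect sizes, and simulated data are illustrative assumptions, not results from the lecture or the article it references.

```python
# Hypothetical illustration of statistically controlling for a confounder.
# All data and effect sizes below are simulated for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 1000

# Obesity (the potential confounder) influences both PMH use and adenoma risk;
# PMH use has no true effect on adenomas in this simulation.
obesity = rng.binomial(1, 0.3, n)
pmh_use = rng.binomial(1, 0.2 + 0.2 * obesity)    # exposure (IV)
adenoma = rng.binomial(1, 0.10 + 0.15 * obesity)  # outcome (DV)

# Crude model: exposure only (confounded estimate).
crude = sm.Logit(adenoma, sm.add_constant(pmh_use)).fit(disp=0)

# Adjusted model: exposure plus obesity entered as a covariate.
X = sm.add_constant(np.column_stack([pmh_use, obesity]))
adjusted = sm.Logit(adenoma, X).fit(disp=0)

print("Crude OR for PMH use:   ", round(float(np.exp(crude.params[1])), 2))
print("Adjusted OR for PMH use:", round(float(np.exp(adjusted.params[1])), 2))
```

Because obesity drives both the exposure and the outcome in this simulation, the crude odds ratio is pulled away from 1, while the adjusted estimate should fall back toward the null.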
    • 6. Bias in Research: The result of systematic error in the design or conduct of a study. Can artificially “trend” results toward the null hypothesis or toward the alternative hypothesis. A major problem to consider when planning any study.
    • 7. Common Biases: Selection bias: one relevant group in the population (e.g., cases positive for the predictor variable) has a higher probability of being included in the sample. Ascertainment: bias in asking questions or offering tests of one group over another. Information: bias from erroneously classifying people in exposure/outcome categories. Adjudication: bias in determining if the treatment was helpful due to partial or inadequate blinding. Recall/Response: bias associated with inaccurate recall of exposure or representation of true exposure (self-report). Experimenter/Interviewer bias: differential treatment of participants in treatment and control groups. Publication: the tendency to publish only “positive” or “significant” findings.
    • 8. Reliability: Refers to the consistency of an instrument/measurement. Thought of as an individual’s “true score” on the phenomenon you aim to measure minus “measurement error.” Two common types of reliability: internal consistency (Cronbach’s alpha, KR-20) and inter-rater (Kappa statistic). Necessary but not sufficient in determining validity.
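A short sketch of the two reliability statistics named on the slide: Cohen’s kappa for inter-rater agreement (the gymnastics-judging example from the speaker notes) and Cronbach’s alpha for internal consistency. All scores below are made up for illustration.

```python
# Illustrative reliability calculations; the scores are invented.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Inter-rater reliability: two judges scoring the same eight routines.
judge_a = [3, 4, 4, 2, 5, 3, 4, 2]
judge_b = [3, 4, 3, 2, 5, 3, 4, 3]
print("Cohen's kappa:", round(cohen_kappa_score(judge_a, judge_b), 2))

# Internal consistency: Cronbach's alpha for a 4-item scale
# (rows = respondents, columns = items).
def cronbach_alpha(items: np.ndarray) -> float:
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

scale = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 3, 2],
    [4, 4, 5, 4],
])
print("Cronbach's alpha:", round(cronbach_alpha(scale), 2))
```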
    • 9. Reliability
    • 10. Validity: Refers to the accuracy of an instrument/measurement; in other words, “the degree to which you’re measuring what you claim to measure.” Two broad types of validity: internal validity and external validity.
    • 11. Internal Validity: Concerns the accuracy of measurement within the study. Can be threatened by biases; confounding; history (large-scale events that change participants’ attitudes or behavior, e.g., a recession); maturation (participants change over time, e.g., growth, fatigue); repeated testing (participants get wise to the study and remember the test questions); compensatory rivalry/resentful demoralization (control participants work extra hard to prove themselves, or withdraw because they are not getting the treatment); diffusion (treatment effects spread from the treatment group to the control group).
    • 12. External Validity: The ability to generalize the findings of your study to the relevant population. Threatened by bias, confounding, non-experimental design (i.e., case-control vs. RCT), and lack of randomization. External validity is strongest when a true experimental design is used.
    • 13. Comparing Research Designs: Cross-Sectional Studies, Case-Control Studies, Cohort Studies, Randomized Controlled Trial (RCT).
    • 14. Pyramid of Clinical Evidence (strongest at the top): Systematic Reviews, Summaries, and Meta-analyses; RCTs (Level 1 evidence); Cohort Studies (Level 2 evidence); Cross-Sectional Studies (Level 2.3); Case-Control Studies (Level 3 evidence); Case Series and Case Reports; Ideas, Editorials, and Opinions; Animal research; In vitro (“test tube”) research.
    • 15. Cross-Sectional Studies: A “snapshot” of a population. People are studied at a “point” in time, without follow-up. Strength of evidence… What are some research questions that can be answered with cross-sectional designs?
    • 16. Advantages and Disadvantages of Cross-Sectional Studies. Advantages: fast and inexpensive; no loss to follow-up; springboard to expand/inform the research question; can target a larger sample size. Disadvantages: can’t determine a causal relationship; impractical for rare diseases; risk of nonresponse.
    • 17. Case-Control Studies: Always retrospective (prevalence vs. incidence). A sample with the disease is selected from a population (cases), and a sample without the disease is selected from a population (controls). The groups are compared using possible predictors of the disease state.
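Since the slide flags the prevalence vs. incidence distinction, here is a quick worked contrast with hypothetical counts; the numbers are invented for illustration and deliberately simplified.

```python
# Prevalence vs. incidence with invented counts (a simplified example).
population          = 10_000
existing_cases      = 500    # people living with the disease on January 1
new_cases_this_year = 100    # newly diagnosed during the year
at_risk             = population - existing_cases

point_prevalence = existing_cases / population    # proportion with the disease right now
incidence        = new_cases_this_year / at_risk  # proportion of at-risk people who develop it this year

print(f"Point prevalence: {point_prevalence:.1%}")
print(f"Incidence:        {incidence:.2%} per year")
```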
    • 18. Advantages and Disadvantages of Case-Control Studies. Advantages: high information yield with few participants; useful for rare outcomes. Disadvantages: cannot estimate the incidence of disease; limited outcomes can be studied; highly susceptible to biases.
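Because a case-control study cannot estimate incidence, its results are usually summarized as an odds ratio rather than a relative risk. A minimal worked sketch with invented counts:

```python
# Odds ratio from a hypothetical case-control 2x2 table (all counts invented).
#                exposed   unexposed
#   cases           40         60
#   controls        20         80
import math

a, b = 40, 60   # cases: exposed, unexposed
c, d = 20, 80   # controls: exposed, unexposed

odds_ratio = (a * d) / (b * c)                 # (a/b) divided by (c/d)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)   # Woolf standard error of log(OR)
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```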
    • 19. Strategies for Sampling Controls: Population-based versus hospital/clinic-based controls; matching (individual level or group level); using 2 or more control groups.
    • 20. For Discussion: “How much does a family history of alcoholism increase the risk of being an alcoholic?” The PI plans a case-control study to answer this question. How should she pick the cases? How should she pick the controls? What are some potential sources of bias in the sampling of cases and controls?
    • 21. Cohort Studies A “cohort” is a group of individuals who are followed or traced over a period of time. A cohort study analyzes an exposure/disease relationship within the entire cohort. Groups selected based on exposure to a risk factor.
    • 22. Cohort Design
    • 23. [Diagram] Are U.S. athletes more likely to win a gold medal than Chinese athletes at the 2012 Olympics? The group of interest (U.S.) and the comparison group (China) are followed over the games, and their outcomes are compared.
    • 24. Prospective versus Retrospective Cohort Studies. Prospective: exposure is assessed at the beginning of the study (the present), and participants are followed into the future for the outcome. Retrospective: exposure is assessed at some point in the past, and the outcome has already occurred.
    • 25. Advantages and Disadvantages of Cohort Studies. Advantages: establish population-based incidence; accurate relative risk; temporal relationship can be inferred; time-to-event analysis is possible; usable when randomization is not possible; reduces biases (selection, information); can study multiple outcomes. Disadvantages: lengthy and costly; may require very large samples; not suitable for rare or long-latency diseases; unexpected environmental changes; nonresponse, migration, and loss to follow-up; sampling, ascertainment, and observer biases; changes over time in staff/methods.
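Because a cohort study starts from exposure status and measures incidence directly, it can estimate the relative risk listed among the advantages above. A minimal sketch with hypothetical counts:

```python
# Relative risk from a hypothetical cohort 2x2 table (all counts invented).
#                disease   no disease
#   exposed          30         170
#   unexposed        20         380

risk_exposed   = 30 / (30 + 170)   # incidence in the exposed group
risk_unexposed = 20 / (20 + 380)   # incidence in the unexposed group
relative_risk  = risk_exposed / risk_unexposed

print(f"Incidence (exposed):   {risk_exposed:.3f}")
print(f"Incidence (unexposed): {risk_unexposed:.3f}")
print(f"Relative risk:         {relative_risk:.1f}")
```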
    • 26. Randomized Controlled Trial: Considered the “gold standard” by much of the research community (Level 1 evidence). Blind vs. double blind. Randomization. Cause and effect.
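As a concrete illustration of the randomization step, here is a small sketch contrasting simple randomization with block randomization; the patient IDs, arm labels, and block size are invented for this example.

```python
# Illustrative allocation schemes for a two-arm trial (details are hypothetical).
import random

random.seed(1)
patients = [f"patient_{i:02d}" for i in range(1, 13)]

# Simple randomization: each patient is assigned independently,
# so arm sizes can drift out of balance by chance.
simple = {p: random.choice(["treatment", "control"]) for p in patients}

# Block randomization (block size 4): each block contains equal numbers
# of each arm, keeping group sizes balanced as enrollment proceeds.
def block_randomize(ids, block_size=4):
    assignments = {}
    for start in range(0, len(ids), block_size):
        block = ["treatment", "control"] * (block_size // 2)
        random.shuffle(block)
        for pid, arm in zip(ids[start:start + block_size], block):
            assignments[pid] = arm
    return assignments

blocked = block_randomize(patients)
print(simple)
print(blocked)
```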
    • 27. Designs of RCTs. Parallel group trial: patients stay in the same randomized group throughout the study. Cross-over trial: patients are randomly assigned to one group and then crossed over to the other group at some point; each patient serves as their own control, which greatly reduces sources of bias and confounding. Factorial trial: two or more interventions in a single experiment.
    • 28. Disadvantages of RCT Designs: Extremely time- and resource-demanding; unethical in many situations; poor external validity if the RCT is too highly controlled; difficult to study rare events; therapeutic misconception.
    • 29. Material Learned: Common problems in research design (confounding, bias, reliability, validity); observational research designs (cross-sectional studies, case-control studies, cohort studies); experimental research designs (RCT design). Questions?
    • 30. In Pairs… Work together to brainstorm an example of how your topic could be addressed using 1) a cross-sectional design, 2) a case-control design, 3) a prospective or retrospective cohort design, and 4) an RCT. Be prepared to share your responses.
