How to Critique Articles
3rd Medical Club meeting
20 Oct 2011

Published in Education, Health & Medicine
Slide notes
  • It's not how the data are analyzed that matters most; it's how the data are collected.
  • In a critical appraisal, consider the reasons for the study and determine whether enough evidence is presented to justify it.
  • Randomization ensures that both measurable and unmeasurable factors are balanced across the standard and the new therapy, assuring a fair comparison. It also guarantees that no conscious or subconscious effort was used to allocate subjects in a biased way.
  • Nevertheless, for any deviation or modification to the protocol, you can ask whether this change would have made sense to include in the protocol if it had been thought of before data collection began.

Transcript

  • 1. HOW TO CRITIQUE SCIENTIFIC ARTICLES Hani Almoallim
  • 2. WHAT INFORMATION WOULD I REQUIRE TO ACCEPT THE CONCLUSION? Fisher’s assertability question
  • 3. KIPLING'S SIX HONEST SERVING MEN• I keep six honest serving men• They taught me all I know• Their names are: what, why and when• How, where and who• R. Kipling, 1902
  • 4. THE PLAN• Overview of article structure• What's considered the gold standard of research?• Hands-on evaluation of medical articles
  • 5. SCIENTIFIC PAPER• The Council of Biology Editors defines a scientific paper as follows: an acceptable primary scientific publication must be the first disclosure containing enough information to enable peers:• 1- to assess observations• 2- to repeat the experiment• 3- to evaluate the intellectual process
  • 6. SCIENTIFIC PAPER• New• True• Important• Comprehensible
  • 7. Why: study design• How: study methodology• Who: study population• What: intervention and outcomes• How many: statistics• What else?
  • 8. TITLE• Has to be informative, concise and graceful!• Attract the reader!• Tell what the study is about!• Why was the study done?• What gets studied is what gets funded!
  • 9. AUTHORS• Are they known in the field?• What's their specialty?• Citation index.
  • 10. DATE OF SUBMISSION/ACCEPTANCE• A long delay may indicate that referees found serious issues in the initial version
  • 11. ABSTRACT• Why the study was done• What was done• What was found• What was concluded• It helps to answer Fisher’s assertability question
  • 12. IT'S NOT HOW THE DATA WERE ANALYZED; IT'S HOW THE DATA WERE COLLECTED
  • 13. INTRODUCTION• How important is the study and what's new?• Is there a clear statement to justify the study?• Is there a clear statement of the study hypothesis?
  • 14. WHY: STUDY QUESTION• The study design, the population to be studied, and the methods to be used all depend on the purpose of the study• Was the hypothesis stated in advance, or did it arise from the data?• "Fishing expedition": exploring the data for associations, then reporting the significant ones!
  • 15. Is It Efficacy or Effectiveness?• Therapeutic studies• Efficacy: whether the intervention can work; a very controlled population and experimental conditions (ideal); short-term goals• Effectiveness: whether the intervention does more good than harm under normal clinical conditions; long-term goals
  • 16. Bypass Surgery in Patients with Coronary Heart Disease• Efficacy: patients with clearly documented coronary stenosis; outcome is increased myocardial flow or relief of symptoms; anyone allocated to treatment who did not receive it is not included in the study• Effectiveness: intent-to-treat principle; outcomes are long-term survival and quality of life; anyone allocated who did not receive the surgery is still included in the study
  • 17. Nicotine patch therapy in adolescent smokers (Smith et al., 1996)• What information would I require to accept the conclusion?
  • 18. WHY• Is sufficient evidence presented to justify the study?• Is there a clear statement of the purpose of the study?• Is there a clear statement of the study hypothesis?• Is it clearly stated whether the study addresses efficacy or effectiveness?
  • 19. HOW: STUDY DESIGN• Classification: experimental designs are randomized or non-randomized• Observational designs with a comparison group are analytical (cohort, case-control, cross-sectional); those without one are descriptive
  • 20. Comparison Group• Almost all studies have a comparison:• Do left-handed subjects live longer than right-handed ones?• Are women more likely to have periodontal disease than men?• The comparison must be fair
  • 21. EXAMPLE 2: Cancer and Vitamin C• An observational study of Vitamin C as a treatment for advanced cancer.• For each patient, ten matched controls were selected with the same age, gender, cancer site, and histological tumor type.• Patients receiving Vitamin C survived four times longer than the controls (p < 0.0001).• Cameron and Pauling 1976
  • 22. • Ten years later, the Mayo Clinic conducted a randomized experiment which showed no statistically significant effect of Vitamin C. Moertel 1989• Why did the Cameron and Pauling study differ from the Mayo study?• The treatment group represented patients newly diagnosed with terminal cancer; they received Vitamin C and were followed prospectively.• The control group was selected from death certificate records; it represented a retrospective chart review.
  • 23. Be Cautious When a Study Compares Prospective Data to Retrospective Data
  • 24. Did the Author Create the Groups?• Experimental study• Observational study• Who did the choosing?• Author decided who gets the intervention: experimental• Patients/doctors decided, or groups were intact prior to the study: observational
  • 25. EXAMPLE• 121 children with moderate-to-severe asthma were "randomly assigned to receive subcutaneous injections of either a mixture of seven aeroallergen extracts or a placebo." Adkinson (1997). An experimental design.
  • 26. EXAMPLE• "80 severe recidivist alcoholics received acupuncture either at points specific for the treatment of substance abuse (treatment group) or at nonspecific points (control group)." Bullock (1989). Since the researchers controlled the nature of the acupuncture, this is an experimental design.
  • 27. EXAMPLE• 33 health care workers who became seropositive to HIV after percutaneous exposure to HIV-infected blood were compared to 665 health care workers with similar exposure who did not become seropositive. Cardo (1997). Since the researchers did not control who became seropositive, this is an observational study.
  • 28. EXAMPLE• 80,082 women between the ages of 34 and 59 years were followed for 14 years to look for instances of non-fatal myocardial infarction or death from coronary heart disease. These women were divided into low, intermediate, and high groups on the basis of their consumption of dietary fat. Hu (1997).• Since the women themselves controlled their diets, rather than having a diet imposed on them by the researchers, this represents an observational design.
  • 29. • Information from experimental designs is considered more authoritative than information from observational designs
  • 30. HOW: STUDY DESIGN• What is the study design?• Was it randomized?• Was it blinded?• Was prognostic stratification used?
  • 31. HOW: STUDY DESIGN• Controlled trial• Before-and-after• Prospective analytic• Cross-sectional• Retrospective• Case series
  • 32. Was the Assignment Randomized?• Assurance that the two groups are comparable in every way except for the therapy received.• Use of a random device, such as a coin flip or a table of random numbers.• Be alert to "quasi-random allocation": patients allocated on the basis of a seemingly random process (date of birth, chart number, etc.)• If randomization was not followed: could any bias have occurred from the allocation of patients?
  • 33. Why Is Randomization Important?• Groups are more comparable for known and unknown variables (measurable and unmeasurable)• Eliminates selection bias• Some statistical analyses presuppose randomization• It's difficult to achieve blinding in a trial that is not based on random allocation
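The coin-flip allocation the slides describe can be sketched in a few lines of Python. This is an illustrative toy (the function name and arm labels are my own), not the allocation procedure of any study discussed here:

```python
import random

def randomize(n_subjects, seed=None):
    """Allocate each subject to 'treatment' or 'control' with a fair
    coin flip, a computational stand-in for the coin flip or table of
    random numbers a trial protocol would specify."""
    rng = random.Random(seed)
    return [rng.choice(["treatment", "control"]) for _ in range(n_subjects)]

print(randomize(10, seed=1))
```

Because every allocation is unpredictable, neither measurable nor unmeasurable patient factors can systematically favor one arm; quasi-random schemes (date of birth, chart number) lose exactly this unpredictability, since staff can anticipate the next assignment.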
  • 34. What Type of Blinding Was Used?• Knowledge of group membership, either before or during data collection, can bias the study• At the start of the study, did the patients know which group they were going to be placed in?• During the study, did the patients know which group they were in?
  • 35. BLINDING• Single blind• Double blind• Triple blind• Surgical trials: at least the person who performs the outcome assessment should be blinded.• If the study was not blinded, how might the lack of blinding have affected the results?
  • 36. BLINDING• Studies without blinding show an average bias of 11-17% (Schulz 1996, Colditz 1989)• Compared with a blinded study, an unblinded one overestimates the treatment effect by 11-17%
  • 37. Why Is Blinding Important?• Prevents bias from allocating particular patients (e.g., the very sick) to the experimental or control arm.• Minimizes differences in how patients are treated during care.• Prevents losing patients from the trial.• Prevents placebo effects of a treatment.• Greater validity of results: patients are more likely to report side effects.• Minimizes expectation bias for subjective outcomes, e.g., pain.
  • 38. WHO: STUDY POPULATION• Is the population from which the sample was drawn clearly described?• Did it represent the full spectrum of disease or a certain subset?
  • 39. WHO: STUDY POPULATION• Clear and replicable inclusion and exclusion criteria• Did the criteria match the goal of the study?• Who was excluded at the start of the study?• Who dropped out during the study?• Was there any effort to minimize dropout?• Were the authors able to characterize the demographics of the dropouts?
  • 40. WHAT: Intervention and Outcomes• What is the intervention? Is it clearly stated?• Were there enough subjects?• Did the research have a narrow focus?• Did the authors deviate from the plan?• Did the authors discard outliers?
  • 41. Were There Enough Subjects?• A small sample size leads to a lack of power and a falsely negative study• In half of the articles reporting a non-significant difference between therapies, a 50% improvement in performance could easily have been missed• Type II errors and small sample sizes are ubiquitous in the medical literature (Freiman et al.)• Predetermine the needed size!
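The advice to predetermine the needed size can be illustrated with the standard normal-approximation formula for comparing two means; the formula is textbook material rather than something from this presentation, and the delta and sigma values below are hypothetical:

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate subjects needed per arm to detect a true difference
    in means of size delta, given common standard deviation sigma,
    a two-sided test at level alpha, and the desired power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for alpha=0.05
    z_beta = NormalDist().inv_cdf(power)           # ~0.84 for 80% power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Detecting a half-standard-deviation effect takes about 63 subjects per arm:
print(n_per_group(delta=0.5, sigma=1.0))
```

A study run with far fewer subjects than such a calculation suggests has low power, so its "negative" result may simply be a type II error.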
  • 42. Did The Research Have A Narrow Focus?• A good research study has limited objectives that are specified in advance. Failure to limit the scope of a study leads to problems with multiple testing.• A large number of comparisons limits the amount of evidence that you can place on any single conclusion.• Fishing
  • 43. • “If you torture your data long enough, it will confess to something."
  • 44. Be aware of multiple-comparison problems: an increase in type I error
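The inflation of type I error follows directly from the probability of at least one false positive among k independent tests at level alpha; this is a standard calculation, not taken from the slides:

```python
def familywise_error(k, alpha=0.05):
    """Probability of at least one false positive among k independent
    tests when every null hypothesis is actually true."""
    return 1 - (1 - alpha) ** k

# With 20 tests, the chance of at least one spurious "finding" is ~64%.
for k in (1, 5, 20, 100):
    print(k, round(familywise_error(k), 3))
```

This is why a "fishing expedition" over many outcomes almost guarantees something significant; a simple (if conservative) guard is the Bonferroni correction, which tests each comparison at level alpha / k.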
  • 45. Were Statistical Tests Applied Appropriately?• Requires knowledge of biostatistics• Greenhalgh T. Statistics for the non-statistician. Part I. British Medical Journal 1997• Greenhalgh T. Statistics for the non-statistician. Part II: significant relations and pitfalls. British Medical Journal 1997
  • 46. • Withdrawals: patients removed by the investigators• Dropouts: patients who leave the study of their own will• Crossovers: patients who change arms of the study• Poor compliers• Intent to treat: subjects are analyzed according to the treatment to which they were randomized
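How intent-to-treat differs from analyzing patients by the treatment actually received can be shown with a tiny invented dataset (all numbers below are made up for illustration):

```python
# Each row: (arm randomized to, arm actually received, outcome 1=improved)
subjects = [
    ("treatment", "treatment", 1),
    ("treatment", "control",   0),  # a crossover
    ("treatment", "treatment", 1),
    ("control",   "control",   0),
    ("control",   "control",   1),
    ("control",   "control",   0),
]

def improvement_rate(data, idx, arm):
    """Mean outcome among rows whose column idx equals arm."""
    vals = [outcome for randomized, received, outcome in data
            if (randomized, received)[idx] == arm]
    return sum(vals) / len(vals)

# Intent-to-treat: group by the arm subjects were randomized to (idx=0)
itt = improvement_rate(subjects, 0, "treatment")
# "As treated": group by the arm actually received (idx=1)
as_treated = improvement_rate(subjects, 1, "treatment")
print(itt, as_treated)
```

Intent-to-treat preserves the comparability that randomization created: dropping the crossover makes the treatment look perfect (100% vs 67% improvement in this toy data), which is precisely the bias the intent-to-treat analysis avoids.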
  • 47. SO WHAT• If a difference was detected, is it clinically significant?• "For a difference to be a difference, it has to make a difference"
  • 48. SO WHAT• Were the patients entered and analyzed sufficiently representative that the results can be generalized?• Can the intervention as performed be generalized to other settings?
  • 49. HOW• Study design• Randomized allocation of subjects• Control group• Blinding• WHO• Population clearly described• Inclusion and exclusion criteria• Volunteers• Sample size• WHAT• What intervention• Compliance• Dropout• Narrow focus• Change of plan• Alternative hypotheses
  • 50. Why: study design• How: study methodology• Who: study population• What: intervention and outcomes• How many: statistics• What else?
  • 51. WHAT DOES IT TAKE TO CONVINCE ME THAT THIS EVIDENCE IS TRUE?
  • 52. THANK YOU
  • 53. How many: statistical significance and sample size• Was statistical significance considered?• Were the tests applied appropriately?• Did they consider the sample size prior to the start?• Was the study large enough to detect a difference?
  • 54. REFERENCES• Critical thinking: understanding and evaluating dental research. D. Brunette• http://www.childrensmercy.org/stats/journal/jour2003-07.htm• http://healtoronto.com/howto.html• Thank you