1. Evaluating a clinical trial. F. Javier Rodriguez-Vera, Department of Internal Medicine, Hospital do Barlavento Algarvio, Portimão, Portugal, EU
2. A clinical trial is a study designed to answer specific questions about vaccines, new therapies, or new ways of using known treatments. Clinical trials (also called medical research or research studies) are used to determine whether new drugs or treatments are both safe and effective.
3. Once researchers test new therapies or procedures in the laboratory and get promising results, they begin planning clinical trials. New therapies are tested on people only after laboratory and animal studies show promising results.
4. Basically, a trial compares two options in a given situation. These two options may be: therapeutic, diagnostic, or prognostic.
5. Depending on the complexity, trials may be: before-after studies, uncontrolled clinical trials, controlled clinical trials, or randomized controlled trials.
6. Before-after study: it is the simplest design and not properly a trial. It consists of comparing the situation in a person or group of people at two points in time: before applying a certain drug and after doing so. The probability of bias is high.
7. Uncontrolled clinical trial: an intervention is given and two groups are compared, but the allocation to the groups is not randomized. It may happen that the two groups compared are not homogeneous.
8. Controlled clinical trial: an intervention is applied to two groups which have been randomly created. Both the patients and the doctors know the intervention applied, so a placebo effect may exist.
9. •An RCT seeks to measure and compare the outcomes of two or more clinical interventions. •One intervention is regarded as the standard of comparison or control. •Participants receive the interventions in random order to ensure similarity of characteristics at the start of the comparison. •Randomisation can be achieved through a variety of procedures. •Individuals, groups, and the order in which measurements are obtained can all be randomised. •RCTs cannot answer all clinical questions.
10. An RCT is a study in which people are allocated at random to receive one of several clinical interventions.
11. RCTs are quantitative, comparative, controlled experiments in which a group of investigators studies two or more interventions in a series of individuals who receive them in random order.
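The random allocation described above can be sketched in a few lines of Python. This is a minimal illustration of simple randomisation only; the participant IDs, arm names, and function name are hypothetical, and real trials use concealed allocation procedures rather than a script like this.

```python
import random

def randomise(participant_ids, arms=("treatment", "control"), seed=42):
    """Allocate each participant to one arm at random (simple randomisation)."""
    rng = random.Random(seed)  # fixed seed only so the example is reproducible
    return {pid: rng.choice(arms) for pid in participant_ids}

# Hypothetical participant identifiers
allocation = randomise(["P01", "P02", "P03", "P04", "P05", "P06"])
```

Note that simple randomisation can produce unbalanced groups in small samples, which is why real trials often use blocked or stratified randomisation instead.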
12. RCTs can be classified according to: (1) the aspect of the interventions investigators want to explore; (2) the way in which the participants are exposed to the interventions; (3) the number of participants included in the study; (4) whether the investigators and participants know which intervention is being assessed; (5) whether the preferences of non-randomised individuals and participants are taken into account in the design of the study.
13. Why does a clinical trial have to be analyzed? The bias: any factor or process that tends to deviate the results or conclusions of a trial systematically away from the truth. Bias can occur in a trial during the planning stages, the selection of participants, the administration of interventions, the measurement of outcomes, the analysis of data, the interpretation and reporting of results, and the publication of reports. Bias can also occur when a person is reading the report of a trial.
14. Types of bias. Selection bias: occurs when the outcomes of a trial are affected by systematic differences in the way in which individuals are accepted or rejected for a trial, or in the way in which the interventions are assigned to individuals once they have been accepted into a trial. Ascertainment bias: occurs when the results or conclusions of a trial are systematically distorted by knowledge of which intervention each participant is receiving. Ascertainment bias can be introduced by the person administering the interventions, the person receiving the interventions (the participants), the investigator assessing or analysing the outcomes, and even by the people who write the report describing the trial.
15. What is publication bias? Some evidence shows a propensity for investigators and sponsors to write and submit, and for peer-reviewers and editors to accept, manuscripts for publication depending on the direction of the findings. What is language bias? Recently, a variation of publication bias has been described as 'language bias', to indicate that manuscripts may be submitted to and published by journals in different languages depending on the direction of their results, with more studies with positive results published in English.
16. What is country of publication bias? It has also been shown that researchers in some countries may publish only positive results, such as with RCTs evaluating acupuncture conducted in China, Japan, Hong Kong, and Taiwan. What is time lag bias? This bias occurs when the speed of publication depends on the direction and strength of the trial results. In general, it seems that trials with 'negative' results take twice as long to be published as 'positive' trials. What is 'potential breakthrough' bias? This type of bias can be introduced by journalists (and, increasingly, Internet publishers) if they systematically select, overrate, and disseminate trials depending on the direction of the findings.
17. Rivalry bias: underrating the strengths or exaggerating the weaknesses of studies published by a rival. 'I owe him one' bias: a variation of the previous bias that occurs when a reader (particularly a peer-reviewer) accepts flawed results from a study by someone who did the same for the reader. Personal habit bias: overrating or underrating a study depending on the habits of the reader. Moral bias: overrating or underrating a study depending on how much it agrees or disagrees with the reader's morals.
18. Clinical practice bias: overrating or underrating a study according to whether the study supports or challenges the reader's current or past clinical practice. Complementary medicine bias: the systematic overrating or underrating of studies that describe complementary medicine interventions, particularly when the results suggest that the interventions are effective. 'Do something' bias: overrating a study which suggests that an intervention is effective, particularly when there is no effective intervention available. 'Do nothing' bias: related to the previous one; it occurs when readers underrate a study that discourages the use of an intervention in conditions for which no effective treatment exists.
19. Favoured design bias: overrating a study that uses a design supported, publicly or privately, by the reader. Disfavoured design bias: the converse of favoured design bias; it occurs when a study is underrated because it uses a design that is not favoured by the reader. Resource allocation bias: overrating or underrating a study according to the reader's preference for resource allocation. Prestigious journal bias: occurs when the results of studies published in prestigious journals are overrated. Non-prestigious journal bias: the converse of prestigious journal bias; it occurs when the results of studies published in non-prestigious journals are underrated.
20. Printed word bias: occurs when a study is overrated because of undue confidence in published data. Prominent author bias: occurs when the results of studies published by prominent authors are overrated. Unknown or non-prominent author bias: occurs when the results of studies published by unknown or non-prominent authors are underrated. Famous institution bias: occurs when the results of studies emanating from famous institutions are overrated. Unrecognised or non-prestigious institution bias: related to the previous bias; it occurs when the results of studies emanating from unrecognised or non-prestigious institutions are systematically underrated.
21. Large trial bias: occurs when the results of large trials are overrated. Multicentre trial bias: occurs when the results of multicentre collaborative trials are overrated. These trials do not necessarily have large sample sizes.
22. Assessing the quality of RCTs: why, what, how, and by whom? •There is no such thing as a perfect trial. •Internal validity is an essential component of the assessment of trial quality. •There are many tools to choose from when assessing trial quality, or new ones can be developed. •Using several people to assess trial quality reduces mistakes and the risk of bias during assessments. •How to use quality assessment will depend on your role, the purpose of the assessment, and the number of trials on the same topic being evaluated. •The CONSORT statement aims to improve the standard of written reports of RCTs.
23. A good clinical trial should: •Answer clear and relevant clinical questions previously unanswered. •Evaluate all possible interventions for all possible variations of the conditions of interest, in all possible types of patients, in all settings, using all relevant outcome measures. •Include all available patients. •Include strategies to eliminate bias during the administration of the interventions, the evaluation of the outcomes, and the reporting of the results, thus reflecting the true effect of the interventions. •Include perfect statistical analyses. •Be described in reports written in clear and unambiguous language, including an exact account of all the events that occurred during the design and course of the trial, as well as individual patient data, and an accurate description of the patients who were included, excluded, withdrawn, and dropped out. •Be designed, conducted, and reported by researchers who did not have conflicts of interest. •Follow strict ethical principles.
24. Critical appraisal. Treatment. Is the research valid? 1a. Was the assignment of patients to treatments randomized? 1b. Was the randomization list concealed? 1c. Were subjects and clinicians 'blind' to which treatment was being received?
25. Critical appraisal. Treatment. 2a. Were all subjects who entered the trial accounted for at its conclusion? 2b. Were they analyzed in the groups to which they were randomized?
26. Critical appraisal. Treatment. 3a. Aside from the experimental treatment, were the groups treated equally? 3b. Were the groups similar at the start of the trial?
27. Critical appraisal. Treatment. Is the research important? RRR (relative risk reduction), ARR (absolute risk reduction), NNT (number needed to treat).
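The three effect measures named on this slide can be computed directly from the event rates in the control and experimental arms. The sketch below uses hypothetical event rates (20% vs 15%) purely for illustration; the function name is an assumption, not from the source.

```python
def effect_measures(control_event_rate, experimental_event_rate):
    """Compute ARR, RRR and NNT from event rates given as proportions (0-1)."""
    arr = control_event_rate - experimental_event_rate   # absolute risk reduction
    rrr = arr / control_event_rate                       # relative risk reduction
    nnt = 1 / arr                                        # number needed to treat
    return arr, rrr, nnt

# Hypothetical trial: 20% events in the control arm, 15% in the treatment arm
arr, rrr, nnt = effect_measures(0.20, 0.15)
# ARR ≈ 0.05, RRR ≈ 0.25, NNT ≈ 20 (treat about 20 patients to prevent one event)
```

Note how the same RRR of 25% can correspond to very different NNTs depending on the baseline risk, which is why ARR and NNT are reported alongside RRR.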
28. Critical appraisal. Treatment. Can I apply it to my patient? 4. Is this patient so different from those in the trial that the results don't apply?
29. Critical appraisal. Treatment. 5a. How great would the benefit of therapy be for this particular patient? 5b. What is the event rate in my practice for patients like this one?
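One common way to combine questions 5a and 5b is to individualise the NNT using the patient's expected event rate (PEER) from your own practice together with the trial's RRR, as NNT = 1 / (PEER × RRR). This formula is standard in evidence-based-medicine teaching, though the slide does not state it explicitly; the numbers below are hypothetical.

```python
def nnt_for_patient(patient_expected_event_rate, rrr):
    """Individualised NNT: 1 / (PEER * RRR), using the event rate in your own practice."""
    return 1 / (patient_expected_event_rate * rrr)

# Hypothetical: the trial reports an RRR of 25%, but your patients' baseline
# event rate is 40%, higher than the trial's control arm
nnt_patient = nnt_for_patient(0.40, 0.25)  # ≈ 10
```

A higher baseline risk in your patient yields a smaller NNT, so the same therapy is more worthwhile for higher-risk patients.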
30. Critical appraisal. Treatment. Is it consistent with the patient's values and preferences? 6. Do I have a clear assessment of the patient's values and preferences?
31. Critical appraisal. Treatment. 7. Do this intervention and its potential consequences meet them?
32. Critical appraisal. Diagnosis. Is the research valid? 1. Was there an independent, blind comparison with a reference ("gold") standard of diagnosis?
33. Critical appraisal. Diagnosis. 2. Was the diagnostic test evaluated in an appropriate spectrum of patients (like those in whom it would be used in practice)?
34. Critical appraisal. Diagnosis. 3. Was the reference standard applied regardless of the diagnostic test result?
35. Critical appraisal. Diagnosis. Is the research important? Sensitivity. Specificity.
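Sensitivity and specificity come straight from the 2×2 table of test result against the reference standard. The counts below are hypothetical, chosen only to make the arithmetic obvious.

```python
def sens_spec(tp, fn, fp, tn):
    """Sensitivity and specificity from a 2x2 table (test vs reference standard)."""
    sensitivity = tp / (tp + fn)  # proportion of diseased patients correctly detected
    specificity = tn / (tn + fp)  # proportion of non-diseased patients correctly excluded
    return sensitivity, specificity

# Hypothetical 2x2 table: 90 true positives, 10 false negatives,
# 5 false positives, 95 true negatives
sens, spec = sens_spec(90, 10, 5, 95)  # sensitivity = 0.90, specificity = 0.95
```

A highly sensitive test helps rule a disease out when negative, while a highly specific test helps rule it in when positive.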
37. Critical appraisal. Diagnosis. Can I apply it to my patient? 4. Is the diagnostic test available, affordable, accurate, and precise in your setting?
38. Critical appraisal. Diagnosis. 5. Can you generate a clinically sensible estimate of your patient's pre-test probability (from practice data, from personal experience, from the report itself, or from clinical speculation)?
39. Critical appraisal. Diagnosis. 6. Will the resulting post-test probabilities affect your management and help your patient? (Could it move you across a test-treatment threshold?)
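The post-test probability mentioned in question 6 is obtained by converting the pre-test probability to odds, multiplying by the test's likelihood ratio, and converting back. The slide does not show this calculation; the sketch below is a standard textbook version with hypothetical numbers.

```python
def post_test_probability(pre_test_probability, likelihood_ratio):
    """Convert a pre-test probability to a post-test probability via odds and an LR."""
    pre_odds = pre_test_probability / (1 - pre_test_probability)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Hypothetical: pre-test probability of 30% and a positive likelihood ratio of 6
p = post_test_probability(0.30, 6)  # ≈ 0.72
```

Moving from 30% to about 72% may well cross a treatment threshold, which is exactly the judgement question 6 asks.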
40. Critical appraisal. Prognosis/harm. Is the research valid? 1. Was a defined, representative sample of patients assembled at a common (usually early) point in the course of their disease?
41. Critical appraisal. Prognosis/harm. 2. Was patient follow-up sufficiently long and complete?
42. Critical appraisal. Prognosis/harm. 3. Were objective outcome criteria applied in a "blind" fashion?
43. Critical appraisal. Prognosis/harm. 4. If subgroups with different prognoses are identified, was there adjustment for important prognostic factors?
44. Critical appraisal. Prognosis/harm. 5. Was there validation in an independent group ("test-set") of patients?
45. Critical appraisal. Prognosis/harm. Is the research important? Outcome rate (95% CI). Probability (95% CI).
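An outcome rate with its 95% confidence interval can be estimated from the number of events and the number of patients followed. The sketch below uses the normal approximation to the binomial, a simplification that works reasonably for moderate samples and rates away from 0 or 1; the counts are hypothetical.

```python
import math

def rate_with_ci(events, n, z=1.96):
    """Outcome rate with a 95% CI (normal approximation to the binomial)."""
    p = events / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of the proportion
    return p, (p - z * se, p + z * se)

# Hypothetical prognosis study: 30 events among 120 patients followed
rate, (lo, hi) = rate_with_ci(30, 120)  # rate = 0.25, CI ≈ (0.17, 0.33)
```

A wide interval like this one signals that the study, however valid, gives only an imprecise estimate of the outcome rate.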
46. Critical appraisal. Prognosis/harmCan I apply it to my patient?6. Were the study patients similar to your own?
47. Critical appraisal. Prognosis/harm. 7. Will this evidence make a clinically important impact on your conclusions about what to offer or tell your patient?
48. After the critical appraisal, you conclude that the article you've read has a low, moderate, or high probability of being biased.