8. Analysing dichotomous outcomes

To analyse the results of the studies included in your systematic review, you will need to understand the different kinds of data you may encounter. This presentation is an introduction to analysing dichotomous or binary data.

Notes
  • To begin looking at analysis of outcomes in your review, we’re going to start with one of the most common outcome types that you may come across – dichotomous outcomes.
  • When we’ve collected the data from a study, we need to organise it and make sense of it. Each study measures outcomes among its individual participants, and reports those outcomes in various ways. For each study in your review, you will select an effect measure – a statistic to represent the effect of the intervention on the people in that study, such as an odds ratio. It might be the effect measure reported in the study itself, or you might choose a different effect measure that you think is more useful. [Click] We have to do this for each included study in your review – if possible, so that we have a comparable set of effect measures for each study. [Click] Ultimately, our aim will be to synthesise those results across all the studies, and report an overall effect for the intervention. We’ll come back to that stage of the process later. For now, we will look at some of the different types of outcome data you are likely to find at the study level, and how we can select and calculate appropriate effect measures. We’re going to start by talking about the analysis of dichotomous outcomes.
  • Truly dichotomous data are outcomes in which every participant is in one of two possible groups – either with the event of interest or without it: yes or no.
  • In general conversation the terms ‘risk’ and ‘odds’ are used interchangeably (as are the terms ‘chance’, ‘probability’ and ‘likelihood’). In statistics, however, risk and odds have particular meanings and are calculated in different ways, although they are both ways of expressing the chance of being in one of two groups – with the dichotomous outcome, or without. It’s important to be clear about the difference between risk and odds, and which one you’re using.
  • Risk is the concept more familiar to patients and health professionals. Risk describes the probability with which a health outcome or event will occur. Risk is very easy to calculate: in this case, we have a group of 24 people who drank coffee, and 6 of them had a headache. So we take the number of people with a headache, and divide it by the total number of people – in this case, 6 out of 24, or 25%. Risk is often expressed as a percentage, but you can also express it as a decimal (0.25 in this case), or as a number out of 100 or 1,000 people.
  • Odds is slightly different, but just as easy to calculate. It’s probably a concept that is more familiar to gamblers. To calculate the odds, we take the number of people with a headache, and divide it not by the total number, but by the number of people without a headache. So in this case, 6 people with a headache, divided by 18 people without a headache, or 0.33. Odds are not usually expressed as a percentage, but you can express them as a ratio – in this case 1:3.
  • It’s important to be clear about whether you’re using risk or odds, because they are not always the same. As you can see from the first example here, when the event is rare (5 out of 65 people with the event, in this case a headache), the risk and odds will be very similar – risk is 0.077 and odds are 0.083. CLICK However, when the event is common, such as 130 people out of 165 ‘still awake’ at the end of the trial, risk and odds will be very different. In this case, risk is 0.79, and odds are 3.71.
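The arithmetic above is simple enough to check by hand; as a minimal sketch (Python is not part of the original materials, it is used here purely for illustration), the two caffeine examples can be reproduced like this:

```python
# Minimal sketch of the risk and odds calculations described above.

def risk(events, total):
    """Risk = participants with the event / all participants."""
    return events / total

def odds(events, total):
    """Odds = participants with the event / participants without the event."""
    return events / (total - events)

# Rare event: 5 headaches out of 65 people -> risk and odds are similar.
print(risk(5, 65), odds(5, 65))        # ~0.077 and ~0.083

# Common event: 130 of 165 people 'still awake' -> risk and odds diverge.
print(risk(130, 165), odds(130, 165))  # ~0.79 and ~3.71
```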
  • These are some examples of ways in which we can talk about risk and odds, while being specific about which measure we are using.
  • So, we can calculate risk and odds in a group of people, some of whom experience an event. In a trial, what we usually want to do is compare the chances of an event between two groups (such as the intervention and control groups), to decide which of the two groups has a better outcome. For that, we need to use effect measures that compare the risk or odds between two groups.
  • All the information you need to calculate measures for dichotomous outcomes can be expressed in a 2x2 table. ASK: Has anyone seen this kind of table before? ASK: Can anyone suggest any of the effect measures we might use to compare two groups with each other?
  • There are three main effect measures we use to express the difference between two groups for a dichotomous outcome: risk ratios (also known as relative risk), odds ratios and risk difference (also known as absolute risk reduction). Whichever one you use, remember that all measures of effect are uncertain – we should never present the result of any study without a measure of uncertainty, preferably a confidence interval. Both the effect measure and measures of uncertainty can be calculated from a 2x2 table. Note: Number Needed to Treat (NNT) is a popular measure of the effect of an intervention, but it can’t be used directly in a meta-analysis. You don’t have to report the NNT in your review, but if you would like to, you’ll need to use one of these other statistics for your analysis, and then calculate the NNT at the end to report it in your review. The NNT can be easily calculated as 1/|RD| (1 divided by the absolute value of the RD).
  • The risk ratio simply divides the risk in the intervention group by the risk in the control group. In this case, the risk in the intervention group is 17/68 or 0.25, and the risk in the control group is 9/64 or 0.14. We take the intervention risk, 0.25 and divide it by the control risk, 0.14, and get the answer 1.79. If you’re comparing two active interventions, it’s up to you to decide which intervention you’re going to treat as the intervention and which you will treat as the control in this calculation – remember to be consistent throughout the review, and make sure you’re interpreting the result correctly for each group in that case. Where RR is less than 1, this implies that the intervention reduced the number of events compared to the control. If the RR is greater than 1, the intervention increased the number of events. If the RR is equal to 1, this implies there was no difference between the two groups – 1 is our point of no effect for risk ratios. ASK: What does this RR of 1.79 mean?
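As a quick check of that calculation (a sketch only – in practice RevMan does this for you), the risk ratio from the caffeine 2x2 table works out as follows:

```python
# Risk ratio from the caffeine vs decaf 2x2 table (17/68 vs 9/64).
events_int, total_int = 17, 68   # caffeine group
events_ctl, total_ctl = 9, 64    # decaf group

risk_int = events_int / total_int   # 0.25
risk_ctl = events_ctl / total_ctl   # ~0.14
risk_ratio = risk_int / risk_ctl
print(round(risk_ratio, 2))         # ~1.78 (1.79 when the rounded risks 0.25/0.14 are used)
```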
  • These are some examples of the ways we can talk about RR. For our calculated RR of 1.79, this means the intervention increased the number of headaches by 79%, or that the risk in the intervention group was 179% of the control. Hypothetically, if we had found a decreased risk, say a RR of 0.79, less than one, we could say that the chances of having a headache in the intervention group were 79% of the chances in the control group, or that the intervention reduced the risk of headache by 21%.
  • The odds ratio is very similar – instead of dividing risk by risk, we divide the odds in the intervention group by the odds in the control group. In this case, odds in the intervention group of 0.33 divided by control group odds of 0.16 gives an odds ratio of 2.06. Just like the RR, where the OR is less than 1, this implies that the intervention reduced the number of events compared to the control. If the OR is greater than 1, the intervention increased the number of events. If the OR is equal to 1, this implies there was no difference between the two groups, and therefore no intervention effect. ASK: What does this OR of 2.06 mean?
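The same table gives the odds ratio. A small sketch of the calculation (the exact value is about 2.04; the 2.06 quoted above comes from using the rounded odds 0.33 and 0.16):

```python
# Odds ratio from the same 2x2 table.
odds_int = 17 / 51   # events / non-events in the caffeine group, ~0.33
odds_ctl = 9 / 55    # events / non-events in the decaf group, ~0.16
odds_ratio = odds_int / odds_ctl
print(round(odds_ratio, 2))   # ~2.04 (2.06 with the rounded odds above)
```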
  • These are some examples of the ways we can talk about OR. In this case, our OR of 2.06 means the intervention doubled the odds of headache. We can say the odds in the intervention group were increased to 206% of the odds in the control group, or increased by 106%. Hypothetically, if we had found a result indicating fewer headaches in the intervention group, with an OR less than 1 – say an odds ratio of 0.06 – we could say that the intervention had reduced the odds of headache to 6% of the odds in the control group, or reduced the odds by 94%. You can see that the ways we express RR and OR are not very different, but we do need to be careful to be specific about which we are reporting. Misinterpreting an OR as an RR can lead to overestimating or underestimating the effect, especially for common events.
  • The third effect measure that can be used for dichotomous outcomes is the risk difference. The previous two measures, RR and OR, are ratios, which give relative measures of effect by dividing one value by the other. RD is an absolute measure, giving the absolute difference between the risks in each group. All we do is subtract – take the risk in the intervention group, and subtract the risk in the control group. In this case, 0.25 – 0.14, which gives an RD of 0.11. (There is no corresponding ‘odds difference’ measure in common use.) For absolute measures, our point of no effect is different: for ratios, the point of no effect is 1, but for RD the point of no effect is 0. Where the risk difference is less than 0, this implies that the intervention reduced the number of events compared to the control. If the risk difference is greater than 0, the intervention increased the number of events. If the risk difference is equal to 0, this implies there was no difference between the two groups, and therefore no intervention effect. ASK: What does this RD of 0.11 mean?
  • It’s common to talk about RD in terms of ‘percentage points’, to make it clear that we’re not speaking in relative terms, so we could say our RD of 0.11 means the intervention increased the risk of headache by 11 percentage points. An increase of 11 percentage points is not an increase of 11% of the control value, but a risk 11 points higher than the control risk (here, from about 14% to 25%). Hypothetically, if we had found a reduction in headaches, say an RD of -0.11, we might say that the intervention reduced the risk of headache by 11 percentage points.
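For completeness, a sketch of the risk difference and the NNT conversion mentioned earlier (NNT = 1/|RD|); because the intervention increases headaches in this example, the result is strictly a ‘number needed to harm’:

```python
# Risk difference and NNT from the same 2x2 table.
risk_int = 17 / 68        # 0.25
risk_ctl = 9 / 64         # ~0.14
rd = risk_int - risk_ctl  # ~0.11
nnt = 1 / abs(rd)         # ~9: roughly one extra headache for every 9 people treated
print(round(rd, 2), round(nnt))
```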
  • [Note to trainers: you may wish to delete this slide from the printed materials for your workshop.] Risk ratio: the risk of an event with the intervention was 40% of the risk with the control; the intervention reduced the risk to 40% of what it was; treatment reduced the risk by 60%. Odds ratio: the intervention reduced the odds of an event to 25% of what they were; treatment reduced the odds by 75%.
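These answers can be checked against the 2x2 table given on the ‘Now it’s your turn’ slide later in this presentation (2 events out of 10 with the intervention, 5 out of 10 with control); a quick sketch:

```python
# Exercise check: intervention 2/10 events, control 5/10 events.
risk_ratio = (2 / 10) / (5 / 10)   # 0.4  -> risk with intervention was 40% of the control risk
odds_ratio = (2 / 8) / (5 / 5)     # 0.25 -> odds reduced to 25% of the control odds
print(risk_ratio, odds_ratio)
```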
  • So, we have three effect measures available for dichotomous outcomes. How do you decide which effect measure to use for your review?
  • All the effect measures are equally valid, but there are some properties we might wish to consider when choosing which statistic to use: communication (everyone knows what it means, and it is useful for communicating treatment effects in practice); consistency of effect (ideally one number we can take away and apply in a range of populations, which may differ from the population or context in which the study was conducted); and mathematical properties (ease of manipulation). Your CRG may have a policy on which to use – in which case, take their advice.
  • People find risk a more intuitive concept, and often interpret odds ratios as if they were risk ratios. This may be a problem for frequent events, where the two statistics are not the same, and may lead readers to exaggerate the treatment effect. RR is easier, but as a relative measure it can mean a very big or very small change, depending on the underlying risk. For example, a 50% decrease in risk is very important when it takes the risk from a baseline of 80% down to 40%, but less so when going from a baseline risk of 4% to 2%. Absolute measures like RD are often most easily interpreted by clinicians and patients, as they give clear estimations of actual risk, and can be expressed as natural frequencies, such as 11 fewer people out of 100 experiencing a headache. They can also be easily converted to NNT.
  • The amount of variation in the effect measure (RR, OR, RD) across a set of studies conducted in different times, places and populations depends on which effect measure you choose. This is mainly due to changes in the underlying event rates across studies – for example, some populations may have high underlying death rates and some lower rates. A study of meta-analyses in the Cochrane Library shows that, in general, OR and RR are less variable across different populations – that is, relative measures like OR and RR tend to stay more stable when applied to different populations, which also makes it easier for readers to apply the results of the review to their own population. RD varies more widely from population to population, and must be interpreted in the context of the underlying risk of the population being considered. For example, a risk difference of 0.02 (or 2%) may represent a small, clinically insignificant change from a risk of 58% to 60%, or a proportionally much larger and potentially important change from 1% to 3%. The absolute value is likely to be different in every population, and different again in the reader’s population. Sometimes this can cause problems – for example, if the RD is -11%, and the reader’s population has an underlying risk of 5%, then their risk with the intervention would be calculated as -6%, a negative risk of the event occurring, which doesn’t make sense.
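A small illustrative sketch (the baseline risks here are the invented figures from the notes above, not data from the caffeine example) of why a relative measure travels across populations more gracefully than an absolute one:

```python
# Applying a fixed RR vs a fixed RD to populations with different baseline risks.

def risk_with_rr(baseline_risk, rr):
    return baseline_risk * rr

def risk_with_rd(baseline_risk, rd):
    return baseline_risk + rd

# A constant RR of 0.5 scales with the baseline risk:
print(risk_with_rr(0.80, 0.5), risk_with_rr(0.04, 0.5))      # 0.40 and 0.02

# A constant RD of -0.11 gives an impossible negative risk at a low baseline:
print(risk_with_rd(0.60, -0.11), risk_with_rd(0.05, -0.11))  # 0.49 and -0.06
```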
  • The first issue arising from the mathematical properties of the effect measures is the importance of defining your event. You have the option to define your event of interest as the desirable outcome or the undesirable outcome – e.g. people who made it to the bottom of the ski slope, or people who didn’t make it; people with headache, or people without headache. For OR and RD the effect measure is stable, but reversed, when you change the event: e.g. an OR of 0.5 for the bad event corresponds to an OR of 2.0 for the good event (in general, the OR for the good event = 1/OR for the bad event). For RD, an RD of 5 percentage points for the good event becomes -5 percentage points for the bad event (in general, the same size but opposite sign). RR is not so straightforward – changing the event changes the size and significance of your intervention effect. Think carefully about which outcome makes the most sense for you to measure and report. It’s also the case that, while RR and RD are limited in the maximum values they can take, OR is unbounded (that is, it can go up to infinity). There is no clear consensus on the importance of these two factors.
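The caffeine table can be used to see this behaviour directly; a sketch comparing ‘headache’ and ‘no headache’ as the defined event:

```python
# Effect of redefining the event (headache vs no headache) on RR, OR and RD.
a, b = 17, 51   # caffeine: headaches, no headaches
c, d = 9, 55    # decaf:    headaches, no headaches
n1, n2 = a + b, c + d

# 'Headache' as the event:
rr_bad = (a / n1) / (c / n2)   # ~1.78
or_bad = (a * d) / (b * c)     # ~2.04
rd_bad = a / n1 - c / n2       # ~0.11

# 'No headache' as the event:
rr_good = (b / n1) / (d / n2)  # ~0.87 -- not simply 1 / 1.78 (= 0.56), so RR is not stable
or_good = (b * c) / (a * d)    # ~0.49 = 1 / or_bad: OR simply inverts
rd_good = b / n1 - d / n2      # ~-0.11 = -rd_bad: RD simply changes sign

print(rr_good, 1 / rr_bad)     # differ
print(or_good, 1 / or_bad)     # equal
```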
  • As mentioned before, your CRG may have a preferred measure. If not, RR is generally a good default choice for most outcomes.
  • Going back to our data collection – what data do we need to collect from the included studies to analyse a dichotomous outcome?
  • If they’re available, the 4 numbers shaded are all you need to collect from your included studies – the number of events in each group, and the total number of people in each group. The Cochrane review software, RevMan, will calculate the risk ratio or odds ratio, as well as a confidence interval as a measure of uncertainty, based on these numbers. CLICK It’s not always clear whether the total number of people for whom the outcome was measured is the same as the total sample size at any given time point. Where possible, collect the actual number of people who were measured for each outcome, at each time point. If that information is not available in the study, that’s ok – you can use the total sample size for the group, or you can contact the author to request more information. The reason we try to use the number actually measured relates to the issue of intention-to-treat analysis. Wherever possible, we want complete data about the whole sample, but when people are missing, we need to make sure that we are aware of that. If we don’t have measurements on some people, and we use the total sample size as our N, then we are effectively imputing data – making an assumption that none of the missing people had a headache, or experienced the event you are measuring. This may or may not be a plausible assumption, so it’s important to be clear on the decisions you’re making, and get statistical advice if you want to know more about imputation.
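RevMan does these calculations internally. Purely as an illustration of what can be derived from the four shaded numbers, here is a sketch using the standard log odds ratio approximation for the confidence interval (this is not necessarily the exact method RevMan uses):

```python
import math

# Odds ratio and approximate 95% CI from a 2x2 table,
# using the standard error of the log odds ratio.
a, b = 17, 51   # intervention: events, non-events
c, d = 9, 55    # control:      events, non-events

odds_ratio = (a * d) / (b * c)                # ~2.04
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of ln(OR)
lower = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)  # ~0.83
upper = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)  # ~4.98
print(round(odds_ratio, 2), round(lower, 2), round(upper, 2))
```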
  • Those four numbers may be the most straightforward way to analyse dichotomous data, but it’s very important to remember that if your study does not report those four numbers, it doesn’t mean you can’t use the data reported in the study. There are other ways the outcome can be reported that are just as useful. For example, if the study doesn’t report the actual number of events in each group, but reports a percentage, this can easily be converted to an absolute number if you have the sample size for each group. Alternatively, your study might not provide separate data for the intervention and control groups at all, but might report the overall effect estimate – such as the OR or RR. You can use these effect estimates in a meta-analysis too, as long as they have been reported with a measure of variance, such as a standard error or a confidence interval. When we enter the usual numbers of events and people, RevMan uses these to calculate the OR or RR, along with the variance or uncertainty for each study. In effect, studies that report effect estimates have done these calculations for you. Because they’re in a different format, these studies need to be entered in RevMan using a different outcome type – not the usual dichotomous outcome type, but one called the generic inverse variance method. An explanation of the generic inverse variance method, and what to do if you have an overall effect estimate instead of separate data for each group, will be given later in a separate presentation on analysing non-standard data.
    1. Analysing dichotomous outcomes
    2. Steps of a systematic review
       • define the question
       • plan eligibility criteria
       • plan methods
       • search for studies
       • apply eligibility criteria
       • collect data
       • assess studies for risk of bias
       • analyse and present results
       • interpret results and draw conclusions
       • improve and update review
    3. Review level vs study level (diagram): each study’s outcome data is summarised as a study-level effect measure, and these effect measures are then combined into a review-level effect measure. Source: Jo McKenzie & Miranda Cumpston
    4. Session outline
       • expressing chance: risk and odds
       • effect measures for comparing groups
       • choosing an effect measure
       • collecting data for dichotomous outcomes
       See Chapters 7 & 9 of the Handbook
    5. What are dichotomous outcomes? When the outcome for every participant is one of two possibilities or events:
       • alive or dead
       • healed or not healed
       • pregnant or not pregnant
    6. What were the chances of that? Risk and odds:
       • express chance in numbers
       • for dichotomous outcomes, express the chance within a group of being in one of two states
       • have particular statistical meanings, and are calculated differently
    7. Risk: 24 people drank coffee, 6 developed a headache.
       risk of a headache = 6 headaches / 24 people who could have had one = 6/24 = ¼ = 0.25 = 25%
       risk = no. participants with event of interest / total no. participants
    8. Odds: 24 people drank coffee, 6 developed a headache.
       odds of a headache = 6 headaches / 18 without headaches = 6/18 = 1/3 = 0.33 = 1:3 (not usually expressed as a %)
       odds = no. participants with event of interest / no. participants without event of interest
    9. Do risks and odds differ much? Two examples from caffeine trials:
       • 5 people with ‘headaches’ out of 65; chance of having a headache: risk = 5/65 = 0.077, odds = 5/60 = 0.083
       • 130 people ‘still awake’ out of 165; chance of still being awake: risk = 130/165 = 0.79, odds = 130/35 = 3.71
    10. Expressing it in words
       • Risk: the risk of having a headache was one in four, or 25%
       • Odds: the odds of having a headache were one third of the odds of not having a headache; one person had a headache for every three that didn’t; the odds of having a headache were 3 to 1 against
    11. Overview
       • expressing chance: risk and odds
       • effect measures for comparing groups
       • choosing an effect measure
       • collecting data for dichotomous outcomes
    12. Comparing two groups

                      Headache   No headache   Total
        Caffeine          17           51         68
        Decaf              9           55         64
        Total             26          106        132

    13. Comparing two groups: effect measures
       • risk ratio (RR) (relative risk)
       • odds ratio (OR)
       • risk difference (RD) (absolute risk reduction)
       All estimates are uncertain, and should be presented with a confidence interval.
    14. Risk ratio (using the 2x2 table above)
       • risk of event with intervention = 17/68
       • risk of event with control = 9/64
       • risk ratio = intervention risk / control risk = (17/68) / (9/64) = 0.25 / 0.14 = 1.79
       Where risk ratio = 1, there is no difference between the groups.
    15. Expressing it in words
       • Risk ratio 1.79: the risk of having a headache with treatment was 179% of the risk in the control group; the intervention increased the risk of headache by 79%
       • or, for a reduction in risk – Risk ratio 0.79: the risk of having a headache with treatment was 79% of the risk in the control group; the intervention reduced the risk of headache by 21%
    16. Odds ratio (using the 2x2 table above)
       • odds of event with intervention = 17/51
       • odds of event with control = 9/55
       • odds ratio = intervention odds / control odds = (17/51) / (9/55) = 0.33 / 0.16 = 2.06
       Where odds ratio = 1, there is no difference between the groups.
    17. Expressing it in words
       • Odds ratio 2.06: the intervention doubled the odds of headache; the intervention increased the odds to 206% of the odds in the control group; the intervention increased the odds of headache by 106%
       • or, for a reduction in odds – Odds ratio 0.06: the intervention reduced the odds of headache to 6% of the odds in the control group; the intervention reduced the odds of headache by 94%
    18. Risk difference (using the 2x2 table above)
       • risk of event with intervention = 17/68
       • risk of event with control = 9/64
       • risk difference = risk with intervention − risk with control = 17/68 − 9/64 = 0.25 − 0.14 = 0.11
       When risk difference = 0, there is no difference between the groups.
    19. Expressing it in words
       • Risk difference 0.11: the intervention increased the risk of headache by 11 percentage points
       • or, for a reduction in risk – Risk difference -0.11: the intervention reduced the risk of headache by 11 percentage points
    20. Now it’s your turn! Calculate the risk ratio and the odds ratio for the effect of treatment on the chance of the event, and express the results in words.

                        Event   No event   Total
        Intervention        2          8      10
        Control             5          5      10
        Total               7         13      20

    21. The answers
       • Risk ratio
       • Odds ratio
    22. Overview
       • expressing chance: risk and odds
       • effect measures for comparing groups
       • choosing an effect measure
       • collecting data for dichotomous outcomes
    23. Choosing an effect measure
       • communication of effect: users must be able to understand and apply the result
       • consistency of effect: applicable to all populations and contexts
       • mathematical properties
       Your CRG may have a policy.
    24. Communication
       • OR is hard to understand, often misinterpreted
       • RR is easier, but relative – can mean a very big or very small change
       • RD is easiest – an absolute measure of the actual change in risk, easily converted to natural frequencies or NNT
    25. Consistency
       • event rates vary from study to study within a review
       • study of meta-analyses in The Cochrane Library: RR and OR are less variable across different populations; RD is more variable, dependent on baseline risk
       • readers will apply results to their own population, which may be different again
       Source: Deeks JJ. Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Statistics in Medicine 2002; 21:1575-1600.
    26. Mathematical properties
       • defining your event: good or bad, presence or absence? Think carefully and choose in advance; OR and RD are stable either way, RR varies
       • unbounded values: OR is the only unbounded effect measure
    27. Summary: a table rating OR, RR and RD on communication, consistency and mathematical properties (the tick marks are not legible in this transcript)
    28. Overview
       • expressing chance: risk and odds
       • effect measures for comparing groups
       • choosing an effect measure
       • collecting data for dichotomous outcomes
    29. Collecting data: four numbers are needed for the effect measure and variance – the number of events and the total number in each group (the shaded cells of the 2x2 table). Try to collect the actual number measured for each outcome, at each time point.
    30. Other data formats can also be used
       • percentages: the number of events can be calculated if the sample size is known
       • overall effect estimate (e.g. OR, RR): where results for each group are not reported separately; can be included in a meta-analysis using the generic inverse variance method; needs a measure of variance (e.g. SE, 95% CI)
    31. What to include in your protocol: the effect measure(s) to be used for dichotomous data.
    32. Take home message
       • risks and odds are two ways of expressing chance
       • risk ratio, odds ratio and risk difference compare chance between two groups
       • to enter dichotomous data you need the number of events and the total number in each group
    33. References
       • Higgins JPT, Deeks JJ (editors). Chapter 7: Selecting studies and collecting data. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from www.cochrane-handbook.org.
       • Deeks JJ, Higgins JPT, Altman DG (editors). Chapter 9: Analysing data and undertaking meta-analyses. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from www.cochrane-handbook.org.
       Acknowledgements: Compiled by Miranda Cumpston. Based on materials by Sally Hopewell, Phil Alderson, Jon Deeks, Deborah Caldwell, the Cochrane Statistical Methods Group and the Australasian Cochrane Centre. Approved by the Cochrane Methods Board.
