Small study effects and reporting biases


In some meta-analyses, we find that small studies have systematically different effects to the large studies. This can have many causes, but one is the possibility of reporting bias - that is, we might be missing small studies with negative effects because they are unpublished or less accessible than larger studies.

  • Once we have begun the analysis of our results and set up our meta-analyses, the next important step is to explore our results, and in particular the differences we observe between the results of our included studies. This exploration of differences will inform our understanding of the effects we’re observing, and how we should interpret them.
  • Small studies and larger studies differ in their vulnerability to random sampling error. Any time we conduct a study and estimate an effect, the study is affected by random error – there is a gap between our estimate and the true effect of the intervention. The estimates of multiple studies will be scattered around the true effect, sometimes overestimating and sometimes underestimating it.
  • We can usually assume that small studies will be less precise than large studies in estimating an intervention effect – we expect the results of larger studies to be closer to the true effect, and those of smaller studies to be more widely scattered. This assumption will hold true, for fixed-effect and random-effects meta-analyses, even in the presence of other kinds of heterogeneity such as differences in the intervention or population, except in one case: when the results of the small studies are consistently different to the larger studies, either more positive or more negative. Like other kinds of heterogeneity, it may be the case that differences in the results of the studies are somehow related to the size of the studies – what this means, and what the different explanations might be, we’ll explore further.
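This relationship between study size and scatter is easy to demonstrate with a small simulation. The sketch below uses invented numbers (a hypothetical true risk difference and event risk, not taken from any real review) to show that both small and large trials centre on the true effect, but the small trials scatter far more widely:

```python
import numpy as np

rng = np.random.default_rng(42)
true_effect = 0.10          # hypothetical true risk difference
p_control = 0.30            # hypothetical control-group event risk

def simulate_estimates(n_per_arm, n_trials=1000):
    """Simulate risk-difference estimates from trials of a given size."""
    events_t = rng.binomial(n_per_arm, p_control + true_effect, n_trials)
    events_c = rng.binomial(n_per_arm, p_control, n_trials)
    return events_t / n_per_arm - events_c / n_per_arm

small = simulate_estimates(n_per_arm=25)
large = simulate_estimates(n_per_arm=1000)

# Both sets of estimates centre on the true effect,
# but the small trials scatter far more widely around it.
print(f"small trials: mean={small.mean():.3f}, sd={small.std():.3f}")
print(f"large trials: mean={large.mean():.3f}, sd={large.std():.3f}")
```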
  • The first thing we have to do is identify whether or not small-study effects are at work in our review. There are several methods available to test whether the results of your studies are associated with the study size: we can use funnel plots, statistical tests and sensitivity analysis. We’ll explain the basics of each of these, remembering that you may find small-study effects for some outcomes and not others. These methods are tricky, though, and the best way to proceed is to get advice from a statistician who can assist you in planning your steps and interpreting your findings.
  • Funnel plots are a tool that takes the results of a meta-analysis and plots the result of each individual study against a measure of the study’s size (usually a measure of precision such as the standard error). If the study’s size is not associated with the results, then the plot should resemble an inverted funnel – larger studies will be at the top in the centre of the plot, close to the meta-analysed estimate of effect, and smaller studies will progressively scatter more widely either side towards the bottom of the plot. RevMan can generate these plots for you, but it should be emphasised that funnel plots will not give meaningful results if you have fewer than 10 studies in your meta-analysis, or if they are all the same size.
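The mechanics of such a plot can be sketched as follows. This is an illustration only – the odds ratios and standard errors are invented, and a real review would use RevMan rather than hand-built plots. It shows the three conventions described in these notes: a log scale for the ratio measure, a reversed standard-error axis so that large studies sit at the top, and a vertical line at the pooled (here inverse-variance fixed-effect) estimate:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")                     # headless backend for scripts
import matplotlib.pyplot as plt

# Hypothetical odds ratios and their standard errors (on the log scale)
odds_ratios = np.array([0.95, 1.02, 0.80, 1.30, 0.70, 1.45, 0.60])
ses = np.array([0.05, 0.08, 0.20, 0.25, 0.35, 0.40, 0.50])

log_ors = np.log(odds_ratios)             # ratio measures are plotted on a log scale
pooled = np.average(log_ors, weights=1 / ses**2)   # inverse-variance fixed-effect estimate

fig, ax = plt.subplots()
ax.scatter(log_ors, ses)
ax.axvline(pooled, linestyle="--")        # centre line = pooled effect, not the line of no effect
ax.invert_yaxis()                         # low SE (large studies) at the top
ax.set_xlabel("log odds ratio")
ax.set_ylabel("standard error")
fig.savefig("funnel.png")
```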
  • This is what a funnel plot looks like. You can see that the standard error has been used as the measure of size. The scale is reversed so that studies with low SE (i.e. large studies) will be at the top of the plot, and studies with high SE (i.e. small studies) will be at the bottom. So, the studies at the point of the funnel will be the large studies, and the smaller studies gradually scatter wider and wider towards the bottom. Note that the important vertical line on this plot is not the line of no effect, in this case 1, as it would be on a forest plot. The important vertical line that we want to be in the centre of the triangle is the overall effect estimate from the meta-analysis. For ratio measures, just like a forest plot, a logarithmic scale is used for the measure of treatment effect so that the scale is symmetrical.
  • On our hypothetical plot, this is what it might look like if we have small-study effects. You can see that we have large studies at the top, close to the overall effect estimate, but we don’t have a nice even scatter of smaller studies either side – our smaller studies appear to be consistently estimating lower odds ratios than the larger studies. This is called funnel plot asymmetry, and indicates that we have some kind of small-study effect at work. This might be because these small studies were not published, and couldn’t be found for the review.
  • Alternatively, it might be that the small studies are consistently finding different results to the large studies, and so there are no studies with results up the other end of the scale.
  • This is a more realistic picture of what a funnel plot might look like for your review – if you are lucky enough to have so many included studies. ASK: Is this plot symmetrical? Yes. Real funnel plots will rarely be perfect triangles, but this one appears fairly symmetrical.
  • ASK: How about this example - is this plot symmetrical? No – it appears that there are more studies on the left side of the effect estimate. It can be difficult to judge from a visual inspection like this how much we should be concerned about the missing studies – is it likely to be reporting bias, or is it one of the other reasons?
  • It’s important not to jump to conclusions about the causes of funnel plot asymmetry and small-study effects. There are many different reasons why asymmetry might occur, and you will need to put some thought into distinguishing between them. Knowing your intervention, and the circumstances in which it was implemented in different studies, can help identify causes of funnel plot asymmetry. It’s also important to remember that your review may suffer from some of these problems even if the funnel plots are symmetrical and the tests negative, so you will always need to explore and understand your results, and consider each of these issues for yourself. The first reason you might find asymmetry is chance – it may just be random chance that the small studies found different effects, particularly in reviews with few studies, which applies to most Cochrane reviews. Secondly, it may be artefactual – some statistics are naturally correlated with their standard errors, for example odds ratios, and in these cases some funnel plot asymmetry is to be expected. Thirdly, it may be due to clinical diversity – heterogeneity in your study populations and interventions. For example, the smaller studies may draw on different underlying populations that obtain different benefits from the intervention: early, small, exploratory studies may have been conducted in high-risk populations, who might receive more benefit from the intervention, creating a correlation between the effect and the size of the study. This can also apply to the delivery of the intervention – e.g. if larger studies deliver the intervention with less fidelity, monitoring or intensity than smaller studies, we might see different effects. You may already be planning subgroup analyses that can clarify these differences in effects. In some cases, methodological diversity may be at work.
Small studies may be consistently overestimating the effect due to bias, e.g. poor allocation concealment or lack of blinding. Concern about this is the reason we assess the risk of bias in our included studies in the first place, and the risk of bias assessment in your review may indicate whether these factors are impacting on your results. If this is occurring in your review, and you have not done so already, you may wish to consider excluding studies at high risk of bias from your analysis. Finally, asymmetry may be caused by reporting biases, including publication bias – we’ll come to that later.
  • Some enhancements to funnel plots can be helpful in this regard – such as these contour-enhanced funnel plots. Unfortunately these enhanced plots are not currently available in RevMan. The shaded areas on these plots indicate P values – that is, studies falling outside the white area in the middle indicate significant results, at the level of P < 0.1, P < 0.05, and P < 0.01 respectively as we move away from the middle. If studies appear to be missing in the middle of the plot – indicating that the results of the missing studies would not be statistically significant, then this is consistent with our understanding of reporting bias. That is, non-significant trials are less likely to be found, although we should still consider the other possible explanations. The plot on the left is an example of this kind of case. However, if the asymmetry suggests that the missing studies would be statistically significant, especially if they would be significant in the direction considered desirable by the authors, then it’s less likely that studies have not been reported or published for reasons of reporting bias. Looking at the plot on the right, the asymmetry suggests missing studies over to the left side, crossing into the area of statistical significance. This is not consistent with our understanding of reporting bias – it would mean that the non-significant studies had been published, and not the significant ones. It’s much more likely that there is some other reason for the asymmetry. If there are no statistically significant studies at all, then it’s very unlikely that reporting bias is the cause of the asymmetry.
  • Here at the top left is an example of a funnel plot in which there is overall asymmetry. On these plots, the dotted triangle lines indicate the area within which we would expect to find 95% of the data in the absence of bias and heterogeneity. As it turns out, the asymmetry in this plot is due entirely to differences in the effects between subgroups. Separate funnel plots for each of the three subgroups show that none of them is asymmetrical, but there are differences in the effect of the intervention in each subgroup. When we look at all three subgroups overlaid, it looks asymmetrical, but in fact what we have is heterogeneity arising from other factors.
  • Visual inspection of funnel plots is not always easy or reliable. Aside from funnel plots, it’s also possible to conduct statistical tests to determine whether there is a greater association between study size and effect than we would expect to occur by random chance. There is a range of different tests available, each with advantages and disadvantages. Three of the available tests are recommended – check the Cochrane Handbook section 10.4.3, and you should definitely get statistical advice before deciding to use any of them. If you are assessing small-study effects, you should always include a visual assessment of the funnel plot as well, and like funnel plots, these statistical tests are usually only useful if you have at least 10 studies.
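The general idea behind these tests can be sketched with a bare-bones version of Egger's regression test. This is a simplified, illustrative implementation on invented data – in a real review you would use established software and statistical advice, as the notes say. The test regresses the standardized effect (effect/SE) on precision (1/SE); an intercept far from zero suggests funnel plot asymmetry:

```python
import numpy as np
from scipy import stats

def egger_test(effects, ses):
    """Simplified Egger test: regress effect/SE on 1/SE.
    An intercept far from zero suggests funnel plot asymmetry."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    y = effects / ses                          # standardized effects
    X = np.column_stack([np.ones_like(ses), 1 / ses])
    coef, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
    df = len(y) - 2
    sigma2 = rss[0] / df                       # residual variance
    se_int = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
    t_stat = coef[0] / se_int
    p = 2 * stats.t.sf(abs(t_stat), df)
    return coef[0], p                          # intercept and two-sided p-value

# Invented log odds ratios for ten studies, most precise first
ses = np.array([0.08, 0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.40, 0.45, 0.50])
symmetric = np.array([0.02, -0.03, 0.05, -0.08, 0.10, -0.12, 0.14, -0.18, 0.20, -0.22])
biased = symmetric - 2.0 * ses   # small studies pushed towards 'benefit'

print(egger_test(symmetric, ses))  # intercept near zero, large p
print(egger_test(biased, ses))     # clearly negative intercept, small p
```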
  • If you suspect that you have identified a small-study effect, you may wish to know how large its impact might be on your results. We have already come across a useful technique for testing this in our separate presentation on heterogeneity: where small studies are systematically different, comparing the fixed- and random-effects meta-analyses will give you a sensitivity analysis of the potential impact of the small studies. If the random-effects model shows a different effect, consider whether it is reasonable to conclude that the intervention was more effective in smaller studies, with reference to possible clinical and methodological diversity between the studies. This is not a perfect test – it is possible for small-study effects to bias the results where there is no heterogeneity, and where fixed-effect and random-effects models give the same result. Selection models (e.g. ‘trim and fill’, other more sophisticated models) can be used, but require expertise and careful application. Do not attempt to use these without statistical advice.
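The calculation behind this comparison can be sketched as follows, using inverse-variance fixed-effect pooling and the DerSimonian-Laird random-effects estimate. The data are invented, loosely mimicking the pattern described (a few precise null trials, several small positive ones); a real review would use RevMan or a statistics package rather than hand-rolled code:

```python
import numpy as np

def pool(log_effects, ses):
    """Return (fixed-effect, random-effects) pooled log effects."""
    w = 1 / ses**2
    fixed = np.sum(w * log_effects) / np.sum(w)
    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = np.sum(w * (log_effects - fixed) ** 2)
    df = len(log_effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)
    w_re = 1 / (ses**2 + tau2)                 # random-effects weights
    random = np.sum(w_re * log_effects) / np.sum(w_re)
    return fixed, random

# Hypothetical log odds ratios: large trials near zero, small trials negative
log_ors = np.array([0.00, -0.02, 0.01, -0.90, -1.00, -0.85, -1.10, -0.95])
ses     = np.array([0.03,  0.04, 0.05,  0.30,  0.35,  0.28,  0.40,  0.33])

fixed, random = pool(log_ors, ses)
# The random-effects estimate gives more weight to the small trials,
# so it shifts away from the fixed-effect estimate when small-study
# effects are present.
print(f"fixed-effect OR:   {np.exp(fixed):.2f}")
print(f"random-effects OR: {np.exp(random):.2f}")
```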
  • Here we have an example where the size of the studies in the review is correlated with their results – the same example we looked at in the separate presentation on Heterogeneity. This review is looking at intravenous magnesium for acute myocardial infarction, and measuring mortality. As you can see, there are a few large studies, shown by the large squares, lined up closely to the line of no effect. There are then a lot of small studies, and they are all over to the left of the plot, showing a stronger reduction in mortality. If the small studies were not systematically different to the larger studies, we would expect the fixed-effect diamond to sit neatly inside the diamond for the random-effects model. In this case, we can see that this doesn’t happen. The fixed-effect result is right on the line of no effect – between 0.94 and 1.04. In comparison, the random-effects result shifts to the left, with a CI of between 0.53 and 0.82 – they don’t overlap at all. The random-effects model, by giving more weight to the smaller studies, has highlighted a systematic difference. Remember that this kind of sensitivity analysis can highlight the presence of small-study effects, but it doesn’t tell you why this has happened. It is still your job to consider the possible explanations.
  • Dissemination of research results falls along a continuum. Unavailable: e.g. not published, only available through informal circulation from the author. Available in principle: e.g. published as a thesis, a conference abstract, or in a journal with smaller circulation and impact, perhaps not published in English or not indexed by the major databases – only about half of the abstracts presented at conferences are ever published in full (Scherer 2007). Easily available: e.g. published in a journal indexed in Medline. Actively disseminated: e.g. trials distributed by a drug rep or some other interested organisation. Only a proportion of studies will ever be published in a way that makes them easy to access and include in your review.
  • We know that these differences in dissemination aren’t random. They are influenced by the results of the studies. Studies with more positive results and significant findings are more likely to be published and widely disseminated, which in turn makes it much more likely that they will be incorporated into your systematic review. Small studies are the most vulnerable to this problem: large studies are likely to be published regardless of their findings. So, if we find an excess of small, positive studies in a review, one of the possible explanations is that we have failed to find published records of the balancing neutral or negative studies. If we can only find and include the positive, significant findings in our review, the risk is that we misrepresent the true effect of the intervention. For our review to be accurate and reliable, we need to make sure we include all the evidence, including negative and statistically non-significant results.
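A small simulation makes this misrepresentation concrete. Here (all numbers invented) the true log odds ratio is exactly zero, but small trials are only "published" when they show a statistically significant benefit; pooling only the published trials then produces a spurious effect:

```python
import numpy as np

rng = np.random.default_rng(7)

# 200 hypothetical trials of an intervention with NO true effect
ses = rng.uniform(0.1, 0.5, 200)        # standard errors of the log odds ratios
effects = rng.normal(0.0, ses)          # estimates scatter around zero

def fixed_effect(e, s):
    """Inverse-variance pooled estimate."""
    w = 1 / s**2
    return np.sum(w * e) / np.sum(w)

# Suppose only trials with a statistically significant 'benefit' get published
z = effects / ses
published = z < -1.645                  # one-sided p < 0.05 in favour of treatment

all_pooled = fixed_effect(effects, ses)
pub_pooled = fixed_effect(effects[published], ses[published])

print(f"pooled log OR, all trials:       {all_pooled:.3f}")   # close to zero
print(f"pooled log OR, 'published' only: {pub_pooled:.3f}")   # spuriously negative
```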
  • There is evidence to demonstrate this effect at work. In this study, Stern & Simes looked at a cohort of clinical studies to see how long it took for the results to be published. The answer varied strongly depending on the results of the study. The red line shows studies with significant results – they were the fastest to be published, and after 10 years less than 20% remained unpublished. Studies with non-significant results, but with a discernible trend, were slower to be published over time, with nearly half remaining unpublished after 20 years. The slowest to be published were studies with a null result – that is, the intervention had no discernible effect. Almost none of these studies were published within 5 years, and after 10 years more than 70% remained unpublished.
  • Reporting biases can occur at many stages of the dissemination process.
File drawer problem (Rosenthal 1979): journals are filled with the 5% of studies that show Type I errors, while file drawers are filled with the 95% of studies that show non-significant results. Authors may not spend the time to write up and submit manuscripts on disappointing results; this may be especially true of industry-funded research.
Publication bias: once submitted, papers may be less likely to interest journal editors, or may receive less favourable peer review, leading to less chance of publication.
Time lag bias: another Cochrane methodology review (Hopewell 2007a) has shown that non-significant results may take 2 to 3 years longer to be published than studies with significant results (Royal Prince Alfred Ethics Committees, Stern and Simes 1997; HIV multicentre studies, Ioannidis 1998). This means that at the point a systematic review is written, the literature may be dominated by positive studies, with years going by before the balancing studies are published.
Duplicate/multiple publication bias: it is relatively common for trials to be published multiple times, and it can be difficult to determine when this has occurred (publications may have different authors, different population sizes, etc.). Positive studies are more likely to be published more than once, which means they are more likely to be located and included in the review. If multiple publications are included without being recognised, participants will be double-counted and the treatment effect will be exaggerated even further.
Language bias: there is some evidence (although not conclusive) that positive results are more likely to be submitted to and published in English-language journals. This highlights the importance of not limiting your search to papers published in English, or to databases that largely index the English-language literature.
Location bias: studies with positive findings are more likely to be accepted for publication in high-impact, high-distribution journals, and importantly, in the limited proportion of journals that are indexed by the major databases. Studies published in non-indexed journals are harder to find for your review.
Selective outcome reporting: within studies that do make it to publication, outcome measures showing positive or significant findings are more likely to be reported, while negative or non-significant outcomes may be left out or their reporting altered. As discussed earlier, this issue is addressed as part of the ‘Risk of bias’ assessment of included studies.
Citation bias: studies with positive findings are more likely to be cited by other papers, which again makes them easier to find if citations are used as part of the search strategy for the review. And, since authors tend to cite papers that agree with their findings, additional citations reinforce the existing bias.
  • In determining whether reporting bias is impacting on your review, perhaps causing funnel plot asymmetry or perhaps not, you will need to consider the context of your intervention, and its susceptibility to different biases, for example through conflicts of interest. Sometimes there is real-world information that can help us work out whether publication bias is likely – it doesn’t always depend on statistical tests and small-study effects. This example comes from a Cochrane review of alpha blockers for hypertension. A total of 10 trials were found in the review, although they measured several different doses of the drugs, so they could not all be pooled together, and there weren’t enough trials in any one meta-analysis to generate a funnel plot. Nonetheless, these drugs were known to be approved for use by regulators (e.g. the FDA in the US), so we know there had to have been trials completed and submitted for that approval to be successfully given. However, as so few trials were available – not enough to support all the doses that were approved, and in fact none to support some doses – we can conclude that there are missing trials that the drug companies have not made public. This might lead us to conclude that publication bias is likely, although it does not give us a clear idea of how great its impact might be in this particular case.
  • Here is another example, identified by Moreno and colleagues in a BMJ paper looking at a set of trials on anti-depressants, and comparing the published literature to the set of trials submitted for FDA approval. On the left is a funnel plot based on all the FDA data. [CLICK] Here on the right, we have the results of all the publications that could be found reporting the same studies. You can see that we have many fewer studies (50 studies instead of 73), and it’s clear from this plot that the studies that are missing are those that reported non-significant results. Not all cases will be so clear-cut, of course. The role of companies with a strong financial interest in the outcomes of the research is always a powerful conflict of interest that should be considered. Trial registration and standardising the reporting of outcomes in a field can be reassuring about reporting bias, as can the inclusion of data from pharmaceutical regulators in the review.
  • The effect of reporting biases, while important, may be smaller than that of risks of bias related to study design, such as lack of blinding and inadequate allocation concealment. This is another Cochrane methodology review, assessing the impact on the results of meta-analyses of including grey literature, finding between a 4% and 28% increase in odds ratios. Identifying grey literature will not always make a dramatic difference to your results, and may bring its own issues: the studies may be at higher risk of bias (which we assess as we do for all studies), and it may be that even the grey literature we find is more likely to be positive than grey literature overall, e.g. because authors are more likely to respond to requests for unpublished data. It’s important to keep this in perspective.
  • So, in practice, what should you do in your review? In relation to reporting bias in particular, the best thing we can do to prevent it is to do our best to find all the studies that have been conducted, by running a comprehensive search, attempting to find unpublished and grey literature, contacting authors in the field, etc. Trials registries are an important initiative – as they grow internationally, and more journals require registration before publication, registries have the potential to make an important difference in publication bias (although there are still limitations on the completeness of the data in the registered trials, and the application of requirements by journals for registration). Still, we may not be completely successful in preventing reporting bias. Thinking more broadly about small-study effects, we can use the tools available for diagnosis. Funnel plots and statistical tests can help us identify small-study effects, and sensitivity analyses, such as comparing fixed- and random-effects meta-analyses, can help us measure how great the impact of the small studies might be. Even where we do identify small-study effects, we have to remember the range of possible causes of these effects. If we do explore those effects and conclude that reporting bias is the most likely cause, there is no cure. Nonetheless, authors will be expected to comment on both of these issues – small-study effects and the possibility of reporting biases - in their review.
  • Bringing all this back to your protocol – in the Methods section of the review, under the collective heading ‘Data and analysis’, there is a specific subheading on ‘Assessment of reporting biases’. In this section you should describe how you plan to consider reporting biases in your review, including the optional use of specific methods such as funnel plots and statistical tests, but remembering that small-study effects have many possible causes.

    1. Small-study effects and reporting biases
    2. Steps of a Cochrane review
       • define the question
       • plan eligibility criteria
       • plan methods
       • search for studies
       • apply eligibility criteria
       • collect data
       • assess studies for risk of bias
       • analyse and present results
       • interpret results and draw conclusions
       • improve and update review
    3. Outline
       • identifying small-study effects
       • understanding reporting biases
       See Chapter 10 of the Handbook
    4. Reminder: random error
       • where a group of studies estimates an effect, each study is affected by random error
       • results will be scattered around the true effect – some lower, some higher
       Source: Julian Higgins
    5. Random error and small studies
       • random error assumes:
         – small studies are less precise than large studies
         – expected to be more widely scattered around the mean
       • small-study effects:
         – when small studies are consistently more positive, or negative, than larger studies
         – one possible type of heterogeneity
         – may be many explanations
    6. Identifying small-study effects
       • assess each outcome separately
       • methods available:
         – funnel plots
         – statistical tests
         – sensitivity analysis
       • proceed with caution and get statistical advice
    7. Funnel plots
       • plot effect size against study size
         – study size usually indicated by a measure like standard error
       • studies will be scattered around the effect estimate
         – larger studies at the top, smaller studies further down
         – small studies expected to scatter more widely
       • a symmetrical plot will look like an inverted funnel or triangle
       • RevMan can generate funnel plots
       • only appropriate with ≥ 10 studies of varying size
    8. Symmetrical funnel plot [funnel plot: effect (log scale) vs standard error (reversed axis)] Source: Matthias Egger & Jonathan Sterne
    9. Asymmetrical funnel plot – unpublished studies [funnel plot with a gap where the unpublished studies would sit] Source: Matthias Egger & Jonathan Sterne
    10. Asymmetrical funnel plot – small studies all finding positive effects [funnel plot] Source: Matthias Egger & Jonathan Sterne
    11. Colloids vs crystalloids for fluid resuscitation (outcome: death) Adapted from Perel P, Roberts I. Colloids versus crystalloids for fluid resuscitation in critically ill patients. Cochrane Database of Systematic Reviews 2011, Issue 3.
    12. Magnesium for myocardial infarction Adapted from Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2007, Issue 2.
    13. Reasons for funnel plot asymmetry
       • chance
       • artefact
         – some statistics are correlated with SE, e.g. OR
       • clinical diversity
         – populations different in small studies
         – implementation different in small studies
       • methodological diversity
         – greater risk of bias in small studies
       • reporting biases
       Source: Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629.
    14. Contour-enhanced funnel plots Source: Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011; 342: d4002. doi: 10.1136/bmj.d4002
    15. Asymmetry due to heterogeneity Source: Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011; 342: d4002. doi: 10.1136/bmj.d4002
    16. Tests for funnel plot asymmetry
       • is there a greater association between study size and intervention effect than would be expected by chance?
       • three tests are recommended
         – generally have low power to rule out reporting biases
       • also use a visual inspection of the funnel plot
         – only appropriate if you have ≥ 10 studies of varying size
       See Section 10.4.3 of the Handbook
    17. Sensitivity analysis
       • if you find a small-study effect, how large is the impact on your results?
       • consult a statistician before proceeding
         – if heterogeneity is present (I² > 0), compare fixed-effect and random-effects estimates: is there a difference? if so, is there a reason why the intervention is more effective in smaller studies?
         – selection models and other methods
    18. Sensitivity analysis Adapted from Li J, Zhang Q, Zhang M, Egger M. Intravenous magnesium for acute myocardial infarction. Cochrane Database of Systematic Reviews 2007, Issue 2.
    19. Outline
       • identifying small-study effects
       • understanding reporting biases
    20. The dissemination of evidence [figure: continuum of availability, e.g. available in principle (thesis, obscure journal)] Source: Matthias Egger
    21. Reporting biases
       • dissemination of research findings is influenced by the nature and direction of results
       • statistically significant, ‘positive’ results more likely to be published…
       • …therefore more likely to be included in your review
         – leads to exaggerated effects
         – large studies likely to be published anyway, so small studies most likely to be affected
       • non-significant results are as important to your review as significant results
    22. Evidence for reporting bias [figure: proportion of studies not published vs years since conducted, for significant, non-significant trend and null results] Source: Stern JM, Simes RJ. Publication bias: evidence of delayed publication in a cohort study of clinical research projects. BMJ 1997; 315: 640-645.
    23. Positive studies are more likely to be
       • submitted for publication…
       • …and accepted (publication bias)
       • …quickly (time lag bias)
       • …as more than one paper (multiple publication bias)
       • …in English (language bias)
       • …in high-impact, indexed journals (location bias)
       • …including positive outcomes (selective outcome reporting)
       • …and cited by others (citation bias)
       [diagram: conceived, performed, submitted, published, cited] Source: Julian Higgins
    24. Example: alpha blockers
       • 10 trials identified, measuring several different doses
       • trials must have been completed and provided to the regulators in order for the drug to be approved
         – few trials found
         – many of the doses approved by regulators did not have sufficient evidence to support their use
         – for some doses there were no published data
       Source: Nancy Santesso and Holger Schünemann. Based on Heran BS, Galm BP, Wright JM. Blood pressure lowering efficacy of alpha blockers for primary hypertension. Cochrane Database of Systematic Reviews 2009, Issue 4.
    25. Example: antidepressants Source: Moreno SG, Sutton AJ, et al. Novel methods to deal with publication biases: secondary analysis of antidepressant trials in the FDA trial registry database and related journal publications. BMJ 2009; 339.
    26. Impact of publication bias Source: Hopewell S, McDonald S, Clarke MJ, Egger M. Grey literature in meta-analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews 2007, Issue 2.
    27. What does this mean for my review?
       • prevention
         – a comprehensive search of multiple sources
         – grey literature, non-English literature, handsearching
         – trials registries
       • diagnosis
         – consider looking for small-study effects
         – sensitivity analysis to identify possible impact
         – publication bias is not the only explanation
       • there is no cure
         – explore any observed small-study effects
         – you will also be expected to comment on the likelihood of reporting biases
    28. What to include in your protocol
       • assessment of reporting biases
       • optional use of funnel plots & statistical tests for asymmetry
    29. Take home message
       • look for small-study effects in your review
       • be aware of the many possible causes
       • consider the possible impact of reporting biases on your review
       • get statistical advice if uncertain
    30. References
       • Sterne JAC, Egger M, Moher D (editors). Chapter 10: Addressing reporting biases. In: Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available from .
       • Egger M et al. Bias in meta-analysis detected by a simple, graphical test. BMJ 1997; 315: 629.
       • Sterne JAC, Sutton AJ, Ioannidis JPA et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ 2011; 342: d4002. doi: 10.1136/bmj.d4002
       Acknowledgements
       • Compiled by Miranda Cumpston
       • Based on materials by Jonathan Sterne, Matthias Egger, Julian Higgins, David Moher, Nancy Santesso, Holger Schünemann, the Cochrane Bias Methods Group, the Australasian Cochrane Centre and the Cochrane Applicability and Recommendations Methods Group
       • Approved by the Cochrane Methods Board