Systematic Reviews: the process, quantitative, qualitative and mixed methods reviews. Edoardo Aromataris
Health Libraries Australia Professional Development Day 2012 [Keynote address - part 2]

  • Why Systematic Reviews? The Joanna Briggs Institute has as its central focus the appropriateness, meaningfulness and effectiveness of health care. We concentrate on health outcomes from the client, community, clinical and economic perspectives. Guiding Principles: We regard the results of well designed research studies grounded in any methodological position as providing more rigorous evidence than anecdotes or personal opinion; the approach to evaluating evidence will be tailored to the specific focus of the review; and the development and peer review of a protocol is fundamental to The Joanna Briggs Institute approach to reviews. JBI follows the process developed by the Cochrane Collaboration and incorporates the dissemination approach developed by the NHS Centre for Reviews and Dissemination at the University of York.
  • This slide shows a quote, almost a complaint actually, from Archibald (Archie) Cochrane, from whose work the evidence-based practice movement originated. When discussing this slide describe Archie and then expand on his work. Archie Cochrane was a Welsh general practitioner and spent some time as a prisoner of war during World War 2. He drew attention to the lack of information about the effects of health care and suggested “(it) is surely a great criticism of our profession (medicine) that we have not organised a critical summary by specialty or sub-specialty, adapted periodically, of all relevant randomised controlled trials”. One example of the need for these critical summaries is the use of streptokinase in myocardial infarction. Streptokinase is a drug used to thin the blood and dissolve clots. The benefits of this drug were demonstrated in studies conducted in the 1970s; however, more and more studies were conducted into the 1990s. If a register of these trials had been available, the results could have been pooled in a summary or systematic review and the benefits of this therapy confirmed, thus saving many dollars and potentially lives. This is where evidence-based practice started, with RCTs for medical practice/therapies. However, JBI advocates a wider, inclusive definition of what evidence is, asking broader questions rather than focusing solely on effectiveness.
  • Explain history of systematic review methodology
  • International development coordinating group?
  • The literature review, or the review of the literature, is ‘an organized critique of the important scholarly literature that supports a study’, and a key step in the research process. The term scholarly literature refers to published and unpublished data-based (research) reports, as well as conceptual (theoretical) literature. All systematic reviews are a type of literature review; however, not all literature reviews are systematic reviews.
  • Given the explosion of knowledge and access to a diverse range of knowledge sources over the past decade, it is now almost impossible for individual clinicians or clinical teams to stay abreast of knowledge in a given field. Systematic reviews, conducted by review groups with specialised skills, set out to retrieve international evidence and to translate the results of this search into evidence summaries suitable for the transfer of knowledge into practice settings. A systematic review is also referred to as a “research synthesis”. Discuss these points as per slide.
  • The systematic review is a form of research using secondary data, and therefore the process must be as rigorous as for a primary research study. Here “minimise…arbitrariness” refers to the attempt to reduce the impact of individual, personal or unsupported judgment on the outcome of the research and the conclusions reached, similar to minimising bias. The process used should be transparent and reproducible.
  • Bias is a term used to describe a tendency or preference towards a particular perspective, ideology or result, when the tendency interferes with the ability to be impartial, unprejudiced, or objective. In other words, bias is generally seen as a 'one-sided' perspective. Thus in order to produce statements about evidence that are neutral in perspective, and to uncover all the evidence, explicit and exhaustive reporting must be used so that the systematic review can minimise bias.
  • Meta-analysis is a term you may be familiar with and we will discuss more in Module 3. Meta-analysis uses various statistical methods to combine the results of multiple different studies to produce a stronger conclusion than can be derived from any one of the studies on its own. It is a ‘statistical analysis of results from separate studies, examining sources of differences in results among studies, and leading to a quantitative summary of the results if the results are judged sufficiently similar to support such synthesis’ (Porta, 2008:154). It is an integral part of the systematic review, however it does not represent all steps in the process. Meta-analysis may appear in a JBI review addressing questions related to Effectiveness, as with a Cochrane Review, and also potentially questions related to Feasibility and Appropriateness. Porta M. (editor). (2008). A Dictionary of Epidemiology. Fifth edition. Oxford: Oxford University Press.
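The inverse-variance weighting at the heart of a fixed-effect meta-analysis can be sketched in a few lines. This is an illustrative calculation only, not the JBI or Cochrane analysis software; the effect sizes and standard errors below are invented purely for demonstration.

```python
# Illustrative fixed-effect (inverse-variance) meta-analysis arithmetic.
# NOTE: the study effects and standard errors are fabricated examples.

def pool_fixed_effect(effects, std_errors):
    """Pool study effect estimates, weighting each by 1/variance."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1.0 / sum(weights)) ** 0.5
    return pooled, pooled_se

effects = [0.42, 0.38, 0.55]      # hypothetical log odds ratios
std_errors = [0.12, 0.20, 0.15]   # hypothetical standard errors
pooled, se = pool_fixed_effect(effects, std_errors)
print(round(pooled, 3), round(se, 3))  # → 0.454 0.085
```

Larger, more precise studies (smaller standard errors) dominate the pooled estimate, which is why a synthesis can support a stronger conclusion than any single study alone.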
  • Meta-synthesis is the pooling of textual information. The term ‘meta-synthesis’ refers to a ‘higher form of synthesis’, or could be termed the ‘science of summing up’. It is the process of combining the findings of individual qualitative studies to create summary statements that authentically describe the meaning of these themes (Pearson et al 2007). Similar to meta-analysis, meta-synthesis is an integral part of the systematic review, however does not represent all steps in the process. Meta-synthesis may appear in a JBI review addressing questions related to meaningfulness and experiences, and also potentially questions related to Feasibility and Appropriateness taken from a different perspective such as opinion, as opposed to numbers. Pearson A, Field J, Jordan Z. (2007) Evidence-Based Clinical Practice in Nursing and Health Care: Assimilating Research, Experience and Expertise. Oxford: Blackwell Publishing.
  • Qualitative research, as seen in these examples, can be used in a variety of contexts to inform and improve evidence-based practice, thus it is important to develop skills in synthesising this type of evidence for health services. Qualitative research is not an alternative or “instead” approach to quantitative research, but can work well in conjunction with quantitative study to provide answers to different questions. It can play a significant role in: understanding how individuals and communities perceive health, manage their own health and make decisions related to health service usage; increasing our understanding of the culture of communities and of health units in relation to implementing change and overcoming barriers to the use of new knowledge and techniques; informing planners and providers in relation to how service users experience health and illness and the health system; and evaluating components and activities of health services that cannot be measured in quantitative outcomes (such as health promotion and community development).
  • Discuss as per slide. There has been a concerted effort to develop a systematic review process that is sensitive to the specific characteristics of the various types of evidence used in various reviews. The JBI software, which we will now introduce (next slide), enables reviewers to do this by providing templates and tools for searching, appraisal, extraction and synthesis of data.
  • A mixed method or comprehensive systematic review gathers both forms of evidence, that is qualitative and quantitative evidence regarding feasibility, appropriateness, meaningfulness, and effectiveness. Separate analyses and synthesis are performed on the corresponding data.
  • Discuss as per slide. Systematic reviews aim to provide comprehensive and unbiased summaries of the research on a single topic, bringing together multiple individual studies in a single document. As part of the systematic review process, individual research studies are subject to critical appraisal. Even when research evidence is limited or non-existent, systematic reviews summarise the best available evidence on a specific topic, providing the best evidence for clinical decision-making as well as identifying future research needs (JBI, 2001:1). Joanna Briggs Institute, The (2001). An Introduction to Systematic Review. Changing Practice: evidence-based practice information sheets for health professionals. Suppl. 1, 1-6.
  • These steps outline the structured process of a systematic review. They are similar to the steps we saw in our schematic right at the beginning of the day and they are the steps we will explore over the duration of this program. This module and the following modules will lead the participants through all of the steps according to the question and the type of evidence they are searching for.
  • It is question design that has the most significant impact on the conduct of a systematic review, as the subsequent inclusion criteria are drawn from the question and provide the operational framework for the review. The core elements of a good question include: Population; Intervention, interest or exposure; Control or comparator; and Outcomes to be measured. There is a range of mnemonics available to guide the structuring of review questions, the most common for quantitative reviews being PICO. The PICO mnemonic begins with identification of the Population, the Intervention being investigated and its Comparator, and ends with a specific Outcome of interest to the review. Use of mnemonics can assist in clarifying the structure of review titles and questions. In JBI, and in Cochrane, there is a preference for the PICO mnemonic to be used to guide question development. It can also be used to guide concept mapping for the search strategy and tailored to the specific type of evidence being focused upon. Context may be geographic, cultural, socio-economic, or any other dimension of the human experience.
  • The key to making an impact on clinical practice is to provide the evidence for change and/or implementation. So, you want to find the ‘best’ evidence, that is, articles or systematic reviews with well conducted randomised controlled trials (RCTs) if, for example, you are conducting a review of effects. You are then able to show that independent research was conducted, free from bias, that clearly sets out the who, how, what and where, so that the results are transparent. Unlike primary scientific research, which relies on the reproduction of experiments and observations by multiple independent researchers to establish truth, meaning and knowledge, secondary research, that is the systematic review, does not. Unless it is a specific update of a review, a question should not need reviewing multiple times. So look first to see if any reviews exist; if they do, change your question!
  • The Joanna Briggs Institute, Cochrane and other review-based organisations require the development of a protocol before a review can be commenced. The benefits of submitting a protocol before beginning the review relate to the validity and merit of a research process that reduces risk of bias, promotes a systematic rather than ad hoc approach to the review process, facilitates communication with others and promotes consistency between review team members. These features further support the reliability and usefulness of reviews to health professionals. A review protocol provides the specific direction the review will follow. It explicitly describes the inclusion criteria and the methods of appraisal, extraction and synthesis. The background section describes the topic, why it is important, and explores key elements of the inclusion criteria. Protocols across all review methods (experimental, non-experimental, qualitative & textual, and economic) contain a series of standardised headings that are consistent with the scientific principles of a priori documentation.
  • The background should not assume prior knowledge on the part of readers, and may require several pages to clarify why the topic was chosen, what elements within the question are important and why they are important. If there are particular inclusion criteria, or outcome measures the background should justify the choices, and explain why they are important. The Joanna Briggs Institute places emphasis on a comprehensive, clear and meaningful background section to every systematic review. The background should avoid making statements about findings (for example: “Use of acupuncture is effective in increasing smoking cessation rates in hospitalised patients”). This is what the review will determine. If this type of statement is made it should be clear that it is not the reviewer’s conclusion but that of a third party, (phrased such as “Smith indicates that acupuncture is effective in increasing smoking cessation rates in hospitalised patients”).
  • Non-experimental study designs vary from those included in Cochrane EPOC reviews, such as controlled before-and-after studies, down to convenience-sample-based descriptive studies with no controls, and may be based on a sample of one. They can be useful approaches to investigate issues not amenable to randomisation, and in the absence of higher level evidence of effects, provide the basis for evidence-based practice by being the “best available evidence”. The type and range of study designs included should be listed, with reviewers giving consideration to how far down the hierarchy of study types the review will extend. The population characteristics remain important, with the additional factor that not all non-experimental studies have control groups. The lack of randomisation and/or control groups means that establishing the population characteristics can be difficult. Meta-analysis is possible on studies with control groups, but not studies where there is no comparator. Criteria related to the intervention are the same as for reviews of experimental studies. Comparator groups may be included in some study designs but not others; both can be included in the review, but it may be useful to report those with control groups separately, even for the same outcomes, as there is a difference in the strength of study designs where controls are present. For reviews of economic effectiveness, there are specific study designs of interest to specific economic questions. These include: Cost minimisation designs: intended to identify the least costly intervention where multiple interventions have demonstrated similar benefit. Cost effectiveness designs: where interventions achieve similar outcomes but have unknown or potentially different resource implications. Cost utility designs: seek to establish benefit as measured by quantity and quality of life (QALYs).
Cost benefit designs: these seek to identify a specific monetary ratio (gain/loss or cost/benefit) for an intervention. In economic reviews, the inclusion criteria remain similar to that of experimental designs, however in regard to outcomes, there should be a list of all the outcome measures to be considered. Note that outcome measures might be primary or secondary. The background should provide enough information to justify the outcomes included and potentially those that were not included. The outcomes need to be measurable and appropriate to the review objective. In terms of costing data, the outcome may be described in relation to the type of review. Therefore the outcome may be described in relation to cost minimisation, cost effectiveness, cost benefit or cost utility; these being the economic models incorporated in the analytical module JBIACTUARI.
  • The study designs used in qualitative (critical and interpretive) research are not hierarchical as quantitative study designs are; therefore, specific methodologies are not listed in preference to alternate methodologies. The argument for focusing on a particular methodology is not supported theoretically, or practically in terms of ensuring the review will capture the richness and depth of data necessary to properly inform readers; therefore the inclusion of multiple methodologies is encouraged by JBI. Participants: are described in relation to the phenomena of interest. Are there specific participant characteristics related to the phenomena that might be important in the understandings the review seeks to generate? The aim should be to identify relevant characteristics, without overly limiting them. Phenomena of interest: can be described in highly specific terms, or general terms. It is usual to describe the individual's experience, although “perception”, “awareness” or other words that you feel are congruent with qualitative methodology may be used, as no existing convention defines what terms should be used. Context: might be a particular type of setting, or socio-economic factor, or environment. As with other elements of the review criteria, the context should be developed in such a way that relevant papers will not be excluded for trivial reasons. Textual reviews: a broad range of papers can be described. It is useful to give consideration not just to the types of papers, but also where and how they might be accessed, including from government, professional or peak bodies etc. Textual reviews tend to examine an intervention or phenomena, and as such these are the focus of the inclusion criteria.
  • It is important to know what question you are asking… Discuss as per slide and below. The following are examples of the different types of questions asked within a systematic review: Clinical findings: how do we properly gather or soundly interpret these clinical findings? Differential diagnosis: how do we distinguish condition a from condition b? Diagnostic tests: how do we select and interpret relevant diagnostic tests for x? Therapy: is this intervention more effective, in terms of stated outcome/s, than another/others/doing nothing etc.? Prevention: how do we screen for and reduce risk of this disease? Prognosis: what is the likely outcome, course or natural progression of this condition? Cause/aetiology: what are the causes of this state of affairs? Cost-effectiveness: is intervention a more cost-effective than intervention b? Harm/risk: what are the side effects or risks of this intervention? (Does it do more harm than good?) Quality of life: what will be the quality of life for the patient(s) following (or without) this intervention?
  • It is important to structure your search strategy, usually breaking down the review question into its PICO (participants, intervention, comparator, and outcome) elements. A thorough search strategy usually involves three steps: an initial search of MEDLINE and CINAHL, followed by analysis of the text words contained in the title and abstract, and of the index terms used to describe the article; a second search using all identified keywords and index terms across all included databases; and a third step, a search for unpublished studies. When participants take a moment to look at the predictive text inserted by CReMS, they should notice that these 3 steps appear in the search description.
  • There are two features of a search strategy: Sensitivity is the ability to identify all the relevant articles Specificity or precision is the ability to exclude irrelevant articles There is an inverse relationship between sensitivity and specificity i.e. high sensitivity will tend to have low specificity, and this means that a large number of articles retrieved are not relevant to the review question. It is important that two reviewers independently screen the identified trials for potential inclusion in the review “in order to maximise ascertainment of relevant trials”.
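The sensitivity/precision trade-off described above can be made concrete with a toy calculation. The record identifiers below are invented for illustration; a real screening log would come from your database exports.

```python
# Toy illustration of search sensitivity vs precision.
# The record IDs below are invented for this example.

def search_metrics(retrieved, relevant):
    """Sensitivity: share of relevant records found.
    Precision: share of retrieved records that are relevant."""
    hits = retrieved & relevant
    return len(hits) / len(relevant), len(hits) / len(retrieved)

retrieved = {"rec1", "rec2", "rec3", "rec4", "rec5"}  # search results
relevant = {"rec2", "rec5", "rec9"}                   # truly relevant set
sensitivity, precision = search_metrics(retrieved, relevant)
# 2 of 3 relevant records were found, but only 2 of 5 retrieved are relevant
```

Broadening the search (more synonyms, truncation, fewer limits) raises sensitivity but pulls in more irrelevant records, lowering precision; this is the inverse relationship the slide describes.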
  • To be an effective searcher, the most important thing is knowing where to start. Firstly, you need to know and understand your question. What is it asking? What are the keywords in the question? You must understand what it is that you are seeking before you even make a move. It's like a chess game: right from the first move you make, it's important to the outcome. If you start badly, with the incorrect information or the wrong strategy, you'll end up with too many results; you'll find you have been working away solidly for some time, even weeks, have gotten completely lost, finding more and more left-of-field and irrelevant information. Think about the sort of information you are seeking: is it highly scientific and quantifiable, or is it more about a patient's feelings and more qualitative in its rationale? Perhaps you might need to find economic data regarding the merits of providing an intervention in a hospital. There are many sorts of data that you could be seeking, and it is important to get a very clear picture of what it is that you are looking for. Knowing your subject area steers you in the right direction before the search begins. You can then utilise relevant sources for your search.
  • Database of African Theses and Dissertations (DATAD). These are only some of the different data sources that can be utilised in the search. Some of these resources have been explored earlier. It is important to look at each search on its own and not just use the same databases as a matter of course. Most of you will need to look for peer-reviewed literature published in scientific journals. For a JBI review you will also have to make some attempt to search grey literature. Grey (or gray) literature – also referred to as the deep/hidden web – is electronic information or documentation that is usually not peer reviewed and can be biased because it might be sponsored by a pharmaceutical company or government agency. It is information usually not found on the main databases like MEDLINE, EMBASE etc. It can be in the form of reports or newsletters, or it can be from academic institutions, corporations, blogs, conference proceedings, census reports and non-independent research. It is also material that has not been indexed by a web crawler or robot. The purpose of conducting a systematic review is to provide the most comprehensive, proven data on a given subject. If grey literature is not searched, then important research could be missed by the main database searches. It is work that has not yet been through the peer review process, and thus a way of obtaining information before it is published in mainstream journals, and a means of accessing information that is not published in mainstream journals. Bias: grey literature can be seen as biased because it is often not subject to a rigorous review and editing process – often the publisher/author may have vested interests in the information. However, searching grey literature is a key means of avoiding publication bias (studies with positive effects are more likely to be published than those that found no effect). A great deal of information and knowledge is stored in higher degree research theses.
Many of these are now digitised and able to be searched beyond those at your own institution's library. Government websites should not be overlooked. They are often a great source of information, particularly when looking at more remote locations in the world, and even for finding links to definitive sources specific to the type of subject you are researching. Depending on your question, government data such as census data may be relevant for you to search, if your question is related to epidemiology for example. Circulars or reports from organisations and government agencies can also be a valuable source of information.
  • POPLINE (POPulation information onLINE), the world's largest database on reproductive health, provides more than 350,000 citations with abstracts to scientific articles, reports, books, and unpublished reports in the field of population, family planning, and related health issues. POPLINE has numerous special features including links to free, full-text documents, the ability to limit your search to peer-reviewed journal articles, and many abstracts in French and Spanish.
  • The Cochrane Central Register of Controlled Trials (CCTR, also known as CENTRAL) is a bibliography of controlled trials identified by contributors to the Cochrane Collaboration, as part of an international effort to hand-search the world's journals and create an unbiased source of data for systematic reviews. CENTRAL includes reports in conference proceedings and materials not listed in MEDLINE or other bibliographic databases. Regard is the online database of the ESRC (the Economic and Social Research Council). It holds details of all research funded by the ESRC since the mid-1980s, addressing economic and social concerns. It provides high quality research on issues of importance to business, the public sector and government. ClinicalTrials.gov is the register of current trials run by the NIH in the US. It provides the status of all trials, also including results in some cases, and is easy to search and use. Current Controlled Trials, launched in 1998, allows users to search, register and share information about worldwide randomised controlled trials. CCT searches across UK Clinical Trials, The Wellcome Trust, ClinicalTrials.gov, Medical Research Council UK, HTA, Leukaemia Research Fund and Action Medical Research.
  • Types of grey literature that can be useful sources of evidence: census or other statistical-related sources, conference proceedings and abstracts, databases of ongoing research, e-journals, preprints, netprints, e-archives, newsletters, informal communications (email, phone conversations, meetings, etc.), “wikis”, RSS feeds and podcasts, technical reports, specifications and standards, theses and dissertations, geological and geophysical surveys, translations, web logs (or blogs), non-commercial translations, bibliographies, official (government) documents not published commercially, library material that is in a ‘raw’ archived state and not yet catalogued, and medical clinical trials. A rather severe limitation of many grey literature websites is their lack of indexing or subject headings and the inability to refine your search even by date. This makes searching a very time consuming process, while you think of every permutation under which the subject you are looking for could be indexed. There has also been rapid growth in grey literature with the advent of the World Wide Web, so there is a vast amount of information to sort through. The web has given rise to a situation where electronic versions of documents exist but are not necessarily indexed by search engines, or where a full-text version of the reference may be almost impossible to find. Mednar, established in November 2008, is a federated (and therefore non-indexing) search engine designed for professional medical researchers to quickly access information from a multitude of credible sources. WorldWideScience.org is a global science gateway connecting you to national and international scientific databases and portals; it accelerates scientific discovery and progress by providing one-stop searching of global science sources. PsycEXTRA is a companion database to PsycINFO in psychology, behavioural science and health.
It includes references from newsletters, magazines, newspapers, technical and annual reports, government reports and consumer brochures OAIster is a catalogue of digital resources, providing access to these digital resources by "harvesting" their descriptive metadata (records) using OAI-PMH (the Open Archives Initiative Protocol for Metadata Harvesting). INIST in France (Institute for Scientific and Technical Information) has launched OpenSIGLE, which provides access to all the former SIGLE (System for Information on Grey Literature) records, new data added by EAGLE members and information from Greynet. Google Scholar provides a quick way to broadly search for scholarly literature; it provides links to peer-reviewed papers, theses, books, abstracts and articles, from academic publishers, professional societies, preprint repositories, universities and other scholarly organizations.
  • Index to Theses features thesis abstracts only. However, many full text theses accepted for doctoral degrees by British universities are available from the British Library (BL). The BL's British Theses Service provides copies, in print or microfiche, of the full text of more than 170,000 doctoral theses, mainly from the 1970s to the present day. The Networked Digital Library of Theses and Dissertations (NDLTD) is an international organization dedicated to promoting the adoption, creation, use, dissemination and preservation of electronic analogues to the traditional paper-based theses and dissertations. ProQuest Dissertations and Theses — Full Text is the world's most comprehensive collection of dissertations and theses: the official digital dissertations archive for the Library of Congress and the database of record for graduate research. PQDT — Full Text includes 2.4 million dissertation and theses citations from around the world, from 1861 to the present day, together with 1 million full text dissertations that are available for download in PDF format. EThOS makes UK theses (electronic and paper-based) available on demand to researchers. The British Library, in collaboration with many UK universities and other associations, aims to provide over 250,000 theses produced by the UK higher education system on an open access model to all researchers and others requiring information. Some theses are available for immediate download, while others can be requested from a participating institution, which then sends the thesis to the British Library for digitisation.
  • Government websites should not be overlooked. They are often a great source of information, particularly when looking at more remote locations in the world, and can even provide links to definitive sources specific to the type of subject you are researching. The National Technical Information Service (NTIS) provides access to the results of both US and non-US government-sponsored research and can provide the full text of the technical report for most of the results retrieved. NTIS is free. The NCCAM (National Center for Complementary and Alternative Medicine) is an excellent resource for credible, reliable, useful and objective information about these approaches to medical care. For patients who are interested in evidence that supports or debunks alternative medicine, this is the place to look. The National Health and Medical Research Council (NHMRC) is Australia's peak body for supporting health and medical research; for developing health advice for the Australian community, health professionals and governments; and for providing advice on ethical behaviour in health care and in the conduct of health and medical research. These are only three examples; keep your subject matter in mind so that you can harness relevant information, such as policy papers from applicable government agencies.
  • As you search your sources you will need to export the results of the searches from the different databases into reference management software, for example into EndNote libraries.
  • Study selection is an important process. Discuss as per slide. This slide introduces the need for strategic thinking when selecting studies for retrieval: we don't want to retrieve all of the results from our search, only the relevant papers. It also highlights that retrieval is one part of the review process and needs to be followed by critical appraisal of retrieved papers, and then inclusion in the review of papers with sufficient rigor. The use of 2 reviewers is also reiterated here to minimise bias.
  • Discuss as per slide. When selecting studies, aim to select only those that are specific to your review question. If your question relates to adults with mucositis, a paper detailing the effects of a mucositis therapy for children is not applicable. The results may be interesting, but not relevant. Aim to be both inclusive and selective; it is a difficult balance. Explain that the decision of whether to retrieve studies must be made with the review question in mind; the inclusion and exclusion criteria described in the review protocol provide this information. Emphasise the need for transparent and reproducible study selection, for the same reasons as the search. Ideally, 2 reviewers should independently review the search results and determine which articles should be retrieved. They should then confer and have any disagreements arbitrated by a third reviewer. This minimises bias when retrieving studies and contributes to the rigor of the review process. Stress the importance of appropriate and economical resource use. While it might be 'nice' to retrieve all papers, there are considerable resource implications in doing so. For example, papers may need to be photocopied or requested from other libraries at considerable expense. There is also the impact of the time required for these activities to occur. The whole review could be held up for several months while waiting for a paper; if the paper finally arrives and is not relevant to the review, this time has been wasted.
  • By looking critically into the primary research studies, and exploring potential sources of bias, we try to establish the 'validity' of the study being examined. When talking about the validity of the paper, we are referring to the minimisation of all forms of bias. Need to consider both internal and external validity…
  • Thinking back to yesterday, our results showed that we had 35 studies that met the inclusion criteria and made it through to appraisal. Why appraise? Simply, there is an overwhelming amount of scientific literature available; however, not all of this literature is of high quality. All of the papers you ultimately select for inclusion in your review must then go through a rigorous appraisal process by 2 members of your review team. The aim is to include only studies which are of a high standard and exclude those which are of poor quality. Inclusion of poor quality research may lead to biased or misleading estimates of effectiveness in your review findings and in the conclusions that are subsequently drawn from these results!
  • So how do we go about assessing the risk of bias, or establishing the validity, of primary research literature? First of all, there are numerous tools available, such as the JBI checklists, which are a series of questions that aim to focus your critical skills on the 'methods' used in the study. Different tools, and therefore questions, are available for different study designs, and these questions focus us on the particular aspects or criteria a good study of any particular type should fulfil in order to draw valid conclusions from the results derived. As mentioned in the last point on the slide, something like ethical approval, though essential for most experiments or observations, is unlikely to influence the outcome of a particular piece of research.
  • The table is now complete, and shows that, for attrition bias, follow up and ITT analysis on all participants enrolled in the study are the methods the appraiser of the research must look for in the original literature.
  • Here is the critical appraisal tool or checklist for the appraiser to use when assessing the quality of an RCT/pseudo-randomised trial. This is a credible JBI tool which is in the JBI MAStARI software package. It represents a list of questions that need to be considered when examining experimental studies. All questions are answered either Yes, No or Unclear. Yes indicates that there is a clear statement in the paper which directly answers the question. No is where the question has been directly answered in the negative. Unclear is where there is no clear statement, or there is ambiguous information. Explain what each of the questions means. The first question to be considered is: was assignment to treatment groups truly random? This refers to the methods the researchers have used in an attempt to ensure that each study participant had an equal chance of being in either of the groups. For example, if a statement is made such as 'the participants were randomised', this would be seen as an inadequate response, as it is not clear how they were randomised, and should be answered as Unclear. Whereas if a statement was made such as 'the participants were randomised to each study group using a blinded computer randomisation process', this can be seen as a clear statement, and can be answered Yes. Question 2 asks: did the participants know about the treatment outcomes, or which group they had been allocated to? This could be seen as performance bias on behalf of the participants, in that if they are aware of their potential treatment outcomes, it may sway their response. It may often be difficult to blind this, but in drug trials blinding is aided by the use of a placebo. Q3: Did the people who were allocating the participants to groups know which group each participant was being allocated to? This process should also be blinded in order to try to eliminate any selection bias. 
If the allocator is blinded to this process, they are unable to alter the allocation of participants to groups. Q4: Were the people who withdrew from the study for any reason mentioned, and included in the analysis? This is the question that attempts to eliminate the attrition bias that we spoke about earlier. This may also include the presence of an ITT analysis. Q5: Were those who assessed the outcomes blinded to treatment allocation? This deals with detection bias, in that those assessing the outcomes should, wherever possible, be unaware of the treatment group of the participant. Q6: The groups of the study should be similar enough at entry to say they are comparable. As we mentioned earlier under selection bias, the study should not have young fit people in group A and elderly unfit people in group B; they are very different. Q7: Did everyone in the study receive the same care or treatment, other than the named intervention that was the focus of the study? If a study is examining the effects of an anti-psychotic medication, but one group also receives individual therapy where the other group does not, then the groups are not treated the same, unless the other group were also to receive the therapy. Q8: Were the outcomes measured in the same way for all groups? Following on from the above example, the intervention group cannot be assessed with one outcome measure, measure X, while the control group is assessed with measure Y. Q9: Were the instruments used to measure outcomes reliable and tested tools? If we were looking at a participant's level of consciousness, then the Glasgow Coma Scale is a recognised and reliable tool. Q10: The final question asks whether an appropriate statistical analysis has been used. We will go over what dichotomous and continuous data are in a moment for those of you who are unsure.
  • Critical appraisal of identified papers is a key and required stage in the process of conducting a qualitative synthesis using meta-aggregation. Critical appraisal is sometimes described as quality assessment and basically involves using a tool (or tools) to evaluate the quality of a given study. Critical appraisal is not a necessary stage in some approaches to qualitative synthesis (meta-ethnography, for example); indeed, the practice remains contentious. Garratt and Hodkinson argue that it is both illogical and pointless to attempt to predetermine a definitive set of criteria against which all qualitative research should be judged. Nonetheless, in recent years the number of critical appraisal and quality assessment tools has increased rapidly. In this section on critical appraisal, we are going to examine some of the issues important to the debate surrounding quality. From the JBI perspective, appraisal should be a feature of systematic reviews, but should be sensitive to the particular paradigm of interest. From this perspective, critical thinking should be applied to studies before they are included in a review, and should focus on: congruity between epistemology and theoretical perspective; congruity between theoretical perspective and methodology; and congruity between methodology and methods. For example, there is congruity between constructionism (as epistemology), symbolic interactionism (as interpretivist theoretical perspective), ethnography (as methodology) and participant observation (as method) (example from Crotty, 1998:5). This is not the only view in relation to appraisal, and we will look at that in more detail in this series of slides.
  • Drawing from the literature and input from a panel of experts, a critical appraisal instrument (developed and extensively piloted and refined) is incorporated into the QARI software. Based on the standard approach promoted by the Cochrane Collaboration and adopted by the Joanna Briggs Institute, two reviewers are expected to independently critically appraise each study, and to then confer. Either reviewer's assessment can result in a study being either "included" in or "excluded" from the review. If a study is "excluded", a reason must be stated for the exclusion. During this process the primary reviewer reviews each study to determine a final assessment status, and resolves any conflict between the two assessments. Where both assessments of a study are "excluded", a final exclusion reason must be created; this defaults to the original exclusion reason given by the primary reviewer but can be modified before being saved. The critical appraisal criteria: 1. Congruity between the stated philosophical perspective and the research methodology. Does the report clearly state the philosophical or theoretical premises on which the study is based? Does the report clearly state the methodological approach adopted? Is there congruence between the two? For example: a report may state that the study adopted a critical perspective and that a participatory action research methodology was followed. 
There is congruence between a critical view (focusing on knowledge arising out of critique, action and reflection) and action research (an approach that focuses on working with groups to reflect on issues or practices; to consider how they could be different; acting to change; and identifying new knowledge arising out of the action taken); a report may state that the study adopted an interpretive perspective and survey methodology was followed. There is incongruence between an interpretive view (focusing on knowledge arising out of studying what phenomena mean to individuals or groups) and surveys (an approach that focuses on asking standard questions to a defined study population); a report may state that the study was qualitative or used qualitative methodology (such statements do not demonstrate rigour in design) or make no statement on philosophical orientation or methodology. 2. Congruity between the research methodology and the research question or objectives. Is the study methodology appropriate for addressing the research question? For example: a report may state that the research question was to seek understandings of the meaning of pain in a group of people with rheumatoid arthritis and that a phenomenological approach was taken. Here, there is congruity between this question and the methodology. A report may state that the research question was to establish the effects of counselling on the severity of pain experience and that an ethnographic approach was pursued. A question that tries to establish cause-and-effect cannot be addressed by using an ethnographic approach (as ethnography sets out to develop understandings of cultural practices) and thus, this would be incongruent. 3. Congruity between the research methodology and the methods used to collect data Are the data collection methods appropriate to the methodology? 
For example: a report may state that the study pursued a phenomenological approach and data was collected through phenomenological interviews. There is congruence between the methodology and data collection; a report may state that the study pursued a phenomenological approach and data was collected through a postal questionnaire. There is incongruence between the methodology and data collection here, as phenomenology seeks to elicit rich descriptions of the experience of a phenomenon that cannot be achieved through seeking written responses to standardised questions. 4. Congruity between the research methodology and the representation and analysis of data. Are the data analysed and represented in ways that are congruent with the stated methodological position? For example: a report may state that the study pursued a phenomenological approach to explore people's experience of grief by asking participants to describe their experiences of grief. If the text generated from asking these questions is searched to establish the meaning of grief to participants, and the meanings of all participants are included in the report findings, then this represents congruity; the same report may, however, focus only on those meanings that were common to all participants and discard single reported meanings. This would not be appropriate in phenomenological work. 5. Congruity between the research methodology and the interpretation of results. Are the results interpreted in ways that are appropriate to the methodology? For example: a report may state that the study pursued a phenomenological approach to explore people's experience of facial disfigurement and the results are used to inform practitioners about accommodating individual differences in care. 
There is congruence between the methodology and this approach to interpretation; a report may state that the study pursued a phenomenological approach to explore people's experience of facial disfigurement and the results are used to generate practice checklists for assessment. There is incongruence between the methodology and this approach to interpretation, as phenomenology seeks to understand the meaning of a phenomenon for the study participants and cannot be interpreted to suggest that this can be generalised to total populations to a degree where standardised assessments will have relevance across a population. 6. Locating the researcher culturally or theoretically. Are the beliefs and values, and their potential influence on the study, declared? The researcher plays a substantial role in the qualitative research process and it is important, in appraising evidence that is generated in this way, to know the researcher's cultural and theoretical orientation. A high quality report will include a statement that clarifies this. 7. Influence of the researcher on the research, and vice versa, is addressed. Is the potential for the researcher to influence the study, and for the research process itself to influence the researcher and her/his interpretations, acknowledged and addressed? For example: is the relationship between the researcher and the study participants addressed? Does the researcher critically examine her/his own role and potential influence during data collection? Is it reported how the researcher responded to events that arose during the study? 8. Representation of participants and their voices. Generally, reports should provide illustrations from the data to show the basis of their conclusions and to ensure that participants are represented in the report. 9. Ethical approval by an appropriate body. A statement on the ethical approval process followed should be in the report. 10. 
Relationship of conclusions to analysis, or interpretation of the data. This criterion concerns the relationship between the findings reported and the views or words of study participants. In appraising a paper, appraisers seek to satisfy themselves that the conclusions drawn by the researchers are based on the data collected; the data being the text generated through observation, interviews or other processes.
  • Data extraction forms should include detailed information on allocation methods, attrition, assessment and analysis. Information on interventions should include treatment modalities and the amount, duration, frequency and intensity of the intervention. Participant characteristics should include demographic information such as age, gender, location etc. Data extraction for outcome measures includes recording information such as the name of the instrument, the method used to obtain the data, and the validity and reliability of that method. It is also important to record the mode of measurement and the different scales used, along with their grading. Statistical data are required to calculate effect measures; therefore recorded data should include the number of people (usually N) assigned to the treatment and control comparison groups and all statistical tests used to test differences between the two groups. Data for continuous measures include means and standard deviations, and data for dichotomous measures include the number of cases that experienced an event (in both treatment and control groups) and the total number of cases in each group (N). Consider blinding (removal of author information, journal names etc.) during data extraction, and compute and report inter-rater reliability if possible.
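The dichotomous data described above (event counts and group totals) are exactly what is needed to compute an effect measure such as a relative risk. The sketch below illustrates the arithmetic only; the study numbers are invented, and real reviews would use dedicated software such as MAStARI or RevMan rather than hand-rolled code.

```python
import math

def risk_ratio(events_t, n_t, events_c, n_c):
    """Relative risk from a 2x2 table: (events_t/n_t) / (events_c/n_c).
    Illustrative sketch only."""
    rr = (events_t / n_t) / (events_c / n_c)
    # Approximate 95% CI computed on the log scale (standard formula)
    se_log_rr = math.sqrt(1/events_t - 1/n_t + 1/events_c - 1/n_c)
    log_rr = math.log(rr)
    lower = math.exp(log_rr - 1.96 * se_log_rr)
    upper = math.exp(log_rr + 1.96 * se_log_rr)
    return rr, lower, upper

# Hypothetical extracted data: 12/100 events in treatment, 24/100 in control
rr, lo, hi = risk_ratio(12, 100, 24, 100)
print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Note that the confidence interval is calculated on the log scale and then exponentiated, which is why pooled ratio measures in meta-analysis software are also handled as log ratios.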
  • The systematic review pools together the results of two or more individual studies. We need to go through each individual study and extract the data that is applicable to our review question. Some of the difficulties that can arise when extracting data from a paper are ensuring that the data is in line with our review question, and comparing it with the other papers that are to be used in the systematic review. Things to be considered include different populations, different outcome measures, different scales, interventions measured differently, and the reliability of the data extraction between the reviewers.
  • Note here that the reviewer is seeking to extract the FINDINGS of the study and the DATA that gives rise to the findings. The next slide clarifies this.
  • PRISMA/MOOSE statements
  • Prior to pooling the data in a meta-analysis, there are a number of things to be considered in the general analysis.
  • From the studies in our 'example' review, it may be that not all report on the same outcome; e.g. only 6 here will make it into our meta-analysis, whilst the rest will just be reported on in the text. Remember we need at least 2 studies to be able to do statistical combination. Talk here about meta-analysis. It is a systematic procedure for summarising and pooling the results from 2 or more research studies. We combine the results of similar individual studies to give a cumulative result, which may then differ from the results of the individual studies. If possible, this is then reported in a visual representation, as a meta-view (MAStARI-view). We will now delve further into meta-analysis.
  • It is a systematic procedure for summarising and pooling the results from 2 or more research studies. We combine the results of similar individual studies to give a cumulative result, which may then differ from the results of the individual studies. For RCTs, the purpose of meta-analysis is to improve the precision of the overall estimate of 'effect'. For observational designs, it is to increase the strength of the correlation or association between exposure and outcome, to investigate reasons for differences in risk estimates, or to discover patterns of risk amongst studies.
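The "pooling" described above is, under a fixed effects model, a weighted average in which each study is weighted by the inverse of its variance, so that more precise studies count for more. A minimal sketch of that arithmetic, using invented log risk ratios and standard errors (not data from any real review):

```python
import math

def fixed_effect_pool(estimates, std_errs):
    """Inverse-variance fixed-effect pooling of study estimates
    (e.g. log risk ratios). Illustrative sketch only."""
    weights = [1 / se**2 for se in std_errs]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Hypothetical log risk ratios and standard errors from three small trials
log_rrs = [-0.10, -0.25, -0.05]
ses = [0.20, 0.15, 0.25]
pooled, se = fixed_effect_pool(log_rrs, ses)
print(f"pooled RR = {math.exp(pooled):.2f}, "
      f"95% CI {math.exp(pooled - 1.96*se):.2f} "
      f"to {math.exp(pooled + 1.96*se):.2f}")
```

The pooled confidence interval is narrower than any single study's, which is precisely the gain in precision that pooling several small studies delivers.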
  • When there is variation in the effect size, that is, when each study reports a different treatment effect, such as the percentage reduction in the incidence of a condition. Often a single study does not carry enough weight to reach statistical significance because of its sample size. If we are able to combine multiple, good quality, similar small studies, this can increase the power and possibly provide a meaningful result. We should not use the results of one single study to provide a definitive conclusion on the effectiveness of an intervention, but rather pool the results of multiple similar studies to confirm the effectiveness.
  • We have seen from the previous slide when meta-analysis can be useful; now we need to see in what situations it can be used. Studies to be included in a meta-analysis should be similar to each other so that generalisation of results is valid. This is referred to as homogeneity, and it is calculated in MAStARI using a Chi-square test. The four main criteria that must be considered are: patient population (e.g. is it valid to combine the results of studies on different races of people, or different aged people?); outcome (e.g. is it valid to combine studies that have measured pain via a visual analogue scale with those that have used a pain diary?); intervention (e.g. are the interventions being given to the 'treatment' group in each study similar enough to allow meta-analysis?); and control (e.g. are the control groups in each study receiving treatment similar enough to warrant combination and meta-analysis?). The questions raised above can be very difficult to answer and often involve subjective decision making. Involvement of experienced systematic reviewers and/or researchers with a good understanding of the clinical question being investigated should help in situations where judgement is required. These situations should be clearly described and discussed in the systematic review report.
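The Chi-square test of homogeneity mentioned above is conventionally based on Cochran's Q, which measures how far each study's estimate sits from the pooled estimate, relative to its precision. The sketch below uses invented study data and illustrates the standard textbook formula, not MAStARI's internal implementation:

```python
import math

def cochran_q(estimates, std_errs):
    """Cochran's Q heterogeneity statistic (chi-square distributed with
    k-1 degrees of freedom under homogeneity). Illustrative sketch only."""
    weights = [1 / se**2 for se in std_errs]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    q = sum(w * (e - pooled)**2 for w, e in zip(weights, estimates))
    df = len(estimates) - 1
    # I^2 describes the proportion of total variation due to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, df, i2

# Hypothetical log risk ratios and standard errors from three trials
q, df, i2 = cochran_q([-0.10, -0.25, -0.05], [0.20, 0.15, 0.25])
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.0f}%")
```

A large Q relative to its degrees of freedom (a small Chi-square P value) signals heterogeneity, which is a statistical warning sign to revisit the four clinical criteria above before pooling.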
  • Example of how a subgroup analysis may look - split between trials and cohort studies on the basis of study design. Also a nice example of some of the issues with meta-analysis of observational studies and the differences in results that may be seen! This is described in more detail below if it is something you wish to explore further with your group of participants. Meta-analysis of the association between β carotene intake and cardiovascular mortality: results from observational studies show considerable benefit, whereas the findings from randomised controlled trials show an increase in the risk of death. Meta-analysis is by fixed effects model. Observational studies have consistently shown that people eating more fruits and vegetables, which are rich in β carotene, and people having higher serum β carotene concentrations, have lower rates of cardiovascular disease and cancer.27 β carotene has antioxidant properties and could thus plausibly be expected to prevent carcinogenesis and atherogenesis by reducing oxidative damage to DNA and lipoproteins.27 Contrary to many other associations found in observational studies, this hypothesis could be, and was, tested in experimental studies. The findings of four large trials have recently been published.28 29 30 31 The results were disappointing and, for the two trials conducted in men at high risk (smokers and workers exposed to asbestos),28 29 even disturbing. We performed a meta-analysis of the findings for cardiovascular mortality, comparing the results from the six observational studies recently reviewed by Jha et al27 with those from the four randomised trials. For the observational studies the results relate to a comparison between groups with high and low β carotene intake or serum β carotene concentration, whereas in the trials the participants randomised to β carotene supplements were compared with those randomised to placebo. 
With a fixed effects model, the meta-analysis of the cohort studies shows a significantly lower risk of cardiovascular death (relative risk reduction 31% (95% confidence interval 41% to 20%, P<0.0001)) (fig 2). The results from the randomised trials, however, show a moderate adverse effect of β carotene supplementation (relative increase in the risk of cardiovascular death 12% (4% to 22%, P=0.005)). Similarly discrepant results between epidemiological studies and trials were observed for the incidence of and mortality from cancer. This example illustrates that in meta-analyses of observational studies, the analyst may well be simply producing tight confidence intervals around spurious results.
  • The JBI-QARI approach to metasynthesis requires reviewers to identify and extract the findings from papers included in the review; to categorise these study findings; and to aggregate these categories to develop synthesised findings. It is important to note here that JBI-QARI does not involve a reconsideration and analysis of the data in papers reviewed as if it were primary data. JBI-QARI focuses only on the combination of study findings. These statements are referred to as synthesised findings - and they can be used as a basis for evidence based practice.
  • This slide summarises the JBI three-step approach.
  • This slide gives an overview of the different steps of JBI meta-aggregation and meta-ethnography. The key difference is that the JBI meta-aggregation approach does not go into the primary data but takes the conclusions presented by the researcher. By contrast, in a meta-ethnographic approach, the reviewer does engage with the primary data. This flow chart shows how: in step 1, reviewers identify all of the findings from the retrieved studies included in the review, and these papers may cross different methodologies. In step 2, the findings are categorised on the basis of similarity in meaning, so that the total number of findings from the different studies is aggregated into a certain number of categories. In step 3, the categories are synthesised to generate a certain number of synthesised findings. These steps are mirrored in the meta-ethnographic method of review. In the next series of slides we will work through 2 different examples.
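The three-step flow described above (findings, categories, synthesised findings) can be pictured as a simple data transformation. In practice the grouping is an interpretive judgement made by the reviewers, not an automated step; the findings, category names and synthesised finding below are invented purely to show the shape of the data as it moves through the steps:

```python
from collections import defaultdict

# Step 1: findings extracted from included studies, with the category
# each reviewer has judged it to belong to (all names are hypothetical)
findings = [
    ("Patients valued being listened to", "Feeling heard"),
    ("Participants wanted clinicians to acknowledge concerns", "Feeling heard"),
    ("Families needed clear explanations", "Information needs"),
    ("Patients wanted written material to take home", "Information needs"),
]

# Step 2: group findings into categories on the basis of similarity in meaning
categories = defaultdict(list)
for finding, category in findings:
    categories[category].append(finding)

# Step 3: aggregate the categories into a synthesised finding
synthesised = {"Person-centred communication": sorted(categories)}

print(len(findings), "findings ->", len(categories), "categories ->",
      len(synthesised), "synthesised finding")
```

The point of the sketch is only that meta-aggregation works on study findings, never on the underlying primary data, and that each step reduces many items to fewer, broader statements.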
  • For QARI users who are also using JBI-CReMS, these tables are exported to the final report in JBI-CReMS. PowerPoint does not illustrate exactly how it looks online, but you can see the analytic process that underpins moving through the aggregative method of synthesis.
  • What is known about the barriers to and facilitators of healthy eating among children? Do interventions promote healthy eating among children? What are children's own perspectives on healthy eating? (There was a great interest in the department about not only interventions but also children's own views and what we can learn from them.) What are the implications for intervention development?
  • To summarise the day's program in this final 15 minute session after they have had time to work on their protocols. Stress to participants to use the resources available to them whilst conducting their systematic reviews: the JBI reviewers' manual and the SUMARI user's guide. Also look at other systematic reviews already published. For example, if they are having trouble with their search strategy, pick up another published review and see how the authors turned their question, via their PICO, into a successful search strategy. As mentioned, JBI follows the process developed by the Cochrane Collaboration and incorporates the dissemination approach developed by the NHS Centre for Reviews and Dissemination at the University of York; both of these organisations also produce their own manuals for reviewers, which are excellent resources.
  • As a trainer for the JBI Comprehensive Systematic Review Training Program you and your participants may utilise the JBI SUMARI software suite. JBI SUMARI (System for the Unified Management of the Assessment and Review of Information) is a software package designed to assist health researchers and practitioners to conduct systematic reviews of evidence of feasibility, appropriateness, meaningfulness, effectiveness and economic evaluations of activities and interventions. The package consists of 5 modules: JBI CReMS - JBI Comprehensive Review Management System JBI QARI - JBI Qualitative Assessment and Review Instrument JBI NOTARI - JBI Narrative, Opinion and Text Assessment and Review Instrument JBI MAStARI - JBI Meta Analysis of Statistics Assessment and Review Instrument JBI ACTUARI - JBI Analysis of Cost, Technology and Utilisation Assessment and Review Instrument. Each of these is described in the following slides
  • SUMARI-CReMS: Describe CReMS here as the module that assists reviewers to manage and document a review, and how it incorporates fields to enter a review protocol and the results of the literature search. The next slide lists the components of CReMS.
  • CReMS: Explain the components of CReMS and describe how reviews start in CReMS then proceed to one or more of the 4 analytical modules before returning to CReMS for the generation of the final report. Components are as follows: Protocol Reviewers Search Strategy Search Results Bibliography of selected studies Bibliography of studies not retrieved for appraisal Executive Summary Report Generator CReMS is discussed in more detail later in this module. Each of the analytical modules are described in the following slides.
  • QARI: Briefly explain that QARI (Qualitative Assessment and Review Instrument) is for the pooling and synthesis of experiential qualitative research. This module is investigated further in Module 4 of the training program.
  • MAStARI: Briefly explain that MAStARI (Meta Analysis of Statistics and Review Instrument) is utilised for the pooling and synthesis of quantitative data. It is to be used in place of REVMAN for non Cochrane Reviews. MAStARI is able to pool the results of a broader range of study types than REVMAN, eg randomised controlled trials, pseudo randomised controlled trials, cohort, case control, and time series. This module of SUMARI is investigated further in Module 3.
  • NOTARI: Briefly explain that NOTARI (Narrative, Opinion and Text Assessment and Review Instrument) is utilised for the pooling and synthesis of narrative, opinion and textual data that is not research, such as information from government reports, consensus guidelines and expert opinion papers. This module of SUMARI is investigated further in Module 4.
  • ACTUARI: Briefly explain that ACTUARI (Analysis of Cost, Technology and Utilisation Assessment and Review Instrument) is utilised for the pooling and synthesis of economic data. This module of SUMARI is investigated further in Module 5.
  • SUMARI, CReMS and analytical modules. Use this slide as a discussion point. All of the software program names are acronyms, but an acronym is not enough - the word created must mean something. Here are the meanings behind each of the program names: SUMARI - the “summary” of the review information; QARI - to “mine” for information; MAStARI - for quantitative researchers in their attempt to “master” a clinical problem; NOTARI - a notary is an official scribe recognised as having an ability to authenticate documents and text; ACTUARI - an actuary is a statistician who calculates rates etc.
  • This is the first screen seen when loading CReMS. Users are then directed to the login screen. CReMS is web-based and can be accessed on the web at no cost. It can be used as a stand-alone program in place of RevMan or in conjunction with other SUMARI components.
  • This is an example of one review, displaying the title, year of commencement, and the primary reviewer. Details of the review can be edited later if required. To add the secondary reviewer, click in the text field alongside “secondary reviewer”.
  • The first thing that needs to be done at the protocol stage is to assign the analytical module(s) you will use. This will insert the generic text for your protocol, which is specific and essential in all reviews. At the top of the screen you are asked to select the type of evidence the topic/question lends itself to, and therefore the type of review being conducted. Click in the tick box alongside the type of evidence you will address for your review (or the analytical module you will use). This is an important step: the selection here will automatically append the appropriate JBI appraisal and extraction tools for each type of evidence to your exported protocol and your review document (these can be viewed in the Report View tab or via the menu items). By selecting only this tick box, and not the option for Set text, users can append the appraisal tool but are no longer constrained to using the JBI CReMS set text if they do not wish to. Simply un-tick the selection if the incorrect box is ticked. Once one or more of the Types of evidence is selected, the tick box with the label Set text will highlight.
  • Special case when selecting MAStARI and Quantitative evidence. Selecting the tick box alongside Quantitative MAStARI, unlike the other three selections available, will not automatically append all of the JBI appraisal and extraction tools associated with quantitative evidence. If users scroll down the page, just above the field where procedures for Assessment of Methodological Quality are entered, a new selection of tick boxes will have appeared prompting users to select which appraisal tools are appropriate for their review question. This selection can be made at any stage during protocol development, and more than one study design can be selected. For example, if a reviewer is conducting a review informing the effectiveness of an intervention or therapy, they may wish to include only experimental study designs and will therefore select only the first tick box, labelled Experimental (e.g. RCT, quasi-experimental). Only the appraisal tools and extraction instruments relevant to experimental studies will then appear appended to the exported protocol and review reports. Conversely, this selection does not need to be made here at all: if the option to insert JBI Set text is selected (see Section 1.3.2 below), a similar choice based on study design is offered, so the text inserted regarding the Assessment, Extraction and Synthesis is relevant to the Type of Study being targeted by the review.

Systematic Reviews: the process, quantitative, qualitative and mixed methods reviews. Edoardo Aromataris Presentation Transcript

  • 1. Systematic Reviews: the process, quantitative, qualitative and mixed methods reviews Edoardo Aromataris www.joannabriggs.edu.au
  • 2. Systematic reviews • Different strokes... – Where I’m coming from • Introduction to reviews and systematic reviews – Terminology • Systematic steps • Journey beyond where you may be involved... • Where you’ll find differences in quantitative and qualitative www.joannabriggs.edu.au
  • 3. Some answers!? • What is a systematic review? www.joannabriggs.edu.au
  • 4. Some answers!? • An attempt to identify, appraise and synthesise all the empirical evidence that meets pre-specified eligibility criteria to answer a given research question. • An attempt to sum up the best available research on a specific question. This is done by synthesising the results of several studies. www.joannabriggs.edu.au
  • 5. The Nature of Evidence • “It is surely a great criticism of our profession that we have not organized a critical summary, by specialty or subspecialty, updated periodically, of all relevant randomized controlled trials” Archie Cochrane, 1972 www.joannabriggs.edu.au
  • 6. Cochrane Collaboration • Established 1993 • Leaders in EBM, some 28,000 members • Focus on effectiveness of interventions and therapies in medical practice – RCTs focus – More pluralistic – qualitative methods • Organised into Cochrane Review Groups (CRG) • Cochrane Library www.joannabriggs.edu.au
  • 7. Campbell Collaboration (C2) • Focus on systematic reviews of effectiveness in education, crime and justice, and social welfare • Coordinating groups (6) • Campbell Systematic Reviews • Synthetic Reviews www.joannabriggs.edu.au
  • 8. The Nature of Evidence • “It is surely a great criticism of our profession that we have not organized a critical summary, by specialty or subspecialty, updated periodically, of all relevant randomized controlled trials” Archie Cochrane, 1972. • Shift toward pluralistic, inclusive definitions of what evidence is, and subsequently of what evidence based practice is... www.joannabriggs.edu.au
  • 9. Joanna Briggs Institute • Established 1996 • Feasibility, Appropriateness, Meaningfulness and Effectiveness • Qualitative methods • JBC – 70+ Centres conduct reviews • JBI Library of Systematic Reviews • Leaders in EBHC and evidence implementation www.joannabriggs.edu.au
  • 10. Introduction to Systematic Reviews • Variations in conduct • Different questions, different methods of synthesis – E.g. effectiveness, aetiology, harms, meta-ethnographic reviews, realist syntheses • Not all are validated – Term used loosely • Broad coverage, applicable to most reviews – Systematic! www.joannabriggs.edu.au
  • 11. Terminology • Literature review summarises, critiques, and synthesizes articles while not using systematic methodology • Systematic reviews adhere to explicit and rigorous methods to identify, critically appraise, and synthesise relevant primary/original studies. (Krainovich-Miller, 2006:87) www.joannabriggs.edu.au
  • 12. Systematic Review • Also called “Research Synthesis” • Is an attempt to integrate empirical data for the purpose of: – uncovering the international evidence and – producing statements about that evidence to guide decision making • Requires explicit and exhaustive reporting of the methods used in synthesis www.joannabriggs.edu.au
  • 13. Systematic Review• ‘…an attempt to minimize the element of arbitrariness…by making explicit the review process, so that, in principle, another reviewer with access to the same resources could undertake the review and reach broadly the same conclusions’ (Dixon-Wood et al. 1997:157 quoted by Seers, 2005:102) www.joannabriggs.edu.au
  • 14. Bias • ‘systematic deviation of results or inferences from truth’ • ‘an error in the conception and design of a study - or in the collection, analysis, interpretation, reporting, publication, or review of data - leading to results or conclusions that are systematically (as opposed to randomly) different from truth’ (Porta, 2008:18) www.joannabriggs.edu.au
  • 15. Meta-analysis • Quantitative evidence – Questions of Effectiveness, Feasibility and/or Appropriateness • Use of statistical methods to combine the results of various independent, similar studies • More precise calculation of one estimate of treatment effect than could be achieved by any of the individual, contributing studies • Only forms a part of the systematic review in which it appears www.joannabriggs.edu.au
  • 16. Meta-synthesis • Qualitative evidence – Questions of Meaningfulness, Feasibility and/or Appropriateness • Qualitative analysis of a number of independent qualitative research studies and text • Use of qualitative methods of combining the findings of individual studies • Only forms a part of the systematic review in which it appears www.joannabriggs.edu.au
  • 17. Qualitative Research Findings as Evidence for Practice • Qualitative evidence is of increasing importance in health services policy, planning and delivery. • It can play a significant role in: – understanding how individuals / communities perceive health, manage their own health and make decisions related to health service usage; – increasing our understandings of the culture of communities and of health units; – informing planners and providers; – evaluating components and activities of health services that cannot be measured in quantitative outcomes. www.joannabriggs.edu.au
  • 18. Systematic Review • The notion of and methods for establishing credibility in systematic reviews have been extensively developed and debated • In terms of quantitative evidence: – Emphasis on reducing bias and increasing validity – Degree of credibility established through critique and by applying levels of evidence • In terms of qualitative evidence: – Emphasis on rigour of research design and transferability – Degree of credibility established through critique and by applying levels of credibility www.joannabriggs.edu.au
  • 19. Comprehensive/Mixed method Review • Series of questions • Combines both quantitative and qualitative findings and addresses multiple forms of evidence • 2 or more types of evidence • Different approaches - particularly to integration of qualitative findings www.joannabriggs.edu.au
  • 20. Characteristics of a SR • Protocol driven process • Clearly stated set of objectives with pre-defined eligibility criteria for studies • Explicit, reproducible methodology • Systematic search that attempts to identify all studies that would meet the eligibility criteria • Assessment of the validity of the findings of the included studies • Systematic presentation, and synthesis, of the characteristics and findings of the included studies (Green et al., 2008:6) www.joannabriggs.edu.au
  • 21. Steps in a Systematic Review • Formulate review question • Define inclusion and exclusion criteria • Locate studies • Select studies • Assess study quality • Extract data • Analysis/summary and synthesis of relevant studies • Present results • Interpret results/determine the applicability of results (Egger & Davey Smith, 2001:25; Glasziou et al., 2004:2) www.joannabriggs.edu.au
  • 22. Question Development • Divide the question following the PICO/PICo model • Reviews of effects & economics: – Population – Intervention/Exposure – Comparator – Outcome • Reviews of qualitative & textual data: – Population – Phenomena of Interest – Context – Types of Study Design www.joannabriggs.edu.au
  • 23. Question? • Very important - guides entire review • Effectiveness? Meaningfulness? • Other components - How? Why? When? www.joannabriggs.edu.au
  • 24. Where to Start? • Conduct a search for previous Systematic Reviews on your topic: has this review already been conducted or is a review already in progress? – Yes = stop and refine your question – No = proceed with your protocol development www.joannabriggs.edu.au
  • 25. Protocol • Detailed review methods a priori • Guide process – Reasoned approach to the question asked • Decrease biased post hoc changes – Important to avoid “fishing” • Review proper may deviate – Needs clear explanation of how and why www.joannabriggs.edu.au
  • 26. Protocol Development • The usefulness and success of the review stems from the robustness of the protocol • The protocol: – Guides the specific direction of the review – Describes inclusion criteria – Identifies the appropriate search sources and resources – Methods of appraisal, extraction and synthesis www.joannabriggs.edu.au
  • 27. Background • Describe the issue under review, including: – target population, interventions, outcomes • Should concisely overview the main elements of the review, and issues within the topic of choice • Provide adequate detail to justify the conduct of the review and choice of inclusion criteria • Provide necessary definitions of important terms and concepts www.joannabriggs.edu.au
  • 28. Inclusion Criteria: Quantitative data • Study designs • Population characteristics • Intervention or exposure • Comparator - active or passive • Outcomes of interest www.joannabriggs.edu.au
  • 29. Inclusion Criteria: Qualitative & Text • Study designs • Population characteristics • Phenomena of Interest • Context www.joannabriggs.edu.au
  • 30. 
  • 31. Search Strategy • Know and understand the question! • What is the relevant PICO / PICo? • Qualitative, quantitative or economic? • Feasibility, Appropriateness, Meaningfulness or Effectiveness? • Other questions? Extra concepts? Fewer concepts? www.joannabriggs.edu.au
  • 32. Search Strategy Steps • Initial search – an initial search of MEDLINE and CINAHL, followed by analysis of the text words in the title and abstract • Second search – all identified key words and index terms across all databases • Third search – references of identified studies and unpublished studies www.joannabriggs.edu.au
  • 33. Search Strategy • Features of a search strategy – Sensitivity – ability to identify all the relevant studies – Specificity – ability to exclude irrelevant studies • Inverse relationship between sensitivity and specificity – High sensitivity will tend to have low specificity – A large number of articles retrieved may not be relevant to the review question www.joannabriggs.edu.au
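The sensitivity/specificity trade-off above can be quantified when a hand-checked “gold standard” set of relevant records exists for a database. The sketch below is illustrative only: the function name, record identifiers and counts are all hypothetical, not taken from any real search.

```python
# Sensitivity and specificity of a search strategy, assessed against a
# hand-checked "gold standard" set of records (all identifiers hypothetical).

def search_performance(retrieved, relevant, database_total):
    """Return (sensitivity, specificity) of a search.

    retrieved      -- set of record IDs the strategy returned
    relevant       -- set of record IDs known to be relevant (gold standard)
    database_total -- total number of records in the database searched
    """
    true_pos = len(retrieved & relevant)    # relevant and retrieved
    false_pos = len(retrieved - relevant)   # irrelevant but retrieved
    false_neg = len(relevant - retrieved)   # relevant but missed
    true_neg = database_total - true_pos - false_pos - false_neg

    sensitivity = true_pos / (true_pos + false_neg)  # share of relevant found
    specificity = true_neg / (true_neg + false_pos)  # share of irrelevant excluded
    return sensitivity, specificity

# Hypothetical example: a 1000-record database with 10 relevant studies;
# the strategy retrieves 8 of them plus 92 irrelevant records.
retrieved = set(range(100))            # records 0..99 retrieved
relevant = set(range(8)) | {200, 201}  # 8 retrieved + 2 missed
sens, spec = search_performance(retrieved, relevant, database_total=1000)
print(f"sensitivity={sens:.2f} specificity={spec:.2f}")
```

A broader (more sensitive) strategy enlarges `retrieved`, which raises sensitivity while typically lowering specificity - the inverse relationship the slide describes.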
  • 34. 
  • 35. Searching • Depends on methods (to minimise bias, or not?) – Exhaustive or saturation based? – Limited or comprehensive? • Impact on transparency and auditability • Where appropriate? – How much is enough? – Are appropriate sources being covered? www.joannabriggs.edu.au
  • 36. Types of Sources • What to use? – Scientific databases – Scientific journals – Organisations – Websites – Libraries – Experts www.joannabriggs.edu.au
  • 37. Types of Resources • Peer reviewed journal articles – Research – Opinion/commentary/letters • Grey literature – Google Scholar, Mednar • Theses/Dissertations – DATAD • Data – Statistics • Circulars • Reports www.joannabriggs.edu.au
  • 38. Databases and sources… • Medline/PubMed • ScienceDirect • Scopus/Embase • TRIP • POPLINE • Wiley InterScience • CINAHL • SPORTDiscus • ERIC • ISI Web of Science • PsycINFO • + many more…! Consult your Librarian! www.joannabriggs.edu.au
  • 39. Research and Trials Registers • Cochrane Library – CCTR • ClinicalTrials.gov • Controlled Clinical Trials • NHS Research Register • REGARD (database of the ESRC) www.joannabriggs.edu.au
  • 40. Grey literature • Mednar • PsycEXTRA • OAIster • SIGLE • Google Scholar www.joannabriggs.edu.au
  • 41. Theses & Dissertations • Index to Theses • Networked Digital Library of Theses and Dissertations (NDLTD) • ProQuest Dissertations and Theses Database • EThOS - Beta - Electronic Thesis Online System • TROVE (ADTP) • +… www.joannabriggs.edu.au
  • 42. Government Websites • Are there government agencies that may have evidence relevant to your review question? – NCCAM – CDC – NH&MRC – AHRQ – AIHW – +… www.joannabriggs.edu.au
  • 43. Search Logistics! [Flow diagram: Search Strategy → Search Sources → Export citations to bibliographic software] • Apply the search strategy to the databases • Export citations to bibliographic software – e.g. EndNote • Document the process
  • 44. 
  • 45. Study Selection • Study selection is an initial assessment that occurs following the review search • It addresses the question “should the paper be retrieved?” – A 2nd assessment occurs after retrieval and addresses the question “should the study be included in the review?” - this is CRITICAL APPRAISAL! • It is essential to use two assessors in both the selection and critical appraisal processes to limit the risk of error www.joannabriggs.edu.au
  • 46. Selection Process • Aims to select only those studies that address the review question and that match the inclusion criteria documented in your protocol • Scan titles and abstracts • Err on the side of caution - be inclusive! • If uncertain? - Retrieve and scan the full text • The selection should be: – Transparent – Reproducible www.joannabriggs.edu.au
  • 47. Critical Appraisal • Aim is to establish validity – To establish the risk of bias – Internal validity • Every review must set out to use an explicit appraisal process. Essentially, – a good understanding of research design is required in appraisers; and – the use of an agreed checklist or scale is usual. www.joannabriggs.edu.au
  • 48. Why Critically Appraise? • Combining results of poor quality research may lead to biased or misleading results and understandings [Flow diagram: 1004 references → 172 duplicates removed → 832 references scanned by title/abstract → 715 do not meet inclusion criteria → 117 studies retrieved → 82 do not meet inclusion criteria → 35 studies for critical appraisal]
  • 49. Assessing the Risk of Bias • Numerous tools are available for assessing the methodological quality of clinical trials and observational studies. • Generally requires the use of a specific tool for assessing risk of bias in each included study. • ‘High quality’ research methods can still leave a study at important risk of bias (e.g. when blinding is impossible) • Some markers of quality are unlikely to have direct implications for risk of bias (e.g. ethical approval, sample size calculation) www.joannabriggs.edu.au
  • 50. Types of bias and their quality assessment: – Selection bias (allocation of the population to treatment and control groups) – assessed via allocation concealment – Performance bias (exposed vs not exposed to the intervention) – assessed via blinding – Detection bias – assessed via blinding – Attrition bias (population follow up) – assessed via intention-to-treat (ITT) analysis and follow up
  • 51. JBI-MAStARI Instrument
  • 52. Critical Appraisal • Exclude studies on the basis of appraisal? • Explore impact through sensitivity analyses/subgroups www.joannabriggs.edu.au
  • 53. Critical Appraisal & Qualitative Review • General acceptance of the need for quality • Ongoing debate around the role of appraisal • Particular focus of debate on the role of scales and sum scores in appraisal • In practice, appraisal instruments for qualitative research tend to focus on establishing the degree to which the evidence applies to practice (transferability) rather than internal validity (credibility) www.joannabriggs.edu.au
  • 54. 
  • 55. Considerations in Data Extraction • Source - citation and contact details • Eligibility - confirm eligibility for review • Methods - study design, concerns about bias • Participants - total number, setting, diagnostic criteria • Interventions - total number of intervention groups • Outcomes - outcomes and time points • Results - for each outcome of interest: sample size, etc • Miscellaneous - funding source, etc www.joannabriggs.edu.au
  • 56. Data Extraction • The data extracted for a systematic review are the results from individual studies specifically related to the review question. • Difficulties related to the extraction of data include: – different populations used – different outcome measures – different scales or measures used – interventions administered differently – reliability of data extraction (i.e. between reviewers) www.joannabriggs.edu.au
  • 57. Extracting Findings for Qualitative reviews • The units of extraction in this process are specific findings and illustrations from the text that demonstrate the origins of the findings • A finding is defined as: a conclusion reached by the researcher(s), often presented as themes or metaphors www.joannabriggs.edu.au
  • 58. 
  • 59. Analyses • Depend on the data collected/question • May be simply narrative • Meta analysis? • Qualitative syntheses? • Mixed methods www.joannabriggs.edu.au
  • 60. Analyses and reporting – What interventions/activities have been evaluated – The effectiveness /appropriateness /feasibility /meaningfulness of the intervention/activity – Contradictory findings and conflicts – Limitations of study methods – Issues related to study quality – The use of inappropriate definitions – Specific populations excluded from studies – Future research needs www.joannabriggs.edu.au
  • 61. Meta Analysis & Meta Synthesis [Flow diagram: 1004 references → 172 duplicates removed → 832 references scanned by title/abstract → 715 do not meet inclusion criteria → 117 studies retrieved → 82 do not meet inclusion criteria → 35 studies for critical appraisal → 9 studies excluded → 26 studies included in review: 6 in meta analysis, 20 in narrative]
  • 62. Statistical methods for meta-analysis • Quantitative method of combining the results of independent studies • Aim is to increase the precision of the overall estimate • Investigate reasons for differences in risk estimates between studies • Discover patterns of risk amongst studies www.joannabriggs.edu.au
  • 63. When is meta-analysis useful? • If studies report different treatment effects. • If studies are too small (insufficient power) to detect meaningful effects. • Single studies rarely, if ever, provide definitive conclusions regarding the effectiveness of an intervention. www.joannabriggs.edu.au
  • 64. When meta-analysis can be used • Meta analysis can be used if studies: – have the same population – use the same intervention administered in the same way – measure the same outcomes • Homogeneity – studies are sufficiently similar to estimate an average effect. www.joannabriggs.edu.au
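When the homogeneity condition above holds, the pooled estimate is commonly an inverse-variance weighted average of the study effects. A minimal fixed-effect sketch follows; the effect sizes and standard errors are invented for illustration, not drawn from any study cited in this presentation.

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance fixed-effect meta-analysis.

    effects    -- per-study effect estimates (e.g. log odds ratios)
    std_errors -- per-study standard errors
    Returns (pooled effect, pooled standard error).
    """
    weights = [1.0 / se**2 for se in std_errors]  # precision weights
    total_w = sum(weights)
    pooled = sum(w * e for w, e in zip(weights, effects)) / total_w
    pooled_se = math.sqrt(1.0 / total_w)          # SE of the pooled estimate
    return pooled, pooled_se

# Hypothetical log odds ratios from three small trials.
effects = [-0.40, -0.25, -0.55]
ses = [0.20, 0.15, 0.30]
pooled, se = fixed_effect_pool(effects, ses)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se   # 95% confidence interval
print(f"pooled={pooled:.3f} (95% CI {lo:.3f} to {hi:.3f})")
```

Note that the pooled standard error is smaller than any single study's standard error - the “more precise calculation of one estimate” described on slide 15.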
  • 65. Example meta-analysis. Taken from Egger, M. et al. BMJ 1998;316:140-144
  • 66. Meta-Synthesis • Analysis and synthesis of qualitative studies • Based on processed data • The aim of meta-synthesis is to: assemble findings; categorise these findings into groups on the basis of similarity in meaning; and aggregate these to generate a set of statements that adequately represent that aggregation. www.joannabriggs.edu.au
  • 67. Data Synthesis Involves: • Step 1: Identifying findings • Step 2: Grouping findings into categories • Step 3: Grouping categories into synthesised findings www.joannabriggs.edu.au
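The three steps above amount to two levels of grouping. The sketch below shows only the shape of that data; every finding, category and synthesised statement in it is invented for illustration, not taken from any real review.

```python
# Meta-aggregation as two levels of grouping (all content hypothetical).

# Step 1: findings extracted from individual studies, each with its source.
findings = [
    ("fear of falling limits activity", "Study A"),
    ("patients avoid stairs at home", "Study B"),
    ("nurses lack time for education", "Study C"),
    ("education is rushed at discharge", "Study A"),
]

# Step 2: group findings into categories on the basis of similarity in meaning.
categories = {
    "Restricted activity": [findings[0], findings[1]],
    "Inadequate education": [findings[2], findings[3]],
}

# Step 3: group categories into a synthesised finding - a statement that
# adequately represents the aggregation.
synthesised = {
    "Support after discharge must address both confidence and information":
        list(categories),
}

for statement, cats in synthesised.items():
    print(statement)
    for cat in cats:
        print(f"  category: {cat} ({len(categories[cat])} findings)")
```

Keeping the study label with each finding preserves the audit trail from synthesised statement back to the original studies, which is what makes the aggregation transparent.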
  • 68. Meta-ethnography vs QARI meta-aggregation: – First order analysis ⇐ Step 1: Findings – Second order interpretation ⇐ Step 2: Categories – Third order interpretation ⇐ Step 3: Synthesised findings
  • 69. Comprehensive/Mixed method Review FOCUS technical brief #25, Angela Harden, 2010
  • 70. Interpretation • Quality, strength and applicability of evidence for outcomes • Applicability for stakeholders/policy makers • Qualitative evidence can aid in interpretation • How, why, barriers, facilitators, experience • Conclusions/recommendations should be drawn based on available evidence • May be none available! www.joannabriggs.edu.au
  • 71. The JBI Software System for the Unified Management of the Assessment and Review of Information
  • 72. Consists of the following components: the Comprehensive Review and Management System keeps all review information together and generates a report that may contain elements from all other SUMARI modules
  • 73. • Protocol• Reviewers• Search strategy• Bibliography - retrieved studies• Bibliography - non selected studies
  • 74. Qualitative Assessment and Review Instrument
  • 75. Meta-Analysis of Statistics Assessment and Review Instrument (used in place of Review Manager for non-Cochrane reviews)
  • 76. Narrative, Opinion and Text Assessment and Review Instrument
  • 77. Analysis of Cost, Technology and Utilisation Assessment and Review Instrument