Systematic review
Systematic reviews provide readers with an overview of an area and an understanding of the quality and quantity of primary research in that area. The areas they review tend to be very focussed, such as “Vitamin D for the management of multiple sclerosis” or “Strategies for training or supporting teachers to integrate technology into the classroom” [74, 75]. Systematic reviews are a way of finding and reporting as much primary research as possible in a structured, replicable way [76-78].
The process for systematic reviews is broadly as follows:
- Define the specific topic that the systematic review will cover and develop a review question that sets out which studies will and will not be included (for example, which geographical areas or particular populations the studies should cover).
- Define a strategy that covers where researchers will look for primary research (the databases, journals and archives that will be searched) and how they will search for it (the exact search terms that will be used to include and exclude studies based on the review question). This will be published as part of the systematic review so that the process is transparent and replicable (a minimal sketch of such a replicable search and screen follows this list).
- Review the studies that have been identified during the search and exclude any that are not relevant to the defined topic after reading key parts of the study (such as the methodology).
- Extract data from the studies, such as the number of participants involved, the intervention tested, the methodology used, and the outcomes reported.
- Assess the robustness, validity and relevance of the studies to the initial review question (see part two of this briefing).
- Synthesise and present the data from the studies, indicating where there is consensus and discrepancy in the conclusions of different studies, and how robust and valid the studies are.
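As an illustration of what a transparent, replicable search and screening step can look like in practice, the sketch below applies a fixed set of inclusion and exclusion terms to a list of retrieved records. It is a minimal sketch only: the records, terms and field names are hypothetical and are not drawn from any published review protocol.

```python
# A minimal, hypothetical sketch of a replicable screening step in a
# systematic review: because the inclusion and exclusion terms are fixed
# and published, another researcher could rerun the same screen and
# reach the same included/excluded split.

# Hypothetical records retrieved from database searches (title + abstract).
records = [
    {"id": 1,
     "title": "Vitamin D supplementation in adults with multiple sclerosis",
     "abstract": "A randomised controlled trial of vitamin D supplementation."},
    {"id": 2,
     "title": "Teacher attitudes to classroom technology",
     "abstract": "A qualitative interview study of secondary school teachers."},
]

# Terms stated up front in the (hypothetical) review protocol.
INCLUDE_TERMS = ["vitamin d", "multiple sclerosis"]
EXCLUDE_TERMS = ["animal model"]

def screen(record: dict) -> bool:
    """Include a record only if every inclusion term appears in its title
    or abstract and no exclusion term does (case-insensitive)."""
    text = (record["title"] + " " + record["abstract"]).lower()
    if any(term in text for term in EXCLUDE_TERMS):
        return False
    return all(term in text for term in INCLUDE_TERMS)

included = [r["id"] for r in records if screen(r)]
excluded = [r["id"] for r in records if not screen(r)]
print(f"Included: {included}, excluded: {excluded}")  # Included: [1], excluded: [2]
```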
There are several guides that lay out the standards for performing systematic reviews in different topic areas, such as those produced by the EPPI-Centre, Cochrane or the Campbell Collaboration [79-81]. The aim of the standards is to make the review as transparent, thorough and replicable as possible. The standards usually expect that:
- The topic of the systematic review is made public during the research process, along with an explanation of which studies are eligible for inclusion in the systematic review
- An explanation of how the search for primary research was conducted is published so that it can be replicated by other researchers
- The review states the validity of the primary research used and assesses the risk of bias in that research [82-84].
The exact process and standards used for systematic reviews vary, but all systematic reviews should state the standards that they have used. Many systematic reviews search academic literature alongside other types of literature. For example, many look for grey literature (research that is not published in an academic journal or academic book) and/or unpublished studies to limit the influence of publication bias [85]. Publication bias is the tendency for researchers and academic journal editors to favour publishing research where the findings have shown a positive result (meaning that they have shown an intervention works or have confirmed what the researchers initially predicted).
What can systematic reviews tell us?
Well-run systematic reviews offer robust evidence on a topic because they pool the data of as many studies in that area as possible [86]. Well-run systematic reviews are replicable and keep bias to a minimum [87, 88]. Systematic reviews in areas where there is limited primary research, or where much evidence has had to be excluded, may not be able to provide recommendations or answer the initial review questions. However, this may reveal research gaps that represent useful knowledge for other researchers.
Key concept 5: Can you rank research methods?
When an individual reads a research study, they are likely to want to know how reliable the findings are. Standards of evidence are guidelines for individuals to use when making decisions on the quality, strength or applicability of research studies. There are over 20 different standards of evidence used in UK research and policy [89]. The majority of standards of evidence use a hierarchy of evidence based on study designs to rank research on the quality and strength of its evidence from high to low [90]. However, the exact ranking given to study designs and the different study designs that are included in the hierarchy vary [91]. The arguments given for the rankings of studies in hierarchies focus on:
- Whether a study can prove causation
- How robust a study is
- How internally and externally valid a study is likely to be (see part two of this briefing)
Studies that can show causation (such as RCTs) tend to rank higher in these standards than those that show correlation (as found in observational studies). Designs that use measures to reduce the influence of extraneous variables are ranked higher in the hierarchies. Where systematic reviews and meta-analyses are included in hierarchies, they are ranked top because they bring together results from a number of studies, increasing their ability to show causation. Some hierarchies also rank studies with multiple replications higher [92].
Hierarchies of evidence based on study design have two main limitations. First, these hierarchies can oversimplify complex issues of study quality. For example, RCTs are often ranked highly in these hierarchies and observational studies (including natural experiments and case-control studies) tend to be ranked lower. However, an observational study with a large representative sample may more definitively answer a research question than an RCT with a small sample [93]. By ranking research on study design alone, some of the complexities of what makes a study valid can be lost. Second, hierarchies are designed to give a measure of how robust a particular study design is. However, a hierarchy does not necessarily indicate whether interventions work or give details about the circumstances in which an intervention works.
Standards of evidence can be useful as a guide to the strength and quality of research evidence and, in some instances, as an indication of which interventions work in which circumstances. However, trying to rank or rate research evidence against broad criteria means that some of the complexity around the robustness and validity of studies is overlooked. Therefore, standards of evidence should be considered an indication, rather than a definitive measure, of a study’s strength, quality and applicability.
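To make the idea of a design-based hierarchy concrete, the sketch below encodes one commonly cited ordering as a ranked list. The ordering shown is illustrative only; as noted above, real standards of evidence differ in which study designs they include and how they rank them.

```python
# An illustrative (not definitive) hierarchy of evidence: study designs
# ranked from strongest to weakest. Real standards of evidence differ in
# which designs they include and how they order them.
HIERARCHY = [
    "systematic review / meta-analysis",
    "randomised controlled trial",
    "cohort study",
    "case-control study",
    "case series / case report",
    "expert opinion",
]

def rank(design: str) -> int:
    """Return the position of a study design in this illustrative
    hierarchy (0 = strongest). Raises ValueError if the design is absent,
    reflecting that hierarchies only cover the designs they list."""
    return HIERARCHY.index(design)

# A design-based rank ignores sample size, validity and relevance, which
# is why the text above warns against treating it as a definitive measure.
print(rank("randomised controlled trial"))  # 1
print(rank("case-control study"))           # 3
```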
Rapid evidence assessment
Rapid evidence assessments are similar to systematic reviews, in that they have a structured methodology for how they find and present primary research. However, they are run over a shorter timescale and therefore are not as exhaustive as a systematic review [94]. The process for a rapid evidence assessment is less well-defined than that for a systematic review as there are different ways by which the review process is shortened (for example, by limiting searches to academic literature rather than including grey or unpublished literature) [95, 96].
What can rapid evidence assessments tell us?
Well-run rapid evidence assessments provide a quicker overview of evidence than systematic reviews [97]. This can be particularly beneficial if there is a sudden policy issue or crisis that requires fast evidence synthesis [98]. Well-run and well-reported rapid evidence assessments are replicable [99].
However, rapid evidence assessments are not as comprehensive as systematic reviews and therefore cannot report as much of the primary research. This may result in bias in the primary research that is presented [100, 101].
Evidence review
Evidence reviews are another method for reviewing primary research. They are used by charities, NGOs, government bodies and What Works Centres to present an overview of primary research in an area [102-106]. The methodology for carrying out evidence reviews varies between agencies and there is no agreed definition of what constitutes an evidence review. Therefore, unless the exact process undertaken for the evidence review is stated, it is not possible to assess how biased the presentation of primary research may be [107]. Evidence reviews may have a clear methodology and be as structured as a rapid evidence assessment, or they may be a less transparent and less structured search of primary research [108]. However, when an evidence review is upfront about its methods and limitations, it is possible for a reader to assess how much they can rely upon its conclusions.
What can evidence reviews tell us?
Evidence reviews can be a very fast way to review research in an area, and well-run evidence reviews with transparent procedures can be as replicable as rapid evidence assessments. However, evidence reviews are generally not as comprehensive or transparent as systematic reviews or rapid evidence assessments, so it may not be possible to know how much of the primary research in an area is presented and how much bias there is in the studies chosen to be presented.
Key concept 6: Where does your evidence come from?
Research evidence is made available by publishing findings publicly. However, the availability of research evidence is affected by a number of biases. Publication bias is the tendency for researchers and academic journal editors to favour publishing research where the findings have shown a positive result, meaning that they have shown that an intervention works or have confirmed what the researchers initially predicted [109]. In addition, studies with positive results are also likely to produce more articles and be published in more widely-cited academic journals than those studies without positive results [110]. This means that there is often far more knowledge about what might work and less about what has been found not to work [111]. For example, if a new intervention is found to work in one study, but then found to not work in four studies, a person is far more likely to see the study where it worked as opposed to the ones where it did not.
Secondary research (studies that review the results of other research) can be affected by publication bias as it relies on using results from published studies. As not all studies publish their results, the research available to be reviewed is at risk of being biased and incomplete.
One means of combatting publication bias is pre-registration. Pre-registration involves researchers registering details of their study (such as the research design) before starting their research. A register of studies being carried out then allows people reviewing research to know when a study has been done but no positive results were published from it, creating greater completeness and transparency [112]. It may also encourage researchers and academic journal editors to publish results from more studies without positive results, especially as some journals will approve future publications from studies before finding out the results [113].
However, even where studies without positive results are published and available, other biases may still factor in. For example, citation bias is the tendency for other researchers to cite work with positive results more than studies without positive results [114, 115]. Researchers are also more likely to cite work published in English than in other languages (forming a part of language bias), which may lead to bias in secondary research if studies not written in English are excluded [116-118].
Research can be made available in many different formats. A common format involves publishing research findings in academic journals. Academic journal articles can be open access (meaning that the full text is available for free online) or subscription only (meaning that only individuals who pay for access can read the full text). Whether an article is open access affects who is able to read it, as subscriptions to journals are predominantly held by academic institutions (as opposed to members of the public, third sector workers or civil servants). This availability creates the open access advantage, where articles that are open access are cited more frequently than those that are subscription only [119].
Academic journals are not the only way research evidence is made available. Grey literature comprises any research that is not published in an academic journal or academic book [120]. It includes reports produced by industry, government departments, regulators and charities.
As grey literature does not go through the same review process as found in academic publishing, it has been argued that the studies may not be as high-quality as academic journal articles. However, research suggests that there is little difference between the quality of research in published academic journal articles and grey literature [121, 122]. The majority of systematic reviews and meta-analyses do not include grey literature, focussing solely on academic journal articles [123]. However, including grey literature alongside academic journal articles may help combat publication bias because, compared to academic journals, grey literature tends to comprise more studies that did not find positive results [124-128].
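The following sketch illustrates, with invented numbers, why missing studies without positive results matter: pooling only the one “positive” published study overstates the effect compared with pooling all five hypothetical studies (the kind of gap that searching grey and unpublished literature aims to close).

```python
# Hypothetical illustration of publication bias: five studies measure the
# same intervention effect, but only the study with a positive result is
# published. Averaging only published results overstates the effect.
all_studies = [0.40, 0.02, -0.05, 0.01, -0.03]   # invented effect sizes
published = [e for e in all_studies if e > 0.2]  # only the "positive" result

naive_pooled = sum(published) / len(published)
true_pooled = sum(all_studies) / len(all_studies)

print(f"Pooled effect from published studies only: {naive_pooled:.2f}")  # 0.40
print(f"Pooled effect from all studies:            {true_pooled:.2f}")  # 0.07
```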
Meta-analysis
Meta-analyses use data collated from primary research and can be performed as part of a systematic review: they extract data from the studies found during the review and reanalyse the results as part of a larger dataset [129, 130].
What can meta-analyses tell us?
Well-run meta-analyses offer robust evidence on a topic because they pool the data of as many studies in that area as possible [131]. They allow researchers to weight the importance of different factors (such as where or how a study was carried out) to examine what makes an intervention more or less effective. Meta-analyses can also indicate how likely it is that positive results found in the studies are due to publication bias. Well-run meta-analyses are replicable and keep bias to a minimum [132, 133]. However, as meta-analyses involve the collation of data from primary research, they are reliant on the quality and quantity of data available.
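As a minimal sketch of one common pooling technique, the example below computes a fixed-effect, inverse-variance weighted average of effect estimates. The effect sizes and standard errors are invented for illustration, and a real meta-analysis would typically also assess between-study heterogeneity (for example with a random-effects model).

```python
import math

# A minimal sketch of fixed-effect, inverse-variance meta-analysis.
# Each (effect, standard_error) pair is an invented result from one
# hypothetical primary study.
studies = [
    (0.30, 0.10),  # (effect size, standard error)
    (0.25, 0.15),
    (0.10, 0.08),
]

# More precise studies (smaller standard errors) get larger weights.
weights = [1 / se**2 for _, se in studies]
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```

This weighting is what lets a meta-analysis give more influence to larger, more precise studies rather than treating every study equally, which is one reason well-run meta-analyses are considered robust.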
References
- Jagannath, V. et al (2018). Vitamin D for the management of multiple sclerosis. Cochrane Database of Systematic Reviews.
- Gamage, S. & Tanwar, T. (2017). Strategies for training or supporting teachers to integrate technology into the classroom. International Development Research Centre, Canada & Department for International Development.
- Higgins, J. et al (2019). Cochrane Handbook for Systematic Reviews of Interventions. Cochrane.
- EPPI-Centre. What is a systematic review?
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- EPPI-Centre. About EPPI-Centre.
- Higgins, J. et al (2019). Cochrane Handbook for Systematic Reviews of Interventions. Cochrane.
- Campbell Collaboration (2019). Campbell systematic reviews: Policies and guidelines.
- Higgins, J. et al (2019). Cochrane Handbook for Systematic Reviews of Interventions. Cochrane.
- Campbell Collaboration (2019). Campbell systematic reviews: Policies and guidelines.
- Gough D. et al (2017). An introduction to systematic reviews: 2nd Edition. London: Sage.
- Ziai, H. et al (2017). Search for unpublished data by systematic reviewers: An audit. BMJ Open.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Government Social Research Service. What is a rapid evidence assessment?
- Puttick, R. (2018). Mapping the standards of evidence used in UK social policy. Alliance for Useful Evidence.
- Puttick, R. (2018). Mapping the standards of evidence used in UK social policy. Alliance for Useful Evidence.
- Puttick, R. (2018). Mapping the standards of evidence used in UK social policy. Alliance for Useful Evidence.
- Puttick, R. (2018). Mapping the standards of evidence used in UK social policy. Alliance for Useful Evidence.
- Nutley, S. et al (2013). What counts as good evidence? Alliance for Useful Evidence.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Thomas, J. et al (2013). Rapid evidence assessments of research to inform social policy: Taking stock and moving forward. Evidence & Policy: A Journal of Research Debate and Practice.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Tricco, A. et al (2017). Rapid reviews to strengthen health policy and systems: a practical guide. World Health Organization and Alliance for Health Policy and Systems Research.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Public Health England (2015). Change4Life: Evidence review on physical activity in children. UK Government.
- Age UK (2018). Digital inclusion evidence review 2018.
- What Works Centre for Wellbeing (2019). A guide to our evidence review.
- What Works Centre for Local Economic Growth. Evidence reviews.
- Education Endowment Foundation. Evidence reviews.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Easterbrook, P. et al (1991). Publication bias in clinical research. The Lancet.
- Easterbrook, P. et al (1991). Publication bias in clinical research. The Lancet.
- Dickersin, K. & Chalmers, I. (2011). Recognizing, investigating and dealing with incomplete and biased reporting of clinical research: From Francis Bacon to the WHO. Journal of the Royal Society of Medicine.
- Simes R. (1986). Publication bias: The case for an international registry of clinical trials. Journal of Clinical Oncology.
- Gonzales, J. & Cunningham, C. (2015). The promise of pre-registration in psychological research: Encouraging a priori research and decreasing publication bias. Psychological Science Agenda.
- Jannot, A-S. et al (2013). Citation bias favoring statistically significant studies was present in medical research. Journal of Clinical Epidemiology.
- Misemer, B. et al (2016). Citation bias favoring positive clinical trials of thrombolytics for acute ischemic stroke: a cross-sectional analysis. Trials.
- Pham, B. et al (2005). Language of publication restrictions in systematic reviews gave different results depending on whether the intervention was conventional or complementary. Journal of Clinical Epidemiology.
- Walpole, S. (2019). Including papers in languages other than English in systematic reviews: important, feasible, yet often omitted. Journal of Clinical Epidemiology.
- Morrison, A. et al (2012). The effect of English-language restriction on systematic review-based meta-analyses: A systematic review of empirical studies. International Journal of Technology Assessment in Health Care.
- Gargouri, Y. et al (2010). Self-selected or mandated, open access increases citation impact for higher quality research. PLOS One.
- Higgins, J. et al (2019). Cochrane Handbook for Systematic Reviews of Interventions: Version 6. Cochrane.
- Hopewell, S. et al (2007). Grey literature in meta‐analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews.
- Conn, V. et al (2003). Grey literature in meta-analyses. Nursing Research.
- McAuley, L. et al (2000). Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? The Lancet.
- Hopewell, S. et al (2007). Grey literature in meta‐analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews.
- Hopewell, S. et al (2007). Grey literature in meta‐analyses of randomized trials of health care interventions. Cochrane Database of Systematic Reviews.
- McAuley, L. et al (2000). Does the inclusion of grey literature influence estimates of intervention effectiveness reported in meta-analyses? The Lancet.
- Benzies, K. et al (2006). State-of-the-evidence reviews: advantages and challenges of including grey literature. Worldviews on Evidence-Based Nursing.
- Adams, J. et al (2016). Searching and synthesising ‘grey literature’ and ‘grey information’ in public health: Critical reflections on three case studies. Systematic Reviews.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Breckon, J. & Roberts, I. (2016). Using research evidence: A practice guide. Alliance for Useful Evidence.
- Dicks, L. et al (2017). Knowledge synthesis for environmental decisions: An evaluation of existing methods, and guidance for their selection, use and development – A report from the EKLIPSE project. EKLIPSE.
- Government Social Research Service. What is a rapid evidence assessment?