• Preferred reporting items for systematic review and meta-analysis protocols (PRISMA-P) 2015: elaboration and explanation.

    16 March 2018

Protocols of systematic reviews and meta-analyses allow for planning and documentation of review methods, act as a guard against arbitrary decision making during review conduct, enable readers to assess for the presence of selective reporting against completed reviews, and, when made publicly available, reduce duplication of efforts and potentially prompt collaboration. Evidence documenting the existence of selective reporting and excessive duplication of reviews on the same or similar topics is accumulating, and many calls have been made in support of the documentation and public availability of review protocols. Several efforts have emerged in recent years to rectify these problems, including development of an international register for prospective reviews (PROSPERO) and launch of the first open access journal dedicated to the exclusive publication of systematic review products, including protocols (BioMed Central's Systematic Reviews). Furthering these efforts and building on the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-analyses) guidelines, an international group of experts has created a guideline to improve the transparency, accuracy, completeness, and frequency of documented systematic review and meta-analysis protocols: PRISMA-P (for protocols) 2015. The PRISMA-P checklist contains 17 items considered to be essential and minimum components of a systematic review or meta-analysis protocol. This PRISMA-P 2015 Explanation and Elaboration paper provides readers with a full understanding of and evidence about the necessity of each item, as well as a model example from an existing published protocol. This paper should be read together with the PRISMA-P 2015 statement. Systematic review authors and assessors are strongly encouraged to make use of PRISMA-P when drafting and appraising review protocols.

  • Choosing Important Health Outcomes for Comparative Effectiveness Research: An Updated Review and User Survey.

    15 March 2018

BACKGROUND: A core outcome set (COS) represents an agreed minimum set of outcomes that should be measured and reported in all trials of a specific condition. The COMET (Core Outcome Measures in Effectiveness Trials) initiative aims to collate and stimulate the development and application of COS, by including data on relevant studies within a publicly available internet-based resource. In recent years, there has been an interest in increasing the development of COS. Therefore, this study aimed to provide an update of a previous review and examine the quality of development of COS. A further aim was to understand the reasons why individuals are searching the COMET database. METHODS: A multi-faceted search strategy was followed in order to identify studies that sought to determine which outcomes/domains to measure in clinical trials of a specific condition. Additionally, a pop-up survey was added to the COMET website to ascertain why people were searching the COMET database. RESULTS: Thirty-two reports relating to 29 studies were eligible for inclusion in the review. There has been an improvement in the description of the scope of a COS and an increase in the proportion of studies using literature/systematic reviews and the Delphi technique. Clinical experts continue to be the most common group involved in developing COS; however, patient and public involvement has increased. The pop-up survey revealed the most common reasons for visiting the COMET website to be thinking about developing a COS and planning a clinical trial. CONCLUSIONS: This update demonstrates that recent studies appear to have adopted a more structured approach towards COS development and that public representation has increased. However, there remains a need for developers to adequately describe details about the scope of COS, and for greater public engagement. The COMET database appears to be a useful resource for both COS developers and users of COS.

  • Accumulating research: a systematic account of how cumulative meta-analyses would have provided knowledge, improved health, reduced harm and saved resources.

    15 March 2018

    BACKGROUND: "Cumulative meta-analysis" describes a statistical procedure to calculate, retrospectively, summary estimates from the results of similar trials every time the results of a further trial in the series became available. In the early 1990s, comparisons of cumulative meta-analyses of treatments for myocardial infarction with advice promulgated through medical textbooks showed that research had continued long after robust estimates of treatment effects had accumulated, and that medical textbooks had overlooked strong, existing evidence from trials. Cumulative meta-analyses have subsequently been used to assess what could have been known had new studies been informed by systematic reviews of relevant existing evidence, and how waste might have been reduced. METHODS AND FINDINGS: We used a systematic approach to identify and summarise the findings of cumulative meta-analyses of studies of the effects of clinical interventions, published from 1992 to 2012. Searches were done of PubMed, MEDLINE, EMBASE, the Cochrane Methodology Register and Science Citation Index. A total of 50 eligible reports were identified, including more than 1,500 cumulative meta-analyses. A variety of themes are illustrated with specific examples. The studies showed that initially positive results became null or negative in meta-analyses as more trials were done; that early null or negative results were overturned; that stable results (beneficial, harmful and neutral) would have been seen had a meta-analysis been done before the new trial; and that additional trials had been much too small to resolve the remaining uncertainties. CONCLUSIONS: This large, unique collection of cumulative meta-analyses highlights how a review of the existing evidence might have helped researchers, practitioners, patients and funders make more informed decisions and choices about new trials over decades of research. This would have led to earlier uptake of effective interventions in practice, less exposure of trial participants to less effective treatments, and reduced waste resulting from unjustified research.
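To make the procedure described in the BACKGROUND concrete, the following is a minimal sketch of a fixed-effect (inverse-variance) cumulative meta-analysis in Python. The trial data are entirely hypothetical, and the pooling method shown is the standard inverse-variance approach; the reviewed studies may have used other models (e.g. random-effects), so this illustrates the general idea of re-pooling after each new trial rather than any specific study's method.

```python
import math

def cumulative_meta_analysis(effects, variances):
    """Fixed-effect (inverse-variance) cumulative meta-analysis.

    After each successive trial, pool all trials observed so far and
    return the running summary estimate with a 95% confidence interval.
    """
    results = []
    total_weight = 0.0
    total_weighted_effect = 0.0
    for y, v in zip(effects, variances):
        w = 1.0 / v                      # inverse-variance weight
        total_weight += w
        total_weighted_effect += w * y
        pooled = total_weighted_effect / total_weight
        se = math.sqrt(1.0 / total_weight)
        results.append((pooled, pooled - 1.96 * se, pooled + 1.96 * se))
    return results

# Hypothetical log odds ratios and variances from four successive trials
effects = [-0.40, -0.10, -0.35, -0.25]
variances = [0.20, 0.10, 0.15, 0.05]
for i, (est, low, high) in enumerate(
        cumulative_meta_analysis(effects, variances), start=1):
    print(f"After trial {i}: {est:.2f} (95% CI {low:.2f} to {high:.2f})")
```

Run after each new trial, the summary estimate and its narrowing confidence interval show when the evidence had already stabilised, which is exactly the question the reviewed cumulative meta-analyses asked retrospectively.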