  • Evidence for the selective reporting of analyses and discrepancies in clinical trials: a systematic review of cohort studies of clinical trials.

    3 July 2018

    BACKGROUND: Most publications about selective reporting in clinical trials have focussed on outcomes. However, selective reporting of analyses for a given outcome may also affect the validity of findings. If analyses are selected on the basis of the results, reporting bias may occur. The aims of this study were to review and summarise the evidence from empirical cohort studies that assessed discrepant or selective reporting of analyses in randomised controlled trials (RCTs). METHODS AND FINDINGS: A systematic review was conducted and included cohort studies that assessed any aspect of the reporting of analyses of RCTs by comparing different trial documents, e.g., protocol compared to trial report, or different sections within a trial publication. The Cochrane Methodology Register, Medline (Ovid), PsycInfo (Ovid), and PubMed were searched on 5 February 2014. Two authors independently selected studies, performed data extraction, and assessed the methodological quality of the eligible studies. Twenty-two studies (containing 3,140 RCTs) published between 2000 and 2013 were included. Twenty-two studies reported on discrepancies between information given in different sources. Discrepancies were found in statistical analyses (eight studies), composite outcomes (one study), the handling of missing data (three studies), unadjusted versus adjusted analyses (three studies), handling of continuous data (three studies), and subgroup analyses (12 studies). Discrepancy rates varied, ranging from 7% (3/42) to 88% (7/8) in statistical analyses, 46% (36/79) to 82% (23/28) in adjusted versus unadjusted analyses, and 61% (11/18) to 100% (25/25) in subgroup analyses. This review is limited in that none of the included studies investigated the evidence for bias resulting from selective reporting of analyses. It was not possible to combine studies to provide overall summary estimates, and so the results of studies are discussed narratively. CONCLUSIONS: Discrepancies in analyses between publications and other study documentation were common, but reasons for these discrepancies were not discussed in the trial reports. To ensure transparency, protocols and statistical analysis plans need to be published, and investigators should adhere to these or explain discrepancies.

  • Optimism bias leads to inconclusive results-an empirical study.

    3 July 2018

    OBJECTIVE: Optimism bias refers to unwarranted belief in the efficacy of new therapies. We assessed the impact of optimism bias on the proportion of trials that did not successfully answer their research question and explored whether poor accrual or optimism bias is responsible for inconclusive results. STUDY DESIGN: Systematic review. SETTING: Retrospective analysis of a consecutive series of phase III randomized controlled trials (RCTs) performed under the aegis of National Cancer Institute Cooperative groups. RESULTS: Three hundred fifty-nine trials (374 comparisons) enrolling 150,232 patients were analyzed. Seventy percent (262 of 374) of the trials generated conclusive results according to the statistical criteria. Investigators made definitive statements related to the treatment preference in 73% (273 of 374) of studies. Investigators' judgments and statistical inferences were concordant in 75% (279 of 374) of trials. Investigators consistently overestimated their expected treatment effects, but to a significantly larger extent for inconclusive trials. The median ratio of expected and observed hazard ratio or odds ratio was 1.34 (range: 0.19-15.40) in conclusive trials compared with 1.86 (range: 1.09-12.00) in inconclusive studies (P<0.0001). Only 17% of the trials had treatment effects that matched original researchers' expectations. CONCLUSION: Formal statistical inference is sufficient to answer the research question in 75% of RCTs. The answers to the other 25% depend mostly on subjective judgments, which at times are in conflict with statistical inference. Optimism bias significantly contributes to inconclusive results.

  • Indirect comparisons of treatments based on systematic reviews of randomised controlled trials.

    3 July 2018

    BACKGROUND: Randomised controlled trials are the most effective way to differentiate between the effects of competing interventions. However, head-to-head studies are unlikely to have been conducted for all competing interventions. AIM: Evaluation of different methodologies used to indirectly compare interventions based on meta-analyses of randomised controlled trials. METHODS: Systematic review of the Cochrane Database of Systematic Reviews, Cochrane Methodology Register, EMBASE and MEDLINE for reports including meta-analyses that contained an indirect comparison. Searching was completed in July 2007. No restriction was placed on language or year of publication. RESULTS: Sixty-two identified papers contained indirect comparisons of treatments. Five different methodologies were employed: comparing point estimates (1/62); comparing 95% confidence intervals (26/62); performing statistical tests on summary estimates (8/62); indirect comparison using a single common comparator (20/62); and mixed treatment comparison (MTC) (7/62). The only methodologies that provide an estimate of the difference between the interventions under consideration, together with a measure of the uncertainty around that estimate, are indirect comparison using a single common comparator and MTC. The MTC might have advantages over other approaches because it is not reliant on a single common comparator and can incorporate the results of direct and indirect comparisons into the analysis. Indirect comparisons require an underlying assumption of consistency of evidence. Utilising any of the methodologies when this assumption is not true can produce misleading results. CONCLUSIONS: Use of either indirect comparison using a common comparator or MTC provides estimates for use in decision making, with the preferred methodology being dependent on the available data.
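
    The single-common-comparator approach mentioned above is usually the adjusted indirect comparison of Bucher and colleagues: the indirect log effect for A versus C is the difference of the two direct log effects against the shared comparator B, and their variances add (which is why indirect estimates are less precise than direct ones). A minimal sketch, with purely hypothetical effect estimates:

```python
import math

def bucher_indirect(log_or_ab, se_ab, log_or_cb, se_cb):
    """Adjusted indirect comparison of A vs C through common comparator B.

    The indirect log odds ratio is the difference of the two direct
    estimates against B; the variances add, so the standard error of
    the indirect estimate is larger than either direct one."""
    log_or_ac = log_or_ab - log_or_cb
    se_ac = math.sqrt(se_ab ** 2 + se_cb ** 2)
    lo, hi = log_or_ac - 1.96 * se_ac, log_or_ac + 1.96 * se_ac
    return math.exp(log_or_ac), (math.exp(lo), math.exp(hi))

# Hypothetical direct estimates: A vs B log OR -0.40 (SE 0.15),
# C vs B log OR -0.10 (SE 0.20).
or_ac, ci = bucher_indirect(-0.40, 0.15, -0.10, 0.20)
```

    The consistency assumption noted in the abstract corresponds here to assuming the A-vs-B and C-vs-B trials are similar enough that the subtraction is meaningful.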

  • Reporting of adverse events in systematic reviews can be improved: survey results.

    3 July 2018

    OBJECTIVE: To assess how information about adverse events is included in systematic reviews. STUDY DESIGN AND SETTING: We included all new Cochrane reviews published in the Cochrane Database of Systematic Reviews (CDSR) and all new reviews (2003-2004) in the Database of Abstracts of Reviews of Effects (DARE) in Issue 1 2005 of The Cochrane Library. RESULTS: More than half of Cochrane (44/78) and DARE (46/79) reviews assessed drug interventions. The rest assessed surgery (Cochrane [12]; DARE [10]) or psychosocial, educational, or physiotherapy interventions (22; 23). Seventy-six percent (59/78) of Cochrane reviews mentioned adverse events as an outcome compared with 48% (38/79) of DARE reviews. Most reviews mentioning adverse events were of drug interventions (Cochrane [41/59]; DARE reviews [29/38]). Considering reviews that mentioned adverse events, 95% (56/59) of Cochrane reviews included only randomized trials and 73% (43/59) included an analysis of adverse events. For 10 Cochrane reviews, adverse events had not been reported by the included trials. In contrast, 58% (22/38) of DARE reviews mentioning adverse events included only randomized trials; the rest included both randomized and nonrandomized studies. CONCLUSIONS: Most Cochrane reviews of drug interventions considered adverse events. This was not the case for DARE reviews and for Cochrane reviews of nondrug interventions. This could be improved.

  • Evaluating maternity care: a core set of outcome measures.

    3 July 2018

    BACKGROUND: Comparing the relative effectiveness of interventions on specific outcomes across trials can be problematic due to differences in the choice and definitions of outcome measures used by researchers. We sought to identify a minimum set of outcome measures for evaluating models of maternity care from the perspective of key stakeholders. METHODS: A 3-round, electronic Delphi survey design was used. The setting was multinational, comprising a range of key stakeholders. Participants consisted of a single heterogeneous panel of maternity service users, midwives, obstetricians, pediatricians/neonatologists, family physicians/general practitioners, policy-makers, service practitioners, and researchers of maternity care. Members of the panel self-assessed their expertise in evaluating models of maternity care. RESULTS: A total of 320 people from 28 countries expressed willingness to take part in this survey. Round 1 was completed by 218 (68.1%) participants, of whom 173 (79.4%) completed round 2 and 152 (87.9%) of these completed round 3. Fifty outcomes were identified, each with both a mean value greater than the overall group mean for all outcomes combined (mean = 4.18) and a rating of 4 or more on a 5-point Likert-type scale for importance of inclusion in a minimum data set of outcome measures from at least 70 percent of respondents. Three outcomes were collapsed into a single outcome, so the final minimum set includes 48 outcomes. CONCLUSIONS: Given the inconsistencies in the choice of outcome measures routinely collected and reported in randomized evaluations of maternity care, it is hoped that use of the data set will increase the potential for national and international comparisons of models for maternity care. Although not intended to be prescriptive or to inhibit the collection of other outcomes, we hope that the core set will make it easier to assess the care of women and their babies during pregnancy and childbirth.
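
    The two-part inclusion rule used in this Delphi survey (a mean rating above the grand mean across all outcomes, plus a rating of 4 or more from at least 70% of respondents) amounts to a simple filter over the panel's ratings. A sketch with entirely hypothetical outcome names and ratings:

```python
def core_outcome_set(scores, min_rating=4, min_proportion=0.70):
    """Select outcomes meeting both criteria: mean rating above the grand
    mean across all outcomes, and `min_rating` or more from at least
    `min_proportion` of respondents."""
    all_ratings = [r for ratings in scores.values() for r in ratings]
    grand_mean = sum(all_ratings) / len(all_ratings)
    selected = []
    for outcome, ratings in scores.items():
        mean = sum(ratings) / len(ratings)
        high_share = sum(1 for r in ratings if r >= min_rating) / len(ratings)
        if mean > grand_mean and high_share >= min_proportion:
            selected.append(outcome)
    return selected

# Hypothetical ratings on a 5-point scale from four respondents.
ratings = {
    "maternal mortality": [5, 5, 4, 4],
    "length of stay": [2, 3, 2, 3],
}
core = core_outcome_set(ratings)  # ["maternal mortality"]
```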

  • The sensitivity and precision of search terms in Phases I, II and III of the Cochrane Highly Sensitive Search Strategy for identifying reports of randomized trials in MEDLINE in a specific area of health care: HIV/AIDS prevention and treatment interventions.

    3 July 2018

    OBJECTIVES: To detect term(s) in the Cochrane Highly Sensitive Search Strategy (HSSS) that retain high sensitivity but improve precision in retrieving reports of trials in the PubMed version of MEDLINE. METHODS: Individual terms from the PubMed version of the HSSS were added, term by term, to an African HIV/AIDS strategy to identify reports of trials in MEDLINE using PubMed. The titles and abstracts of the records retrieved were read by two handsearchers and checked by a clinical epidemiologist. The sensitivity and precision of each term in the three phases of the HSSS were calculated. RESULTS: Of 7,719 records retrieved, 285 were identified as reports of trials [204 randomized (RCTs); 81 possibly randomized or quasi-randomized (CCTs)]. Phase III had the highest sensitivity (92%). Overall, precision was very low (3.7%). One term, 'random*[tw]', retrieved all RCTs found by our search and improved precision to 29%. The least sensitive terms, yielding no records, were '(doubl* AND mask*)[tw]' and terms containing 'trebl*' or 'tripl*', except for '(tripl* AND blind*)[tw]'. The highest precision per term was for 'Double-blind Method [MeSH]' (76%). CONCLUSIONS: To retrieve all RCTs and CCTs found by our search, seven terms are needed, but precision remains low (4.3%). Developments in the methods of search strategy design may help to improve precision while retaining high levels of sensitivity by identifying term(s) which occur frequently in relevant records and are the most efficient at discriminating between different study designs.
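
    Sensitivity and precision here follow their standard information-retrieval definitions: sensitivity is the share of all relevant reports that a search retrieves, and precision is the share of retrieved records that are relevant. A quick sketch, using the abstract's own overall figures (285 trial reports among 7,719 retrieved records):

```python
def sensitivity(relevant_retrieved, total_relevant):
    # Proportion of all relevant reports that the search found.
    return relevant_retrieved / total_relevant

def precision(relevant_retrieved, total_retrieved):
    # Proportion of retrieved records that are relevant.
    return relevant_retrieved / total_retrieved

# Overall precision reported in the abstract: 285 of the 7,719
# retrieved records were reports of trials, i.e. about 3.7%.
overall_precision = precision(285, 7719)  # ≈ 0.037
```

    The trade-off the abstract describes follows directly: adding restrictive terms raises precision (fewer irrelevant records retrieved) but risks lowering sensitivity (relevant reports missed).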

  • Individual patient data meta-analysis in cancer.

    2 July 2018

    As in many areas of health care, treatments for cancer may differ only moderately in their effects on major end points, such as death. But such differences are worth knowing about, particularly in common diseases in which they could represent a substantial benefit to public health. Large-scale randomized evidence allows moderate differences to be investigated reliably, and one way to achieve this is by meta-analyses of updated and centrally collected individual patient data from all relevant trials. This paper illustrates why this form of research can often be important in cancer. It also offers the first list of such projects, as a source of information on current and past research in this area.