Assessment of transparency and selective reporting of interventional trials studying colorectal cancer

Abstract

Background

Colorectal cancer (CRC) is currently one of the most frequently diagnosed cancers. Our aim was to evaluate transparency and selective reporting in interventional trials studying CRC.

Methods

First, we assessed indicators of transparency: completeness of reporting, according to the CONSORT statement, and data sharing. We evaluated a selection of reporting items for a sample of randomized controlled trials (RCTs) studying CRC with full-text articles published between 2018-03-22 and 2021-03-22. Selected items were drawn from the previously published CONSORT-based peer-review tool (COBPeer tool). Then, we evaluated selective reporting through retrospective registration and primary outcome(s) switching between registration and publication. Finally, we determined whether primary outcome(s) switching favored significant outcomes.

Results

We evaluated 101 RCTs with full-text articles published between 2018-03-22 and 2021-03-22. Five trials (5%) reported all selected CONSORT items completely. Seventy-four (73%), 53 (52%) and 13 (13%) trials completely reported the primary outcome(s), the allocation concealment process and harms, respectively. Twenty-five (25%) trials were willing to share data. In our sample, 49 (49%) trials were retrospectively registered and 23 (23%) trials had primary outcome(s) switching. The influence of primary outcome(s) switching could be evaluated in 16 (16/23 = 70%) trials, with 6 (6/16 = 38%) trials showing a discrepancy that favored statistically significant results.

Conclusions

Our results highlight a lack of transparency as well as frequent selective reporting in interventional trials studying CRC.

Background

Cancer is currently an important public health issue worldwide. Colorectal cancer (CRC) is the third most commonly diagnosed cancer in males and the second in females. In 2020, more than 1.9 million new cases were diagnosed according to the World Health Organization Global Cancer Observatory (GCO) database (https://gco.iarc.fr/). In recent years, an increasing number of interventional trials has been conducted in oncology [1, 2], with the aims of improving screening, finding new treatments and, overall, improving the prognosis and quality of life of patients with cancer.

Previous studies highlighted substantial waste in the production and reporting of research in various fields [3, 4]. This waste can occur at different steps of the research process: inadequate research questions, inappropriate study design, conduct or analysis, inaccessible research results, and incomplete or unusable reports of study documentation and results [4, 5].

Lack of transparency and selective reporting of trials are common issues that hinder the interpretation and reproducibility of results [6,7,8]. To help with trial reporting, various guidelines have been developed for each type of research. For instance, the Consolidated Standards of Reporting Trials (CONSORT) statement issued reporting guidelines for randomized controlled trials (RCTs) in 2010. Furthermore, access to study protocols and documentation can help detect selective reporting such as primary outcome(s) switching [5].

Methods

The aim of our work was to assess transparency through completeness of reporting and data sharing intention, as well as selective reporting, in RCTs studying CRC management.

Search strategy and eligibility criteria

This work is a follow-up of a previous study aiming to assess the availability of results in CRC trials. Our search strategy on the ClinicalTrials.gov registry has been previously described (submitted article). Details of our search strategy and eligibility criteria are available in Additional file 1.

We evaluated a sample of completed RCTs studying CRC management in adults, registered on ClinicalTrials.gov, with results published in a full-text article in English between 2018-03-22 and 2021-03-22.

Data extraction

Data extraction was done by one independent reviewer; difficult cases were discussed with a senior reviewer. We developed a standardized data extraction form (Additional files 2 and 3). Extraction was based on all available trial documentation (full-text articles, protocols, statistical analysis plans and the ClinicalTrials.gov registry when appropriate).

Outcome measures

Transparency indicators

Access to the trial documentation

We systematically checked whether we had access to the protocol and statistical analysis plan, and where (registry/article). We also looked at the content and determined whether we had access to the full protocol or to an abbreviated or redacted protocol and whether the protocol was available in English.

Data sharing

We recorded whether investigators had made a data sharing statement (availability statement), and if, where and how they planned to share data (retrieval methods, accessibility, content, date). We also assessed whether the statement was made through the registry and/or through the published article (Additional files 2 and 3).

Completeness of reporting

Completeness of reporting of articles was assessed using a modified version of the CONSORT-based peer-review tool (COBPeer tool) checklist (Table 1), consisting of the 11 most important and most frequently incompletely reported CONSORT items [9, 10]. Each item comprises sub-items specifying what should be reported. For each sub-item, we assessed whether the information was reported: Yes/No/Non-assessable (NA). Each item was then rated as “completely reported” (i.e. all sub-items adequately reported), “partially reported” (i.e. at least one sub-item missing) or “not reported” (i.e. all sub-items missing). The overall trial reporting rate followed the same rule, using each item's final rating. If the primary outcome was not clearly defined in the full-text article, then, in order to evaluate CONSORT sub-item 6a “reporting of primary outcome” (Table 1), we chose the outcome used for the primary objective, or for the calculation of the sample size, or the first primary outcome listed in the registry. If a trial had several primary outcomes, we applied the same strategy to each primary outcome and rated the overall reporting.
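The three-level rating rule above is mechanical and can be sketched in code. The analyses in this study were performed in R; the following Python functions are only an illustrative re-expression of the rule, with hypothetical input encodings.

```python
def rate_item(subitem_answers):
    """Rate one CONSORT item from its sub-item assessments.

    subitem_answers: list of "Yes"/"No" judgements for the item's
    sub-items (sub-items judged non-assessable are excluded first).
    """
    reported = [answer == "Yes" for answer in subitem_answers]
    if all(reported):
        return "completely reported"   # every sub-item adequately reported
    if not any(reported):
        return "not reported"          # every sub-item missing
    return "partially reported"        # at least one sub-item missing


def rate_trial(item_ratings):
    """The overall trial rating follows the same rule, applied to the
    per-item ratings instead of the sub-item answers."""
    if all(r == "completely reported" for r in item_ratings):
        return "completely reported"
    if all(r == "not reported" for r in item_ratings):
        return "not reported"
    return "partially reported"
```

For example, a trial whose 11 items were all rated "completely reported" except one "partially reported" item would itself be rated "partially reported".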

Table 1 Modified version of the COBPeer tool [9]

Selective reporting

Retrospective registration on ClinicalTrials.gov

We assessed the percentage of trials with retrospective registration (trials that were registered after the trial start date).
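As a minimal sketch (in Python rather than the R used for the analyses), this check reduces to a comparison of two registry dates; the function and argument names below are hypothetical.

```python
from datetime import date

def is_retrospectively_registered(registration_date: date, start_date: date) -> bool:
    # A trial is retrospectively registered when its registry entry
    # was created after the trial's start date.
    return registration_date > start_date

# Hypothetical example: a trial registered about a year after it started.
is_retrospectively_registered(date(2019, 5, 1), date(2018, 3, 22))
```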

Primary outcome(s) switching

Identification of primary outcome(s) switching

We searched for primary outcome(s) switching between the published full-text article and the ClinicalTrials.gov registry (Table 1). Primary outcome(s) switching was defined as adding or removing a primary outcome, or changing its definition (including changing, adding or removing the time frame or metric). Combinations of more than one discrepancy were also considered (e.g. a change in definition resulting in adding/removing a primary outcome). If the two primary outcomes differed only because the registered primary outcome was more imprecise, we classified the trial as having “imprecise outcome registration” rather than primary outcome(s) switching. For the comparison of primary outcomes, we used the outcome(s) labelled “original primary outcome(s)” in the ClinicalTrials.gov registry rather than “current primary outcome(s)”, unless only a “current primary outcome(s)” was available. If the article did not mention a clear primary outcome, primary outcome(s) switching could not be assessed.

Evaluation of the effect of primary outcome(s) switching

We also determined whether primary outcome(s) switching favored significant primary outcomes, applying the following strategy. From the full-text article, we extracted p-values for all reported outcomes. We classified results according to statistical significance: results significantly supporting or refuting the study intervention (or one of the groups in multi-arm trials) (i.e., p < 0.05), results that did not reach statistical significance (i.e., p ≥ 0.05), or unclear results. The same classification was applied to equivalence or non-inferiority trials, according to the equivalence margin set. A discrepancy was considered to favor significant results when a new statistically significant efficacy primary outcome was introduced, or when a non-significant one was omitted or relabelled as secondary in the published article. We also considered a discrepancy positive when a new, statistically non-significant safety primary outcome was introduced in the published article (e.g. when the experimental arm had no more adverse effects than the comparator, even though the trial was not powered to show a difference). All other cases were considered negative discrepancies. The influence of some discrepancies could not be assessed because the article contained no results for the registered primary outcome or for the new primary outcome (e.g. no summary measure for the primary outcome was described in the article). Similarly, the influence of discrepancies for “imprecise outcome registration” could not be assessed. In these cases, the influence was considered “non-assessable”. Discrepancies were identified by one of us (A.P.) and confirmed by another author (I.B.).
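The classification rules above can be summarized as a short decision function. This Python sketch is only an illustration of the rules as described (the study did not automate this step); the argument names and string encodings are hypothetical.

```python
def classify_discrepancy(change, outcome_type, p_value):
    """Classify one primary-outcome discrepancy between registry and article.

    change: "added" for a new primary outcome in the article,
            "removed" for one omitted or relabelled as secondary.
    outcome_type: "efficacy" or "safety".
    p_value: the outcome's p-value in the article, or None when no
             result was reported (influence then non-assessable).
    """
    if p_value is None:
        return "non-assessable"
    significant = p_value < 0.05
    if outcome_type == "efficacy":
        # Adding a significant efficacy outcome, or dropping a
        # non-significant one, favors significant results.
        favors = (change == "added" and significant) or \
                 (change == "removed" and not significant)
    else:
        # A newly introduced, non-significant safety outcome also
        # counts as favoring the intervention.
        favors = change == "added" and not significant
    return "favors significance" if favors else "negative discrepancy"
```

For instance, a trial that registered overall survival but published a newly added, significant response-rate outcome would be classified as favoring significance.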

Statistical analysis

We used R software (RStudio version 1.2.5033) for all analyses. Binary results are given as percentages.

Results

Sample identification

A total of 101 RCTs fulfilled our eligibility criteria and were evaluated for transparency and selective reporting.

Our dataset with results extracted for each included trial has been uploaded on Zenodo and is accessible with the following link https://doi.org/10.5281/zenodo.5841651.

Transparency indicators

Access to trial documentation

About a third of trials in our sample (34, 34%) gave open access to the protocol (Table 2). It could be accessed through the published article (as a supplementary document or a reference to a separate article) and/or through the registry. In all cases the protocol was in English, and for 29 (29/34 = 85%) trials it was complete (i.e. not abbreviated or redacted). Finally, the statistical analysis plan was available for 32 (32%) trials.

Table 2 Main results for transparency indicators for the 101 trials in our study

Data sharing statement

In our sample, 25 (25%) trials were willing to share data (Table 2). Of these, 22 (22/25 = 88%) gave information on where data could be accessed, including two with free access on the National Center for Biotechnology Information (NCBI). Information on the time frame of availability was specified by six trials (6/25 = 24%). In addition, 8 (8/25 = 32%) trials agreed to share data only with researchers and 14 (14/25 = 56%) had additional limitations (e.g. subject to approval). For the one trial where the data sharing statement was available in both the registry and the publication, the information was consistent. Finally, two trials (2%) did not agree to share their datasets while agreeing to share the protocol and/or statistical analysis plan and/or informed consent form.

Completeness of reporting (Tables 1 and 2)

Overall, only five trials (5%) reported all selected CONSORT items completely. Main results are summarized in Table 2.

Reporting of primary outcome(s) measure(s) (item 6a)

The primary outcome was not clearly identified in 12 (12%) trials, so a primary outcome was chosen for evaluation (see Methods). For the 27 (27%) trials with partial reporting, the most frequently missing sub-items were the description of the outcome assessor (20/27, 74%) and the timing of outcome assessment (6/27, 22%).

Reporting of randomization and allocation concealment (items 8a and 9)

Most trials (73, 72%) specified the methods for the generation of the allocation sequence. The process of allocation concealment was reported by 53 (52%) trials.

Reporting of blinding (item 11)

Forty-seven (47%) trials were blinded. Among those, 32 (32/47 = 68%) described the blinding procedure completely. The most frequently missing sub-items were the description of how blinding was done (7/47 = 15% of trials) and of who was blinded (4/47 = 9%).

Reporting of participant flow (items 13a, 13b)

Eighty-one (80%) trials provided a complete description of the participant flow, as a diagram and/or in the main text. Trials with partial reporting most commonly failed to report the number of patients who discontinued the trial intervention (12/19 = 63%).

Reporting of trial results for the primary outcome(s) (item 17a)

Among the 42 (42%) trials with partial reporting of primary outcome results, the effect size (e.g. odds ratio, hazard ratio, mean difference) and its precision were not reported in 34 (34/42 = 81%) and 38 (38/42 = 90%) trials, respectively. A summary outcome result for each arm was missing in four (4/42 = 10%) trials.

Reporting of harms (item 19)

Most trials reported incomplete information on harms (49, 48%) or none at all (39, 39%). Among partial reports, the most frequently missing sub-items were the method of data collection (33/49 = 67%), the method of harms attribution (39/49 = 80%) and the timing of harms surveillance (24/49 = 49%). Among the 13 (13%) trials with complete reporting of harms, the information was extracted from the protocol in 11 cases.

Reporting of registration related information (item 23)

Most trials reported the trial registration number in the published paper (90, 89%).

Selective reporting

Retrospective registration

Among the 101 trials, 49 (49%) were retrospectively registered.

Primary outcome(s) switching

In our sample, 12 (12/101 = 12%) trials could not be assessed for primary outcome(s) switching in the absence of a clear primary outcome(s) in the article. Three (3/101 = 3%) trials had “imprecise outcome registration” and were not assessed for primary outcome(s) switching. Sixty-six (66/101 = 65%) trials reported primary outcome(s) as pre-defined in the trial registry and 23 (23/101 = 23%) trials had primary outcome(s) switching. Ten (10/23 = 43%) trials had an added or removed primary outcome between registration and publication, five (5/23 = 22%) trials had only a change of definition (including time frame or metric), and eight (8/23 = 35%) trials had a combination of a change of definition and an added or removed primary outcome. For the 23 trials with primary outcome(s) switching, the influence of the discrepancy could be evaluated in 16 (16/23 = 70%) trials. Among them, 6 (6/16 = 38%) trials had a discrepancy that favored statistically significant results. Finally, no trial with primary outcome(s) switching mentioned the change in the article.

Discussion

Our work is the first to study transparency and selective reporting in a large sample of completed and terminated RCTs studying CRC management.

Our results showed limited availability of trial protocols (34%), few positive data sharing statements (25%) and rare complete reporting (5%). Important items or sub-items that were often partially reported or not reported were harms, the description of allocation concealment, information on the individual assessing the outcome, and the number of patients who discontinued treatment in each trial arm. Moreover, almost half of the trials were retrospectively registered and 23% had primary outcome(s) switching.

It has been shown in previous works that research transparency is crucial to allow for the reproducibility of results and the constitution of a strong body of evidence [7, 8]. Data sharing is important to evaluate the quality of the primary research, which can impact future conclusions from reanalyses of trials or from evidence synthesis research such as meta-analyses [8, 11, 12]. It can also help detect selective reporting used to make a trial “more publishable” (e.g. selective reporting of positive results). Access to trial protocols has also been shown to help detect selective outcome reporting [5, 13]. With the same goal of transparency, the 2010 CONSORT statement provides a minimum set of recommendations for the reporting of RCTs. Previous works have shown that adherence of authors to these guidelines is low, which is in line with our findings [9,10,11,12]. Regarding selective reporting, about half of the trials in our work were retrospectively registered. This could mean either that registry data is not always of high quality [2] or that trials were deliberately registered after the trial start date (all evaluated trials in our sample had a study start date after 2004, when registration became mandatory). For the identification of primary outcome(s) switching, we used the “original registered primary outcome” rather than the “current registered primary outcome”, considering that an update of the primary outcome in the registry after the study start date should have been mentioned in the article. Twenty-three percent of trials in our work showed primary outcome(s) switching. Previous studies have also shown that outcome(s) switching between registry and publication is a frequent issue and that discrepancies sometimes favor statistically significant outcomes in the article [14, 15]. This was the case for 38% (6/16) of the concerned trials for which the influence of the discrepancy was assessable.
To our knowledge, similar work on assessment of transparency and selective reporting has not been performed for interventional research on other types of cancer. Therefore, it would also be interesting to perform this work in other frequent types of cancer, such as breast and prostate cancer, as well as in less frequent types of cancer, to see if prevalence impacts the overall reporting results.

Various tools have been developed to help with the reporting of clinical trials. For authors, an online writing aid tool for randomized trial reports, the CONSORT-based WEB tool (COBWEB tool), has helped improve completeness of reporting for RCTs [16]. Similarly, the COBPeer tool has been developed to support the peer-review process of RCTs [9]. Trained early career researchers using the COBPeer tool were more likely to detect inadequate reporting (incomplete reporting or a switch in primary outcome(s)) than the usual peer-review process used by journals [9]. Finally, reporting of RCTs can also be assessed through registries and not only through publications. For instance, one study found that trial results for RCTs studying drugs were more completely reported on the ClinicalTrials.gov registry than in the published articles [17]. Efforts need to focus not only on reducing discrepancies between reports in registries and publications, but also on improving the overall quality of reporting.

Our work has some limitations. First, by searching through a single registry we did not consider unregistered trials or trials registered elsewhere. Moreover, the information in registries is not always of high quality; as an example, we found four trials with imprecise outcome registration. Second, we evaluated only 101 full-text articles published over the past three years. We chose this time range for an optimal evaluation of recent articles, since the quality of reporting has improved over time following the publication of guidelines [18, 19]. Third, data sharing can raise several challenges, such as choosing the optimal sharing format and the cleaning and interpretation of the original data for reanalyses [8]. Furthermore, access to individual patient data is mainly useful for systematic reviews and, in particular, individual patient data meta-analyses [20]. Finally, the evaluation of reporting was mainly done by one reviewer and checked with a senior reviewer.

Conclusion

In conclusion, there is a lack of transparency in the evaluated RCTs, with only a few trials showing complete reporting of the selected CONSORT items and/or willingness to share data or study documentation. Selective reporting is also frequently encountered. There is room for improvement.

Availability of data and materials

The datasets used and analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

CRC:

Colorectal cancer

COBPeer tool:

CONSORT based peer-review tool

COBWEB tool:

CONSORT-based WEB tool

CONSORT:

Consolidated Standards of Reporting Trials

GCO:

World Health Organization Global Cancer Observatory

NA:

Non-assessable

RCTs:

Randomized controlled trials

US:

United States

References

  1. Vale C, Stewart L, Tierney J, UK Coordinating Committee for Cancer Research National Register of Cancer Trials. Trends in UK cancer trials: results from the UK Coordinating Committee for Cancer Research National Register of Cancer Trials. Br J Cancer. 2005;92:811–4.

  2. Viergever RF, Li K. Trends in global clinical trial registration: an analysis of numbers of registered clinical trials in different parts of the world from 2004 to 2013. BMJ Open. 2015;5:e008932.

  3. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet. 2009;374:86–9.

  4. Macleod MR, Michie S, Roberts I, Dirnagl U, Chalmers I, Ioannidis JPA, et al. Biomedical research: increasing value, reducing waste. Lancet. 2014;383:101–4.

  5. Glasziou P, Altman DG, Bossuyt P, Boutron I, Clarke M, Julious S, et al. Reducing waste from incomplete or unusable reports of biomedical research. Lancet. 2014;383:267–76.

  6. Ioannidis JPA, Greenland S, Hlatky MA, Khoury MJ, Macleod MR, Moher D, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–75.

  7. McNutt M. Reproducibility. Science. 2014;343:229.

  8. Bauchner H, Golub RM, Fontanarosa PB. Data sharing: an ethical and scientific imperative. JAMA. 2016;315:1237–9.

  9. Chauvin A, Ravaud P, Moher D, Schriger D, Hopewell S, Shanahan D, et al. Accuracy in detecting inadequate research reporting by early career peer reviewers using an online CONSORT-based peer-review tool (COBPeer) versus the usual peer-review process: a cross-sectional diagnostic study. BMC Med. 2019;17:205.

  10. Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg. 2012;10:28–55.

  11. Boutron I, Créquit P, Williams H, Meerpohl J, Craig JC, Ravaud P. Future of evidence ecosystem series: 1. Introduction-Evidence synthesis ecosystem needs dramatic change. J Clin Epidemiol. 2020;123:135–42.

  12. Ebrahim S, Sohani ZN, Montoya L, Agarwal A, Thorlund K, Mills EJ, et al. Reanalyses of randomized clinical trial data. JAMA. 2014;312:1024–32.

  13. Chan A-W, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–65.

  14. Chan A-W, Song F, Vickers A, Jefferson T, Dickersin K, Gøtzsche PC, et al. Increasing value and reducing waste: addressing inaccessible research. Lancet. 2014;383:257–66.

  15. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302:977–84.

  16. Barnes C, Boutron I, Giraudeau B, Porcher R, Altman DG, Ravaud P. Impact of an online writing aid tool for writing a randomized trial report: the COBWEB (Consort-based WEB tool) randomized controlled trial. BMC Med. 2015;13:221.

  17. Riveros C, Dechartres A, Perrodeau E, Haneef R, Boutron I, Ravaud P. Timing and completeness of trial results posted at ClinicalTrials.gov and published in journals. PLoS Med. 2013;10:e1001566.

  18. Dechartres A, Trinquart L, Atal I, Moher D, Dickersin K, Boutron I, et al. Evolution of poor reporting and inadequate methods over time in 20 920 randomised controlled trials included in Cochrane reviews: research on research study. BMJ. 2017;357:j2490.

  19. Moher D, Glasziou P, Chalmers I, Nasser M, Bossuyt PMM, Korevaar DA, et al. Increasing value and reducing waste in biomedical research: who's listening? Lancet. 2016;387:1573–86.

  20. Leahy J, O'Leary A, Afdhal N, Gray E, Milligan S, Wehmeyer MH, et al. The impact of individual patient data in a network meta-analysis: an investigation into parameter estimation and model selection. Res Synth Methods. 2018;9:441–69.

Acknowledgements

Not applicable.

Funding

Not applicable.

Author information

Authors and Affiliations

Authors

Contributions

P.R. and I.B. have contributed to the conceptualization of the manuscript. A.P. has done the formal analysis and writing of the original draft of the manuscript. A.P., I.B. and P.R. have contributed to the reviewing and editing of the manuscript. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Anna Pellat.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Search strategy and eligibility criteria.

Additional file 2.

Data extraction sheet.

Additional file 3.

Data sharing explanation sheet.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article


Cite this article

Pellat, A., Boutron, I. & Ravaud, P. Assessment of transparency and selective reporting of interventional trials studying colorectal cancer. BMC Cancer 22, 278 (2022). https://doi.org/10.1186/s12885-022-09334-5
