Estimating the false discovery risk of (randomized) clinical trials in medical journals based on published p-values
Language: English; Country: United States; Media: electronic-eCollection
Document type: Journal Article
PubMed: 37647247
PubMed Central: PMC10468063
DOI: 10.1371/journal.pone.0290084
PII: PONE-D-22-04565
Knihovny.cz E-resources
MeSH terms:
- Biomedical Research *
- Insufflation *
- Periodicals as Topic *
Publication type:
- Journal Article
The influential claim that most published results are false raised concerns about the trustworthiness and integrity of science. Numerous attempts to examine the rate of false-positive results have since failed to settle this question empirically. Here we propose a new way to estimate the false positive risk and apply the method to the results of (randomized) clinical trials in top medical journals. Contrary to claims that most published results are false, we find that the traditional significance criterion of α = .05 produces a false positive risk of 13%. Adjusting α to .01 lowers the false positive risk to less than 5%. However, our method does provide clear evidence of publication bias that leads to inflated effect size estimates. These results provide a solid empirical foundation for evaluations of the trustworthiness of medical research.
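The relationship between significance threshold and false positive risk sketched in the abstract can be illustrated with Sorić's (1989) upper bound, FDR ≤ (1/DR − 1) · α/(1 − α), where DR is the discovery rate (the share of tests that come out significant). A minimal sketch follows; the 30% discovery rate is an illustrative placeholder, not a figure taken from the paper, and the discovery rate is held fixed across thresholds purely for illustration.

```python
# Sorić's (1989) upper bound on the false discovery risk:
#   FDR_max = (1/DR - 1) * alpha / (1 - alpha)
# where DR is the discovery rate (share of significant tests)
# and alpha is the significance threshold.

def soric_fdr(discovery_rate: float, alpha: float = 0.05) -> float:
    """Maximum false discovery risk implied by a given discovery rate."""
    if not 0 < discovery_rate <= 1:
        raise ValueError("discovery_rate must be in (0, 1]")
    return (1 / discovery_rate - 1) * alpha / (1 - alpha)

# A hypothetical discovery rate of 30% yields a false discovery risk
# bound of about 12% at alpha = .05, close to the 13% reported above;
# tightening alpha to .01 (discovery rate held fixed for illustration)
# shrinks the bound to roughly 2%.
print(round(soric_fdr(0.30, alpha=0.05), 3))  # ~0.123
print(round(soric_fdr(0.30, alpha=0.01), 3))  # ~0.024
```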
Department of Psychological Methods University of Amsterdam Amsterdam The Netherlands
Department of Psychology University of Toronto Mississauga Mississauga Canada
Institute of Computer Science Czech Academy of Sciences Prague Czech Republic
See more in PubMed
Baker M. Reproducibility crisis. Nature. 2016;533(26):353–66. PubMed
Fanelli D. Opinion: Is science really facing a reproducibility crisis, and do we need it to? Proceedings of the National Academy of Sciences. 2018;115(11):2628–2631. doi: 10.1073/pnas.1708272114 PubMed DOI PMC
Ioannidis JP. Why most published research findings are false. PLoS Medicine. 2005;2(8):e124. doi: 10.1371/journal.pmed.0020124 PubMed DOI PMC
Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349(6251). doi: 10.1126/science.aac4716 PubMed
Camerer CF, Dreber A, Forsell E, Ho TH, Huber J, Johannesson M, et al. Evaluating replicability of laboratory experiments in economics. Science. 2016;351(6280):1433–1436. doi: 10.1126/science.aaf0918 PubMed DOI
Camerer CF, Dreber A, Holzmeister F, Ho TH, Huber J, Johannesson M, et al. Evaluating the replicability of social science experiments in Nature and Science between 2010 and 2015. Nature Human Behaviour. 2018;2(9):637–644. doi: 10.1038/s41562-018-0399-z PubMed DOI
Coyne JC. Replication initiatives will not salvage the trustworthiness of psychology. BMC Psychology. 2016;4(1):1–11. doi: 10.1186/s40359-016-0134-3 PubMed DOI PMC
Fanelli D, Costas R, Ioannidis JP. Meta-assessment of bias in science. Proceedings of the National Academy of Sciences. 2017;114(14):3714–3719. doi: 10.1073/pnas.1618569114 PubMed DOI PMC
Wooditch A, Fisher R, Wu X, Johnson NJ. P-value problems? An examination of evidential value in criminology. Journal of Quantitative Criminology. 2020;36(2):305–328. doi: 10.1007/s10940-020-09459-5 DOI
Barnes J, TenEyck MF, Pratt TC, Cullen FT. How powerful is the evidence in criminology? On whether we should fear a coming crisis of confidence. Justice Quarterly. 2020;37(3):383–409. doi: 10.1080/07418825.2018.1495252 DOI
Nuijten MB, van Assen MA, Augusteijn HE, Crompvoets EA, Wicherts JM. Effect sizes, power, and biases in intelligence research: a meta-meta-analysis. Journal of Intelligence. 2020;8(4):36. doi: 10.3390/jintelligence8040036 PubMed DOI PMC
Stanley TD, Carter EC, Doucouliagos H. What meta-analyses reveal about the replicability of psychological research. Psychological Bulletin. 2018;144(12):1325–1346. doi: 10.1037/bul0000169 PubMed DOI
Ioannidis JPA, Stanley TD, Doucouliagos H. The power of bias in economics research. The Economic Journal. 2017;127(605):F236–F265. doi: 10.1111/ecoj.12461 DOI
Van Aert RC, Wicherts JM, Van Assen MA. Publication bias examined in meta-analyses from psychology and medicine: A meta-meta-analysis. PloS One. 2019;14(4):e0215052. doi: 10.1371/journal.pone.0215052 PubMed DOI PMC
Mathur MB, VanderWeele TJ. Estimating publication bias in meta-analyses of peer-reviewed studies: A meta-meta-analysis across disciplines and journal tiers. Research Synthesis Methods. 2021;12(2):176–191. doi: 10.1002/jrsm.1464 PubMed DOI PMC
Lamberink HJ, Otte WM, Sinke MR, Lakens D, Glasziou PP, Tijdink JK, et al. Statistical power of clinical trials increased while effect size remained stable: an empirical analysis of 136,212 clinical trials between 1975 and 2014. Journal of Clinical Epidemiology. 2018;102:123–128. doi: 10.1016/j.jclinepi.2018.06.014 PubMed DOI
Bartoš F, Gronau QF, Timmers B, Otte WM, Ly A, Wagenmakers EJ. Bayesian model-averaged meta-analysis in medicine. Statistics in Medicine. 2021;40(30):6743–6761. doi: 10.1002/sim.9170 PubMed DOI PMC
Bartoš F, Maier M, Wagenmakers EJ, Nippold F, Doucouliagos H, Ioannidis JPA, et al. Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics; 2022. Available from: https://arxiv.org/abs/2208.12334. PubMed
Jager LR, Leek JT. An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics. 2014;15(1):1–12. doi: 10.1093/biostatistics/kxt038 PubMed DOI
Goodman SN. Discussion: An estimate of the science-wise false discovery rate and application to the top medical literature. Biostatistics. 2014;15(1):13–16. doi: 10.1093/biostatistics/kxt035 PubMed DOI
Gelman A, O’Rourke K. Difficulties in making inferences about scientific truth from distributions of published p-values. Biostatistics. 2014;15(1):18–23. doi: 10.1093/biostatistics/kxt034 PubMed DOI
Benjamini Y, Hechtlinger Y. Discussion: An estimate of the science-wise false discovery rate and applications to top medical journals by Jager and Leek. Biostatistics. 2014;15(1):13–16. doi: 10.1093/biostatistics/kxt032 PubMed DOI
Ioannidis JP. Discussion: Why “An estimate of the science-wise false discovery rate and application to the top medical literature” is false. Biostatistics. 2014;15(1):28–36. doi: 10.1093/biostatistics/kxt036 PubMed DOI
Sorić B. Statistical “discoveries” and effect-size estimation. Journal of the American Statistical Association. 1989;84(406):608–610. doi: 10.1080/01621459.1989.10478811 DOI
Brunner J, Schimmack U. Estimating population mean power under conditions of heterogeneity and selection for significance. Meta-Psychology. 2020;4. doi: 10.15626/MP.2018.874 DOI
Bartoš F, Schimmack U. Z-curve 2.0: Estimating replication rates and discovery rates. Meta-Psychology. 2022;6:1–14.
Bartoš F, Maier M, Wagenmakers EJ, Doucouliagos H, Stanley TD. Robust Bayesian meta-analysis: Model-averaging across complementary publication bias adjustment methods. Research Synthesis Methods. 2022;14(1):99–116. doi: 10.1002/jrsm.1594 PubMed DOI PMC
Held L, Micheloud C, Pawel S. The assessment of replication success based on relative effect size. The Annals of Applied Statistics. 2022;16(2):706–720. doi: 10.1214/21-AOAS1502 DOI
Pawel S, Held L. The sceptical Bayes factor for the assessment of replication success. Journal of the Royal Statistical Society: Series B (Statistical Methodology). 2022;84(3):879–911. doi: 10.1111/rssb.12491 DOI
Ly A, Etz A, Marsman M, Wagenmakers EJ. Replication Bayes factors from evidence updating. Behavior Research Methods. 2019;51(6):2498–2508. doi: 10.3758/s13428-018-1092-x PubMed DOI PMC