Most cited article - PubMed ID 38327122
Footprint of publication selection bias on meta-analyses in medicine, environmental sciences, psychology, and economics
Meta-analysis assigns more weight to studies with smaller standard errors to maximize the precision of the overall estimate. In observational settings, however, standard errors are shaped by methodological decisions. These decisions can interact with publication bias and p-hacking, potentially leading to spuriously precise results reported by primary studies. Here we show that such spurious precision undermines standard meta-analytic techniques, including inverse-variance weighting and bias corrections based on the funnel plot. Through simulations and large-scale empirical applications, we find that selection models do not resolve the issue. In some cases, a simple unweighted mean of reported estimates outperforms widely used correction methods. We introduce MAIVE (Meta-Analysis Instrumental Variable Estimator), an approach that reduces bias by using sample size as an instrument for reported precision. MAIVE offers a simple and robust solution for improving the reliability of meta-analyses in the presence of spurious precision.
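As a rough illustration of the instrumental-variable idea sketched in the abstract, the snippet below implements a two-stage regression in Python: reported variances are first regressed on inverse sample size, and the fitted variances then enter a PEESE-style regression whose intercept serves as the corrected mean effect. The function name, argument names, and use of statsmodels are illustrative choices, not the authors' reference implementation; this is a sketch of the core idea only.

```python
import numpy as np
import statsmodels.api as sm


def maive_sketch(effect, se, n):
    """Illustrative MAIVE-style estimate of the mean effect.

    effect, se, n: 1-D arrays of reported estimates, standard errors,
    and sample sizes from the primary studies.
    """
    variance = np.asarray(se, dtype=float) ** 2
    inv_n = 1.0 / np.asarray(n, dtype=float)

    # First stage: explain reported variances with inverse sample size,
    # using sample size as an instrument for reported precision.
    fitted_variance = sm.OLS(variance, sm.add_constant(inv_n)).fit().fittedvalues

    # Second stage (PEESE-style): regress the reported effects on the
    # fitted variances; the intercept is the corrected mean effect.
    second_stage = sm.OLS(np.asarray(effect, dtype=float),
                          sm.add_constant(fitted_variance)).fit()
    return second_stage.params[0]
```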
- MeSH terms: Humans; Meta-Analysis as Topic*; Computer Simulation; Observational Studies as Topic*; Publication Bias; Reproducibility of Results; Sample Size
- Publication type: Journal Article
The influential claim that most published results are false raised concerns about the trustworthiness and integrity of science. Since then, there have been numerous attempts to estimate the rate of false-positive results, but these have failed to settle the question empirically. Here we propose a new way to estimate the false positive risk and apply the method to the results of (randomized) clinical trials in top medical journals. Contrary to claims that most published results are false, we find that the traditional significance criterion of α = .05 produces a false positive risk of 13%. Adjusting α to .01 lowers the false positive risk to less than 5%. However, our method does provide clear evidence of publication bias that leads to inflated effect size estimates. These results provide a solid empirical foundation for evaluations of the trustworthiness of medical research.
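The abstract reports how the false positive risk shrinks when α is tightened from .05 to .01. The sketch below shows the underlying arithmetic using the textbook false-positive-risk formula; the paper's own estimation method is not spelled out in the abstract, and the prior probability and power values here are purely illustrative placeholders, not the authors' estimates.

```python
def false_positive_risk(alpha: float, power: float, prior: float) -> float:
    """Share of 'significant' findings that are actually false positives.

    alpha: significance threshold; power: probability of detecting a true
    effect; prior: pre-study probability that the tested effect is real.
    """
    false_positives = alpha * (1.0 - prior)  # null effects that cross alpha
    true_positives = power * prior           # real effects that cross alpha
    return false_positives / (false_positives + true_positives)


if __name__ == "__main__":
    # Illustrative values only (prior = 0.5, power = 0.8): tightening alpha
    # from .05 to .01 shrinks the false positive risk, mirroring the pattern
    # described in the abstract.
    for alpha in (0.05, 0.01):
        print(alpha, round(false_positive_risk(alpha, power=0.8, prior=0.5), 3))
```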
- MeSH terms: Biomedical Research*; Insufflation*; Periodicals as Topic*
- Publication type: Journal Article