Plain English Summary
This blockbuster paper (over 35,000 citations) argued that most published scientific results are wrong, kicking off the "replication crisis." Using straightforward math, Ioannidis showed that a finding's trustworthiness depends on study size, effect strength, analytical wiggle room, and how plausible the hypothesis was beforehand. The uncomfortable punchline? Well-designed clinical trials get it right about 85% of the time, but underpowered exploratory studies are true only 12-23% of the time. This framework became the go-to toolkit for skeptics questioning psi research, since psi effects tend to be small with limited samples: exactly the recipe Ioannidis warns about.
Research Notes
Landmark paper (35,000+ citations) that launched the replication crisis. Provides the mathematical framework skeptics invoke to challenge psi claims: small samples, small effects (d=0.1-0.3), analytical flexibility, and belief-driven bias all push PPV toward zero. Directly relevant to every methodology debate in this library.
Mathematical modeling using 2x2 contingency tables shows why, under common conditions, most published research findings are false. Positive predictive value (PPV) depends on statistical power (1 − β), pre-study odds (R), Type I error rate (α), bias (u), and the number of competing teams (n). The core formula, PPV = (1 − β)R / (R − βR + α), shows that a finding is more likely true than false only when (1 − β)R > α, i.e., when power times pre-study odds exceeds the significance threshold (0.05). Six corollaries identify conditions that reduce PPV: small studies, small effects, multiple testing, analytical flexibility, conflicts of interest, and competitive fields. Worked scenarios show adequately powered RCTs achieve PPV = 85%, underpowered exploratory research achieves 12-23%, and discovery-oriented genomics achieves < 0.2%.
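The quoted PPVs can be checked numerically. The sketch below implements the paper's bias-adjusted formula, PPV = ([1 − β]R + uβR) / (R + α − βR + u − uα + uβR); the parameter choices (power, pre-study odds R, bias u) are illustrative values that reproduce the numbers cited above, not the only scenarios the paper considers:

```python
def ppv(power, R, alpha=0.05, u=0.0):
    """Positive predictive value of a claimed research finding (Ioannidis 2005).

    power: 1 - beta, the study's statistical power
    R:     pre-study odds that the probed relationship is true
    alpha: Type I error rate
    u:     bias, the fraction of would-be null results reported as positive
    """
    beta = 1.0 - power
    numerator = power * R + u * beta * R
    denominator = R + alpha - beta * R + u - u * alpha + u * beta * R
    return numerator / denominator

# Illustrative scenarios reproducing the PPVs quoted above:
print(round(ppv(0.80, 1.0, u=0.10), 2))  # adequately powered RCT, 1:1 odds  -> 0.85
print(round(ppv(0.20, 0.2, u=0.20), 2))  # underpowered study, 1:5 odds      -> 0.23
print(round(ppv(0.20, 0.1, u=0.30), 2))  # exploratory study, 1:10 odds      -> 0.12
```

Note that with u = 0 the expression collapses to the core formula (1 − β)R / (R − βR + α); even modest bias drags PPV down sharply when pre-study odds are low.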
Links
Related Papers
Cited By
- Why Psychologists Must Change the Way They Analyze Their Data: The Case of Psi – Wagenmakers, Eric-Jan (2011)
- Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling – John, Leslie K (2012)
- Power failure: why small sample size undermines the reliability of neuroscience – Button, Katherine S (2013)
- Can Parapsychology Move Beyond the Controversies of Retrospective Meta-Analyses? – Kennedy, J.E (2013)
- Bayesian and Classical Hypothesis Testing: Practical Differences for a Controversial Area of Research – Kennedy, J.E (2014)
- Testing for Questionable Research Practices in a Meta-Analysis: An Example from Experimental Parapsychology – Bierman, Dick J (2016)
- Why Most Research Findings About Psi Are False: The Replicability Crisis, the Psi Paradox and the Myth of Sisyphus – Rabeyron, Thomas (2020)
- Evidence for Anomalistic Correlations Between Human Behavior and a Random Event Generator: Result of an Independent Replication of a Micro-PK Experiment – Walach, Harald (2020)
- Planning Falsifiable Confirmatory Research – Kennedy, James E (2024)
- An Agenda for Purely Confirmatory Research – Wagenmakers, Eric-Jan (2012)
- Why Science Is Not Necessarily Self-Correcting – Ioannidis, John P.A (2012)
Companion
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant – Simmons, Joseph P (2011)
- Estimating the Reproducibility of Psychological Science – Open Science Collaboration (2015)
- The Garden of Forking Paths: Why Multiple Comparisons Can Be a Problem, Even When There Is No "Fishing Expedition" or "P-Hacking" and the Research Hypothesis Was Posited Ahead of Time – Gelman, Andrew (2013)
- Commentary: Reproducibility in Psychological Science: When Do Psychological Phenomena Exist? – Heino, Matti T. J (2017)
- The "File Drawer Problem" and Tolerance for Null Results – Rosenthal, Robert (1979)
More in Methodology
Paranormal belief, conspiracy endorsement, and positive wellbeing: a network analysis
Addressing Researcher Fraud: Retrospective, Real-Time, and Preventive Strategies β Including Legal Points and Data Management That Prevents Fraud
Quantum Aspects of the Brain-Mind Relationship: A Hypothesis with Supporting Evidence
Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research
Experimental evidence of non-classical brain functions
Cite this paper
Ioannidis, John P.A (2005). Why Most Published Research Findings Are False. PLoS Medicine. https://doi.org/10.1371/journal.pmed.0020124
@article{ioannidis_2005_false,
title = {Why Most Published Research Findings Are False},
author = {Ioannidis, John P.A},
year = {2005},
journal = {PLoS Medicine},
doi = {10.1371/journal.pmed.0020124},
}