Why Science Is Not Necessarily Self-Correcting
Plain English Summary
We like to think science fixes its own mistakes, but does it really? Ioannidis digs into the numbers and the picture is grim: only about 1% of psychology papers even try to repeat earlier work, and a tiny fraction of those are truly independent do-overs. With most studies running on weak statistical power (around 35%), somewhere between 30% and 95% of published 'significant' findings could simply be wrong and nobody's checking. He identifies thirteen roadblocks to self-correction, from journals refusing to publish replication attempts to researchers cherry-picking their best-looking results. This paper became a touchstone for both critics and supporters of parapsychology research, since both sides point to these same problems to argue their case. Fixes exist, but each comes with trade-offs.
Research Notes
Foundational metascience paper for evaluating psi research credibility. The impediments Ioannidis catalogues (low power, rare replication, publication bias, allegiance bias) are the same ones debated in parapsychology. Both critics (applying these to psi claims) and proponents (applying them to mainstream null results) invoke this analysis.
Self-correction is widely assumed to be a defining hallmark of science, but how often does it actually occur? Reviewing empirical evidence from psychology and biomedicine, Ioannidis argues that self-correction requires active replication effort, yet only ~1% of psychology papers are replications, fewer than 0.2% are independent direct replications, and most yield confirming results (perpetuated fallacies). With average power of 35% and modest bias, unchallenged fallacies may constitute 30–95% of published significant findings. A taxonomy of six discovery–replication paradigms quantifies the problem. Thirteen impediments to self-correction are catalogued, including publication bias, selective reporting, underpowered studies, and editorial bias against replication. Proposed reforms each carry unintended risks unless pursuit of truth remains the overriding priority.
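The 30–95% range follows from the positive-predictive-value arithmetic familiar from Ioannidis's earlier "Why Most Published Research Findings Are False" (2005). A minimal sketch of that arithmetic is below; the function name, default values, and parameterization are illustrative choices, not taken from this paper:

```python
def false_finding_rate(power, alpha=0.05, prior_true=0.5, bias=0.0):
    """Fraction of published 'significant' results that are false.

    power      : probability a true effect reaches significance (1 - beta)
    alpha      : type I error rate
    prior_true : fraction of tested hypotheses that are actually true
    bias       : fraction of otherwise non-significant results that get
                 reported as significant via selective analysis/reporting
    """
    # Significant results from true hypotheses (detected, or promoted by bias)
    true_pos = prior_true * (power + bias * (1 - power))
    # Significant results from false hypotheses (type I errors, or bias)
    false_pos = (1 - prior_true) * (alpha + bias * (1 - alpha))
    return false_pos / (true_pos + false_pos)

# At 35% power with no bias and even prior odds, ~12.5% of significant
# findings are false; with modest bias and long-shot hypotheses the
# figure climbs past 80%, spanning the range the abstract cites.
print(false_finding_rate(0.35))                              # ~0.125
print(false_finding_rate(0.35, prior_true=0.1, bias=0.3))    # ~0.85
```

The point of the exercise is that no single parameter drives the result: low power, reporting bias, and a low prior probability of tested hypotheses compound multiplicatively, which is why the plausible range is so wide.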
Links
Related Papers
Cites
- Why Most Published Research Findings Are False – Ioannidis, John P.A (2005)
- How Many Scientists Fabricate and Falsify Research? A Systematic Review and Meta-Analysis of Survey Data – Fanelli, Daniele (2009)
- The "File Drawer Problem" and Tolerance for Null Results – Rosenthal, Robert (1979)
- False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant – Simmons, Joseph P (2011)
- Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability – Nosek, Brian A (2012)
- Measuring the Prevalence of Questionable Research Practices With Incentives for Truth Telling – John, Leslie K (2012)
- Editors' Introduction to the Special Section on Replicability in Psychological Science: A Crisis of Confidence? – Pashler, Harold (2012)
- An Agenda for Purely Confirmatory Research – Wagenmakers, Eric-Jan (2012)
Companion
- Theoretical Risks and Tabular Asterisks: Sir Karl, Sir Ronald, and the Slow Progress of Soft Psychology – Meehl, Paul E (1978)
- Small Telescopes: Detectability and the Evaluation of Replication Results – Simonsohn, Uri (2015)
- Options for Prospective Meta-Analysis and Introduction of Registration-Based Prospective Meta-Analysis – Watt, Caroline A (2017)
- Scientists behaving badly – Martinson, Brian C (2005)
- Scientific Utopia: II. Restructuring Incentives and Practices to Promote Truth Over Publishability – Nosek, Brian A (2012)
More in Methodology
Paranormal belief, conspiracy endorsement, and positive wellbeing: a network analysis
Planning Falsifiable Confirmatory Research
Addressing Researcher Fraud: Retrospective, Real-Time, and Preventive Strategies β Including Legal Points and Data Management That Prevents Fraud
Quantum Aspects of the Brain-Mind Relationship: A Hypothesis with Supporting Evidence
Paranormal beliefs and cognitive function: A systematic review and assessment of study quality across four decades of research
Cite this paper
Ioannidis, John P.A (2012). Why Science Is Not Necessarily Self-Correcting. Perspectives on Psychological Science. https://doi.org/10.1177/1745691612464056
@article{ioannidis_2012_science_not_self_correcting,
title = {Why Science Is Not Necessarily Self-Correcting},
author = {Ioannidis, John P.A},
year = {2012},
journal = {Perspectives on Psychological Science},
doi = {10.1177/1745691612464056},
}