Meta-Research on Reliability

An intuitive way to measure the “reliability” of a research conclusion is to replicate the study and confirm that the same positive effect is observed. Research conclusions are typically accepted at a 5% significance level (i.e., a 95% confidence level), which caps the expected rate of “false positives”: results that report an effect where none actually exists. On this reading, if published research were replicated, only about 5% of the replications should fail to show the positive effect.
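The 5% threshold has a simple statistical interpretation: when no real effect exists, p-values are uniformly distributed, so roughly 5% of null experiments will still fall below a 0.05 cutoff by chance. A minimal simulation sketch (the variable names and the 100,000-trial count are illustrative, not from the source):

```python
import random

random.seed(42)

# Under a true null hypothesis, p-values are uniform on [0, 1],
# so the fraction below alpha = 0.05 is the false positive rate.
alpha = 0.05
trials = 100_000
false_positives = sum(1 for _ in range(trials) if random.random() < alpha)
rate = false_positives / trials
print(f"simulated false positive rate: {rate:.3f}")  # close to 0.05
```

Running this shows the simulated rate hovering near 5%, matching the nominal significance level.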

In recent years, several meta-studies have replicated hundreds of research projects in the social sciences in order to test the reliability of research conclusions in general. These meta-studies selected experimental research from leading peer-reviewed academic journals for replication, since such experiments tend to be thoroughly designed and carefully implemented.

Based on the expected reliability of the conclusions (a false positive rate of 5%), no more than 5% of the replicated projects should have failed to show the same effect as the original, published research. In actuality, the percentage of projects that failed to show the expected effect ranged from 24% to 70%, with a consensus average of 45%. In other words, the observed failure rate of 45% was nine times the expected rate of 5%.
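The comparison above is straightforward arithmetic; a small sketch using the figures quoted in the text (variable names are illustrative):

```python
# Figures from the meta-studies discussed above, as percentages.
expected_failure_pct = 5       # implied by a 5% false positive rate
observed_failure_pct = 45      # consensus average across meta-studies
observed_range_pct = (24, 70)  # low and high across individual meta-studies

# Ratio of observed to expected replication failures.
ratio = observed_failure_pct / expected_failure_pct
print(f"observed failure rate is {ratio:.0f}x the expected rate")
```

The ratio of 9 is what motivates the concern: the gap is not a rounding artifact but nearly an order of magnitude.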