What is it?
Many classic scientific studies that form the foundation of modern science have turned out to be difficult or impossible to replicate or reproduce.
The crisis has been particularly widely discussed in psychology and medicine. In 2017, in response to the crisis, 72 scientists and statisticians co-authored the paper "Redefine statistical significance", which proposes that in "fields where the threshold for defining statistical significance for new discoveries is P < 0.05", the threshold be changed "to P < 0.005". In 2019, more than 800 scientists signed a related Nature comment, "Scientists rise up against statistical significance", arguing against significance thresholds altogether.
One of the major interpretations of how the crisis comes about is my-side bias: choosing the evidence that supports one's own conclusion. Our cognition distorts facts in order to defend our stories and our ego and to create the most favorable picture of ourselves.
More information on the reason:
"There are many different techniques for collecting, interpreting, and analyzing facts, and different techniques often lead to different conclusions, which is why scientists disagree about the dangers of global warming, the benefits of supply-side economics, and the wisdom of low-carbohydrate diets. Good scientists deal with this complication by choosing the techniques they consider most appropriate and then accepting the conclusions that these techniques produce, regardless of what those conclusions might be. But bad scientists take advantage of this complication by choosing techniques that are especially likely to produce the conclusions they favor, thus allowing them to reach favored conclusions by way of supportive facts." – Daniel Gilbert, Stumbling On Happiness.
Daniel Kahneman at the end of the Shane Parrish podcast and on the Sam Harris podcast
Adam Grant on the Sam Harris podcast
"Scientists rise up against statistical significance", Nature link
"Redefine statistical significance" link
I suggested an idea that I call "daisy chain" replications, where a group of labs that agree on the phenomenon and agree that behavioral priming is real get together. Each lab picks its favorite result. The result of lab A is replicated by lab B, the result of B replicated by C, and so on.
One week later, the letter was leaked and published in Nature with an incendiary title: "Nobel Laureate tells social psychologists to clean up their act." I had naively failed to anticipate this outcome. Then all hell broke loose.
Believe it or not, I've been blamed for causing the replication crisis by attracting media attention to a minor problem. Some social psychologists have wondered about my motives for wanting to destroy social psychology with that letter, and I lost many friends.
The crisis provides ample evidence for the thesis that I'm developing today. People didn't change their minds. Social psychologists circled the wagons and developed a strong antipathy for the replicators. A President of the American Psychological Society called them "methodological terrorists," and another eminent psychologist suggested that people who have ideas of their own would not get involved in replications. There were essentially no takers for my suggestion that priming researchers should proactively replicate each other's work. This eventually convinced me that they did not have real confidence. They believed their findings were true, but they were not quite sure they could replicate them, and they didn't want to take the risk—another instance of belief perseverance.
Besides antagonizing social psychologists, I also managed to make myself unpopular among replicators when I published a paper on the etiquette of replication, which argued that replication should always be an adversarial collaboration. People argued that method sections should be sufficiently explicit to guarantee replicability without having to consult the author. I find this attitude shocking, just about as shocking as the defensiveness of priming researchers.
But none of this really matters. The crisis has been great for psychology. In terms of methodological progress, this has been the best decade in my lifetime. Standards have been tightened up, research is better, samples are larger. People pre-register their experimental plans and their plans for analysis.
“It’s possible around 50 percent of the published psychological literature fails upon retesting, but no one knows precisely the extent of the instability in the foundations of psychological science.”
"The replication crisis devastated psychology. This group is looking to rebuild it." https://www.vox.com/science-and-health/22360363/replication-crisis-psychological-science-accelerator
I haven't read about this extensively, but from talking to people working on it, the following may be one of the most common mechanisms.
A scientist conducts a very large study to test the hypothesis that X causes Y. They find that X caused Y very rarely, at roughly the same rate as X caused A, C, D, … O, P (and 121 other outcomes). But because the study was very large, out of 13,241 possible outcomes F happened a lot. The scientist then tweaks the title of the paper to "X causes F". This is the multiple-comparisons problem: test enough outcomes and some will look significant by chance alone.
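To make the mechanism concrete, here is a minimal simulation. This is my own illustration, not something from the sources above; the 13,241 figure is just the anecdote's number, and under the null hypothesis p-values are uniformly distributed, so the expected count of spurious hits is simply the number of tests times the threshold.

```python
# Toy simulation of the multiple-comparisons problem described above.
# With no true effect anywhere, each outcome's p-value is Uniform(0, 1),
# so testing thousands of outcomes at p < 0.05 practically guarantees
# hundreds of "significant" findings by chance alone.
import numpy as np

rng = np.random.default_rng(0)

n_outcomes = 13_241   # number of outcomes checked (from the anecdote)
alpha = 0.05          # conventional significance threshold

p_values = rng.uniform(0.0, 1.0, size=n_outcomes)

print("spurious 'significant' outcomes:", int((p_values < alpha).sum()))
print("expected by chance:", round(n_outcomes * alpha))
# The stricter threshold from "Redefine statistical significance" cuts
# the expected false positives tenfold but does not eliminate them.
print("at the stricter p < 0.005:", int((p_values < 0.005).sum()))
```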
I also think that scientists may subconsciously search for study conditions that will prove their revolutionary findings.
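One concrete version of such searching is optional stopping: testing after every batch of participants and stopping as soon as the result is significant. The sketch below is my own illustration with made-up batch sizes and sample limits, but it shows how this habit inflates the false-positive rate well above the nominal 5% even when there is nothing to find.

```python
# Toy simulation of optional stopping on a nonexistent effect.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def study_with_optional_stopping(batch=10, max_n=200, alpha=0.05):
    """True if a study of a nonexistent effect ever reaches 'significance'."""
    control, treatment = [], []
    while len(control) < max_n:
        # Both groups come from the same distribution: no real effect.
        control.extend(rng.normal(0.0, 1.0, batch))
        treatment.extend(rng.normal(0.0, 1.0, batch))
        _, p = stats.ttest_ind(control, treatment)
        if p < alpha:
            return True   # stop early and report the "finding"
    return False

runs = 2_000
hits = sum(study_with_optional_stopping() for _ in range(runs))
print(f"false-positive rate with optional stopping: {hits / runs:.1%}")
# Several times the nominal 5%, even though nothing real is measured.
```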