A central tenet of science is the requirement that scientific discoveries be replicable (sometimes also called “reproducible”). An important scientific finding should be supported by further evidence from other studies by other researchers, preferably in other laboratories, or otherwise be refuted.
The problem: In recent years, leading scientists in their respective fields have raised alarms that the replicability of discoveries is “deteriorating”, to the point that an article in The New Yorker carried the subtitle “Is something wrong with the scientific method?”
The cause: The lack of replicability is too often attributed to the ethics and sociology of science: potential conflicts of interest, competition among researchers, exaggeration of initial results and even fraud, publication bias toward discoveries, and the avoidance of publishing negative results. However, it is our conviction that a large part of the problem has deeper roots: the industrialization of scientific work produces very large experiments and huge databases. Mining these databases for promising discoveries is prone to yield many false ones, unless the underlying methodological and statistical problems, especially selective inference and the choice of the appropriate measure of variability, are specifically addressed.
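The false-discovery phenomenon described above can be illustrated with a small simulation. The sketch below (a hypothetical toy example, not a description of any specific study) tests 1,000 hypotheses that are all truly null: naively selecting those with p &lt; 0.05 still yields dozens of “discoveries”, while a standard selective-inference adjustment, here the Benjamini–Hochberg step-up procedure for controlling the false discovery rate, filters out essentially all of them.

```python
import numpy as np

rng = np.random.default_rng(0)
m = 1000  # number of hypotheses tested, all truly null

# Under the null hypothesis, p-values are uniform on [0, 1]
pvals = rng.uniform(size=m)

# Naive selection: report every hypothesis with p < 0.05.
# Since every null is true, all of these are false discoveries.
naive_discoveries = int((pvals < 0.05).sum())

# Benjamini-Hochberg step-up procedure at FDR level q = 0.05:
# sort the p-values and find the largest k with p_(k) <= (k/m) * q,
# then reject the k smallest p-values.
q = 0.05
order = np.sort(pvals)
thresholds = q * np.arange(1, m + 1) / m
passing = np.nonzero(order <= thresholds)[0]
bh_discoveries = int(passing[-1] + 1) if passing.size else 0

print(f"naive discoveries (all false): {naive_discoveries}")
print(f"BH-adjusted discoveries:       {bh_discoveries}")
```

With purely null data, the naive count hovers around 0.05 × 1000 = 50, whereas the adjusted count is typically zero: the adjustment is what stands between large-scale data mining and a flood of unreplicable findings.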
Our research group strives to address the methodological roots of the replicability problem, by identifying and understanding the problems involved across a variety of fields and by developing the statistical means to address them. You are invited to explore our ongoing work on replicability in various fields, and to contact us with your questions and ideas for collaboration.