Solving the Positive Results Bias

One of the biggest problems facing science is that it’s done by us mere humans. We’re highly fallible and, as a result, science is vulnerable to our long list of biases. To some extent the scientific method, as a collective activity, has gradually evolved to shield itself against these individual-level biases. For instance, the notion of generating and testing hypotheses through a standardised set of methodological procedures allows us to bypass reliance on folk wisdom and human intuition. This is most evident in scientific achievements that subvert common beliefs and generate completely counter-intuitive explanations.

Still, the scientific process has plenty of room for improvement, with scientists coming equipped with problematic dispositions such as confirmation bias: a tendency, largely subconscious, for an individual to confirm their expectations and the hypotheses they test. Writ large, confirmation bias has one clear consequence: an excess of reported positive results. It’s not just researchers who are to blame — editors and pharmaceutical companies are also implicated in this pressure for interesting, profitable and positive results at the expense of the much maligned negative.
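The mechanics of this filter are easy to demonstrate with a toy simulation (the effect size, sample size, and significance rule below are arbitrary choices of mine, not from any of the cited papers): simulate many small two-group studies of a true but modest effect, “publish” only those crossing p < 0.05, and the published effect sizes come out systematically inflated.

```python
import random
import statistics

random.seed(42)

def run_study(true_effect=0.2, n=20):
    """One two-group study: return the observed mean difference and
    whether it crosses a crude z > 1.96 significance threshold."""
    treatment = [random.gauss(true_effect, 1) for _ in range(n)]
    control = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(treatment) - statistics.mean(control)
    se = (statistics.stdev(treatment) ** 2 / n
          + statistics.stdev(control) ** 2 / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # roughly p < 0.05, two-sided

results = [run_study() for _ in range(5000)]
all_effects = [d for d, _ in results]
published = [d for d, sig in results if sig]  # the positive-results filter

print(f"mean effect, all studies:      {statistics.mean(all_effects):.2f}")
print(f"mean effect, 'published' only: {statistics.mean(published):.2f}")
```

With these settings the full set of studies averages close to the true effect of 0.2, while the “published” subset averages several times higher — the bias comes purely from what gets through the filter, with no fraud required.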

Many tools (e.g. funnel plots) and publications (e.g. the Journal of Negative Results) exist that attempt to counter the positive results bias. Still, the fact of the matter is that a large number of published research findings are false. This is especially relevant for the softer sciences: papers in fields such as psychology and economics are approximately five times more likely to report a positive result than those in, say, Space Science (Fanelli, 2010). Ioannidis (2005) offered six corollaries about the probability that a research finding is indeed true:

Corollary 1: The smaller the studies conducted in a scientific field, the less likely the research findings are to be true.

Corollary 2: The smaller the effect sizes in a scientific field, the less likely the research findings are to be true.

Corollary 3: The greater the number and the lesser the selection of tested relationships in a scientific field, the less likely the research findings are to be true.

Corollary 4: The greater the flexibility in designs, definitions, outcomes, and analytical modes in a scientific field, the less likely the research findings are to be true.

Corollary 5: The greater the financial and other interests and prejudices in a scientific field, the less likely the research findings are to be true.

Corollary 6: The hotter a scientific field (with more scientific teams involved), the less likely the research findings are to be true.
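These corollaries fall out of a simple formula in Ioannidis (2005): with pre-study odds R that a tested relationship is true, power 1 − β, and significance level α, the post-study probability that a claimed positive finding is true (the positive predictive value) is (1 − β)R / ((1 − β)R + α). A minimal sketch, with the two scenarios being illustrative numbers of my own choosing:

```python
def ppv(prior_odds, power, alpha):
    """Positive predictive value of a claimed finding (Ioannidis, 2005).

    prior_odds: R, the pre-study odds that the tested relationship is true.
    power:      1 - beta, the probability of detecting a true relationship.
    alpha:      the significance threshold (type I error rate).
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# A well-powered study of a plausible hypothesis:
print(round(ppv(prior_odds=1.0, power=0.8, alpha=0.05), 3))  # 0.941

# A small, underpowered study of a long-shot hypothesis (Corollaries 1-3):
print(round(ppv(prior_odds=0.1, power=0.2, alpha=0.05), 3))  # 0.286
```

The second case is the troubling one: a statistically significant result from an underpowered study of an unlikely hypothesis is more likely false than true, which is exactly the regime the corollaries describe.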

An easily implementable solution would be to decouple the methodology from the results: that is, the researcher publishes only their background literature and methodology, which is then scrutinised and churned over in a manner similar to post-publication peer review. If the peer-review stage initially focuses on the methodology alone, you remove any temptation to publish, or for that matter submit, on the basis of what results a study produced. It also places greater emphasis on independent confirmation of results (as the methodology is there for anyone to test before the results are published) and offers protection from people ripping off your work (e.g. some researchers don’t seem to be too fond of citing ideas they found on blogs).

There’s an obvious problem: who in the hell would want to publish just their methodology? Part of the effort would be to shift some of the glory from generating interesting results onto generating interesting ideas to test. Still, in an initial implementation, I’m sure plenty of young academics would find this approach useful: it would allow them to develop important methodological skills and help foster an approach to how science should be done. Also, and perhaps more importantly, it offers protection for ideas: I’m sure there are numerous instances where, even for well-established academics, there simply isn’t enough time, or money, to test a particular hypothesis brewing in your mind. It would be great if you could dedicate time to coming up with a really interesting idea that someone else, with complementary skills and resources, could then test. Below is a conceptual diagram I came up with to provide a basic outline:

At stage one, the method and hypotheses are published and undergo post-publication peer review: here, the author receives feedback that allows them to revise their initial approach. To do this you would need a pretty sophisticated commenting system (see here and here). Following this initial round of peer review, we reach stage two: using the outlined methodology, researchers go out and independently test the hypotheses. Independent testing of results is important in situations such as Daryl Bem’s supposed demonstration of precognitive abilities. In that case, we had one positive result (see green line), but it later came to light, following replications of the original study, that the results came out negative (see red lines). In short, had the journal adopted this approach of decoupling methodology from results, one positive result wouldn’t have led to such a ridiculous amount of controversy.
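The Bem episode also illustrates why one positive result among several attempts should carry little weight on its own. Assuming independent tests of a null effect at α = 0.05 (a simplifying assumption; real studies are rarely fully independent), the chance that at least one comes out “significant” grows quickly with the number of attempts — Corollary 6 in miniature:

```python
def chance_of_false_positive(n_studies, alpha=0.05):
    """Probability that at least one of n independent null studies
    reaches p < alpha purely by chance."""
    return 1 - (1 - alpha) ** n_studies

for n in (1, 10, 20, 50):
    print(n, round(chance_of_false_positive(n), 2))
```

With twenty independent attempts at a nonexistent effect, the odds are roughly even that at least one lab finds a publishable positive result — which is why registering the methodology up front, and pooling all outcomes rather than just the significant ones, matters.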



This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.

References

Fanelli D (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4). PMID: 20383332

Ioannidis JP (2005). Why most published research findings are false. PLoS Medicine, 2(8). PMID: 16060722