This is the third posting in this morning’s trilogy about research methods, and this one was prompted by an article in this month’s issue of Physics World:
Ball, P. (May 2016), No result, no problem? Physics World, 29(5), 38-41.
Ball (quoting others, in particular Brian Nosek of the University of Virginia) points out that ‘positive’ results are far more likely to be published than neutral or negative results. He starts by reminding us of Michelson and Morley’s famous ‘null’ result of 1887, in which there was no discernible difference in the speed of light passing in different directions through “the ether”. The failure to observe the expected result went unexplained for nearly two decades, until Einstein’s special theory of relativity showed that the ether was not required in order to understand the properties of light.
Coming back to the more mundane, who wants to see endless papers reporting results that didn’t happen? The literature is already sufficiently obese. Ball points out that in some fields there are specific null-result journals. Alternatively, such results could simply be published in the blogosphere or on preprint servers. Another possibility is linked to the suggestion that the objectives of experiments should be declared in registered reports before the data are collected – see https://osf.io/8mpji/wiki/home/. This would “improve research design, whilst also focusing research initiatives on conducting the best possible experiments rather than getting the most beautiful possible results.”
Whatever, the results do need to be out there. Not everyone is going to have a result as significant as Michelson and Morley’s, but plain honesty – and a wish to stop others wasting their time in carrying out the same experiment, believing it to be new – means that all results should be shared. This should not be seen as a waste of time, but rather an example of what Ball describes as “good, efficient scientific method”.
I’d like to take this slightly further. I have encountered educational researchers who refuse to publish a result unless it is statistically significant. To return to my starting point of this morning: I’m a numerate scientist, and I like statistically significant results…but I have seen some most unfortunate consequences of this insistence on ‘significance’, including (and no, I’m not joking) researchers justifying claims that results are ‘significant’ at some arbitrary level, e.g. 7%. PLEASE, just give your results as they are. Don’t tweak your methodology to make the results fit. Don’t claim what you shouldn’t. Recognise that appropriate qualitative research methodologies have a place alongside appropriate quantitative research methodologies – and be honest.
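To see why chasing ‘significance’ is so dangerous, it helps to remember that under a true null hypothesis p-values are uniformly distributed, so at a 5% threshold roughly one test in twenty will come out ‘significant’ by pure chance. Here is a minimal sketch of that (my own illustration, not from Ball’s article), simulating twenty tests of hypotheses that are all genuinely null:

```python
import random

random.seed(42)  # fixed seed so the sketch is reproducible

# Under a true null hypothesis, the p-value is uniform on [0, 1],
# so we can simulate twenty null tests by drawing twenty uniforms.
n_tests = 20
alpha = 0.05
p_values = [random.random() for _ in range(n_tests)]

false_positives = sum(p < alpha for p in p_values)
print(f"{false_positives} of {n_tests} null tests look 'significant' at 5%")

# A Bonferroni correction divides alpha by the number of tests,
# which removes most of these spurious hits:
corrected = sum(p < alpha / n_tests for p in p_values)
print(f"After Bonferroni correction: {corrected} remain 'significant'")
```

Run enough comparisons (or loosen the threshold to 7%) and something will always look ‘significant’ – which is exactly why the results should be reported as they are, corrections and all.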
This is a good summary of the issue with statistical significance: https://xkcd.com/882/
Thanks Tim. It’s lovely!