What to do when research results don’t replicate

Uni Melbourne staff have US$6m to work out how and why researchers decide

Duke University is paying the US government US$112m to settle claims a researcher used dodgy data in funding applications. On which CSIRO Chief Scientist Cathy Foley comments, “All the Australian science sector must avoid creating a science culture that leads to loss of integrity as our foundation – Duke University’s huge misconduct fine is a reminder to reward rigour” (via Twitter). Can’t argue with that, but even with honour among researchers there is the perpetual problem of research results that don’t replicate.

It makes a case for open access to research data and, failing that, running reality checks. Easy to say, expensive to do. So the US Defence Department is funding work on “automated tools” to assign confidence intervals to social and behavioural research results and claims that interest the military.

The University of Melbourne is participating in the project, with US$6.5m to crowdsource “thousands” of social scientists to evaluate research claims using “a structured group deliberation approach.” Associate Professor Fiona Fidler says the project is about understanding how researchers reach conclusions and deal with uncertainty in the process.
