Clinical prediction models support medical decision-making – which patients get what treatment – so it is a problem when the data behind them is doubtful
Adrian Barnett (QUT) and colleagues* analysed research paper abstracts for statistical evidence of “hacking” – researchers fudging results to get a better “area under the curve” statistic, a common measure of how well a model distinguishes between patient outcomes.
This is a problem given how important clinical prediction is – as of January there are 4200 publications a week. “Factors driving poor model quality include inadequate sample sizes, inappropriate exclusions, poor handling of missing data, limited model validation and inflated estimates of performance,” they warn.
“Our results indicate that some researchers have prioritised reporting a ‘good’ result in their abstract that will help them publish their paper.”
* Nicole White, Rex Parsons and Adrian Barnett (all QUT) and Gary Collins (Uni Oxford), “Meta-research: Evidence of questionable research practices in clinical prediction models,” Open Science Framework, HERE
It’s a problem all over, especially when no one takes another look
A recent House of Commons committee report warned, “there have been increasing concerns raised that the integrity of some scientific research is questionable because of failures to be able to reproduce the claimed findings of some experiments or analyses of data and therefore confirm that the original researcher’s conclusions were justified” (CMM May 11).