The Australian Research Council is reviewing its two research metric schemes, Excellence in Research for Australia (ERA) and Engagement and Impact (EI).
QUT’s submission argues ERA has done what it was created to do, “provide a reliable means to confirm or correct previously impressionistic assessments of the quality of Australian university research.” However, its time has passed.
“ERA is no longer telling us much that we do not already know, and there are reasons to suspect that what it does say is now less reliable than it has been in the past.” The reasons for this are variously structural and behavioural, and the result is that the ARC “should not run further rounds of ERA in the foreseeable future.”
As to EI, “It is structurally susceptible to manipulation and persuasion – to the particular detriment of smaller and less wealthy institutions – in ways that favour a variability in outcomes that have no relation to the underlying impact and engagement that it is intended to measure.”
Accordingly, “EI should be postponed until a more effective and reliable methodology is devised.”
QUT engages with the issues the ARC nominated for specific answers, in a complex submission that diplomatically addresses the core controversies now attached to ERA, in particular the improvement in citation-based research performance compared with that of disciplines assessed by peer review.
“Some commentators have argued this suggests that the quality of Australian research in STEM is rapidly improving, while HASS research has stagnated. QUT finds this reasoning superficial and suspect, as it is at least as likely that the differential outcomes are explained by the methodological variance itself. An increase in the sophistication of strategies to maximise performance against assessment criteria in citation analysis fields, for example, could explain an increase in performance relative to fields where the rather more enduring scepticism of peers provides a consistent moderating influence. Indeed, the stability of peer review assessments may be read as a commentary on the assumptions about the fitness of metrical approaches as proxies for underlying quality, particularly over time. The divergence has now widened to the point that the fitness of the metrics as proxies for quality requires empirical validation.”
Get the word out
The ARC plans to release submissions to the research metrics review after it is complete, which seems a bit late for a debate. So, CMM will report on, and/or link to, as many submissions as it can – send them in, people.