By a Learned Reader
The Australian university research community woke up yesterday morning to the welcome outcome that 90 per cent of ERA 2018 units of assessment submitted nationally were rated at world-standard or above.
This came as no surprise to ERA aficionados. ERA outcomes have become largely predictable due to advances in the technology available to perform detailed and close to real-time analysis of research performance. In the citation disciplines ERA-like results for an entire field of research can now be generated with a high degree of accuracy in hours rather than months.
Therefore, as universities pore over the outcomes of ERA 2018 several salient questions come to mind. Can we trust the results? As an instrument of public policy has ERA done its job? Will the ERA framework serve our needs into the future? Do we even need to run another cycle of ERA?
Can we trust the results?
Based on ERA outcomes, dozens of global rankings, citation analysis and other methods, there is little doubt that Australian research has been on the rise globally, both for volume and quality, for around 15 years. The ERA 2018 outcomes align well with blunt indicators such as the Academic Ranking of World Universities Top 500 where Australia has 23 universities listed, up from 20 in 2015, and just 13 in 2004.
In terms of citation impact, a key ERA indicator for the science, technology, engineering, health and medical fields, Australian university performance has improved from 1.15 times the world average in 2003 to 1.46 for the latest five-year period (2014-2018). That indicator alone, based on more than 470,000 outputs, provides evidence that Australian university research on average is performing at around “four” – above world standard.
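The "1.46 times the world average" figure is a field-normalised (relative) citation impact: each output's citation count is divided by the world-average citations per paper for its field over the same window, and the ratios are averaged. A minimal sketch of that calculation, using invented publication records and invented world baselines rather than real ERA data:

```python
# Each output: (field of research, citations received).
# These records and the baselines below are illustrative placeholders.
outputs = [
    ("clinical sciences", 24),
    ("clinical sciences", 8),
    ("physical sciences", 15),
    ("physical sciences", 3),
]

# Hypothetical world-average citations per paper in each field over the
# same period (the real benchmarks come from citation databases).
world_baseline = {
    "clinical sciences": 10.0,
    "physical sciences": 6.0,
}

def relative_citation_impact(outputs, baseline):
    """Mean of each paper's citations divided by its field's world average."""
    ratios = [cites / baseline[field] for field, cites in outputs]
    return sum(ratios) / len(ratios)

rci = relative_citation_impact(outputs, world_baseline)
print(round(rci, 2))  # values above 1.0 indicate above-world-average impact
```

A value of 1.0 is, by construction, exactly world standard, which is why a national average of 1.46 across hundreds of thousands of outputs reads as comfortably above it.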
Let’s not forget that “world-standard” is a moving target for both citation and peer review disciplines. For citation disciplines the baseline for “world-standard” has been driven in recent years by research from China, with 20 times Australia’s research volume and a citation impact of 1.01. Therefore the “world-standard” benchmark has moved, providing an additional explanation for Australia’s rise.
While I would argue that the ERA 2018 outcomes reflect what they set out to measure and are indicative of a rising tide for Australian university research, we would be foolish to be complacent and believe this to be a case of mission accomplished.
As an instrument of public policy has ERA done its job?
Instruments of public policy don’t come much more effective than ERA. ERA 2018 is the second of the four ERA cycles to measure and assess research published entirely since the inception of the current framework, developed in 2008 and piloted in early 2009.
Even before the inaugural ERA 2010 exercise, many of us had witnessed cultural change across the research community. We quickly came to realise that ERA is much more than a compliance exercise conducted at three-yearly intervals.
Behind every ERA submission there are literally thousands of decisions made over six years with the intent of focusing effort on producing research of the highest calibre. These span recruitment, promotion and retention, where to publish, when to publish, and even whether to publish.
ERA was intended to drive funding as well as behaviour and, in a sense, it has, just not the way originally intended. According to Australian Bureau of Statistics figures, universities have more than doubled their expenditure on research and development from $4.3 billion in 2005 to $10.8 billion in 2016. The university sector has backed its areas of research strength and invested substantially from cash reserves producing spectacular outcomes on university league tables.
Can we put this solely down to ERA? No, but it is undoubtedly true that it was the right policy at the right time.
Will the ERA framework serve our needs into the future?
As a member of the Australian Research Council’s Indicators Development Group for ERA in 2008, I could see even then that the ERA framework was at the global cutting edge among national research assessments.
ERA was revolutionary in that it was the world’s first major research assessment exercise to use citation analysis in appropriate disciplines and light-touch peer review, supported by metrics, in others. It was also ambitious in planning to assess quality across more than 150 specific fields of research – the benchmark Research Assessment Exercise in the UK assessed only 67 units in 2008, reduced to around half that number for the Research Excellence Framework 2021.
The ERA framework delivers robust outcomes most of the time and the Australian Research Council has introduced measures to minimise the prospects of so-called “gaming”.
But in an era of ORCID identifiers, continuous research publication harvesting, automated classification of outputs to fields of research, and regular updating of world benchmarks (citation disciplines) there is no reason why the sector could not move in the mid-term to real-time submission and assessment.
Imagine monthly or even weekly updates to ERA ratings – although it might be best not to ponder that prospect right now!
Do we even need to run another cycle of ERA?
ERA in one form or another is here to stay – although the likelihood of it surviving in its current format has diminished following this iteration. Extending the interval between ERA submissions to around five years bears consideration while a move to a form of real-time reporting as described above could prove as revolutionary as when ERA was first devised in 2008. Either measure would address concerns around the cost effectiveness of the exercise.
Given the abundance of ratings of world standard and above we can conclude that there is little room for further upward movement within the current framework. Some have suggested that consideration might be given to a 5* classification, as in the UK, to distinguish the world leaders from those merely “well above world standard”. Others have suggested raising the assessment threshold to at least 100 weighted research outputs to incentivise the accumulation of critical mass in research units.
Whatever the future, universities are unlikely to turn their backs on assessing research quality. Having gained a longitudinal perspective on their research performance, Australian universities have become more globally competitive. ERA has had a reputational impact on institutions and, while funding has not flowed directly, we all understand that prestige and reputation, intangible as they are, remain among the most valuable long-term assets of any university.