by VIN MASSARO
The government, in its ‘Job-ready Graduates’ paper, has justified billion-dollar decisions affecting a million Australian university students largely, it appears, on the basis of a single report from Deloitte Access Economics. So the question becomes: how robust is that analysis? The answer is provided by the report itself – its authors are at pains to point out its limitations and to stress that it is an incomplete survey that should be read with caution.
The Deloitte report, “Transparency in Higher Education Expenditure”, was completed in November 2019 on the basis of university activity in the 2018 calendar year, well before there were any signs of the impending COVID-19 crisis and its implications for higher education. Judging from the caveats and cautions it contains, it certainly does not claim to address the key considerations government needs to resolve if its policy is to be coherent and sustainable.
The report seeks to establish the transparent cost of teaching, but in fact provides an analysis of what is spent on teaching delivery rather than of the efficient cost of teaching.
The difference is significant because the first can be measured by ascertaining the costs attributable to the teaching function and producing data that reflect average costs by discipline across the system. The latter would need to measure the quality of teaching and to discover whether the difference between the lowest and highest expenditures leads to concomitant differences in the quality of student outcomes.
The Deloitte exercise raises several issues about reliability even at the level of making expenditure comparisons and the report itself outlines the caveats that must be applied to its interpretation. These include the fact that not all universities responded, and 13 of the 32 that did were not able to “attribute costs between levels of education for a given faculty or school” (p.34). The report also contains an acknowledgement that the data gathering was not uniform across the participating institutions.
Furthermore, it is silent on its treatment of discipline areas that contain significant cost variations depending on which sub-specialty of the discipline is being taught in any one university.
For example, the costs of teaching engineering are not uniform across its various sub-specialisations: civil, mechanical, systems, aeronautical, to name a few.
For perfectly good reasons, these fields will vary in cost. However, not all universities will teach all of them, and a single national average figure for engineering cannot take into account the distinctions between sub-specialisation costs, or between those universities that teach most engineering specialisations, and therefore carry a higher cumulative cost, and those that might teach only the less expensive versions.
The difference between the lowest and highest cost is not a measure of efficiency but simply a reflection of the legitimate cost of teaching the subject matter. It cannot, therefore, be assumed that the average cost of teaching engineering is a good basis for funding all engineering. The same would apply to science, social work, allied health and even, to a lesser but still significant extent, languages.
The report further explains that it does not contain data on the cost of the research component of the Commonwealth Grant Scheme: “importantly, caution should be taken in drawing inferences regarding the sufficiency of CGS funding from these results. [While not specifically stated in the Higher Education Support Act 2003,] there is a general view that CGS funding is intended to cover some level of base research activity (which was excluded from the definition of teaching and scholarship costs used in this study), and the cost of such research may vary as a proportion of teaching costs.” (p. 34)
The government would therefore appear to be proposing to change funding levels on the assumption that the average spend is representative of the actual cost, despite the fact that the proportion of costs attributable to teaching ranges from 39 per cent to 73 per cent.
The Deloitte report suggests that teaching costs were being funded at about 11 per cent above the actual costs as measured by the report; and the government appears to have set its new funding levels to only about 6 per cent above these implied actual costs. While the new funding model assumes that research will be taken up separately, neither Deloitte nor the government figures take account of the need to allow time for scholarship to enable teaching staff to stay abreast of their subject – is that supposed to be covered by the 6 per cent margin?
Neither the Deloitte report nor the government makes any mention of infrastructure costs or of what proportion of funding is included in the per capita grant to cover buildings and maintenance. Funding for these was separate until the early 1990s, but was then integrated into the funding grants, and presumably continues to form part of the CGS. As universities get no Commonwealth support for capital and the Education Investment Fund is long gone, the decision to exclude capital from calculations of student tuition costs will see hundreds of millions spent each year on new and refurbished infrastructure put on hold – a significant blow to the building industry amid a recession.
So, whether the Deloitte report is an accurate reflection of reality is contestable, with the report’s own qualifications suggesting that it is not intended to be that accurate. Whether it is a sufficient basis for policy development is therefore open to debate.
If the government were interested in the quality of the higher education system, the standards applicable and the relative or threshold quality of the outcomes expected, it would have sought to establish the cost of teaching a discipline to a level that meets the standards expected of graduates.
A significant range in the cost of teaching a discipline across the system would lead to an investigation of whether the lowest cost was able to achieve the same standard or whether there might be a risk that quality would be affected by inadequate expenditure. Efficiency cannot be measured without reference to effectiveness.
The tendency to measure what is measurable rather than what is important has been a major weakness in the debates over the quality of higher education. There is an urgent need to develop a set of criteria focused on the threshold level at which appropriate and internationally comparable standards must be set and met. Only then can there be an assessment of what is required to meet those standards and the appropriate cost of achieving them.
Not only would this provide a better relationship between costs and outcomes, but it would provide the basis for government to determine what it is prepared to spend to achieve national higher education objectives.
Without closer scrutiny, the current real savings in university funding that accrue to the government risk leading to a reduction in the quality of graduate outcomes and a reduction in the number of graduates. That such major and far-reaching decisions should have been made on such a narrow information base raises serious concerns about their sustainability.
Professor Vin Massaro
Professorial Fellow, Melbourne Centre for the Study of Higher Education
University of Melbourne
15 July 2020