A recent book chapter[i] tracked the progress since 1970 of performance measurement systems in the Australian higher education sector. It charts the quantitative and qualitative measures adopted by the Australian Research Council’s Excellence in Research for Australia (ERA) in 2010, its subsequent iterations, and its companion Engagement and Impact Assessment (EIA).

The ERA and EIA are calculative practices constructed by the ARC, with some elements drawn from global rankings and ratings. These ratings and rankings produce numbers, calculations, tables and other visual devices using calibrated methodologies, often hidden in an algorithm. They are presented as rooted in logic and quasi-scientific reasoning.

However, in a numbers game, what is counted and how it is counted means everything ‒ and nothing. A recent London School of Economics blog by Jelena Brankovic[ii] contains a case study of a German university that skyrocketed through the rankings on the strength of citation counts. Upon investigation, the rise was traced to a single scholar who belonged to the Global Burden of Disease study, an extensive international collaboration in global health. This scholar had a very high citation count from just ten articles over two years, one of which was published in The Lancet and co-authored by hundreds of researchers.

Our recent research[iii] on performance management systems in Australian public universities explored the impact of contemporary calculative practices, termed “accountingisation”, on Australian academics’ work and values. The adage “you cannot manage what you cannot measure” privileges numbers over other ways of understanding the impact or relevance of research. In doing so, the accountingisation of the human world significantly affects the way we think about research quality. Calculative practices and numbers become powerful forces determining the reputation of individuals, disciplines and the universities themselves, as well as the progression of academics within them.

Our published paper used a narrative story-telling approach and presented the combined data observations of two characters from 2010–2018: a “typical” accounting academic (Dr Vic Tim) and a “typical” vice-chancellor (VC Rea Lity). This approach compares the senior administration perspective on performance management systems with academic views. In part, we found that: “the stories presented by Dr Vic Tim and VC Rea Lity are disconnected and difficult to reconcile. However, Lity’s account is grounded in a strategic perspective, which provides a rationale and foundation for developing and using performance measurement systems by senior university managers. Meanwhile, Tim’s account demonstrates the potential unintended and undesirable consequences of PMSs imposed upon him and his colleagues from an operational standpoint. Arguably, the value of the two narratives presented lies in differentiating between the use of PMSs (Lity’s primary focus) and the consequences of PMSs (the predominant concern of Tim).”

Our research paper showed a significant divide between the way senior executives viewed and used performance measurement systems and the lived experience of academics. The title of this paper, “What you see depends on where you look: Performance measurement of Australian accounting academics”, underscores the point that performance measurement – like most forms of evaluation – is contingent on the observer’s frame of reference.

They are akin to the “Young Woman, Old Woman” illustration (also known as My Wife and My Mother-in-Law), first published in the magazine Puck in 1915 and attributed to the British cartoonist William Ely Hill. As shown in Figure 1, the image can be perceived either as a young woman or an older woman. It reminds us that different people can interpret the same picture differently.

Figure 1: Young Woman, Old Woman

However, once one has decided that the picture is of (say) an older woman, it takes a conscious effort to see a young woman. Different interpretations of the same picture are possible, but adopting an alternative interpretation is difficult.

A similar example is the well-known “magic eye” autostereogram (see Figure 2). Autostereograms enable some people to see 3D images by focusing on 2D patterns (you may need to increase the size of Figure 2 to see Saturn – but it is there).

In a similar way, it is only through a close examination of the “performance metrics” imposed upon academics that the “full (magic) picture” of what they do, how they do it, and the value they can add can be appreciated. It should be noted, though, that not everyone can see “magic eye” autostereograms all the time; the usual reason is a vision problem.

Figure 2: Can you see Saturn?

A third example of the pivotal role of the observer’s perspective is the so-called “wagon-wheel effect”, in which a spoked wheel appears to move differently from its actual rotation. Unlike the previous two examples, the wagon-wheel effect is a true optical illusion. It can also be hazardous, because under certain conditions it can make moving machinery appear stationary.

To understand what ambiguous pictures, autostereograms, and optical illusions have to do with performance measurement systems within the Australian higher education sector, we need to consider several submissions to the Senate Standing Committees on Education and Employment[iv] on research funding. For instance, the QUT submission says the ARC has been subject to “the imposition of costly administrative tasks” and highlights the extraordinary resources the ARC consumes, and individual public universities must allocate, to create data for these evaluation exercises. The submission argues that resources should be freed up by abolishing the ERA and the EIA. The QUT states: “There is no clear benefit to be had in continuing to run those schemes, and the time and money could be put to far better use within the ARC for research”.

Guthrie’s submission to the Senate Standing Committee[v] states that setting priorities for research funding should not be a political decision, but one based on evidence and Australia’s national interests. Nevertheless, this direct and challenging statement is lost in the confusion of magic eyes, optical illusions and interpretations, all using calculative practices to blur reality.

The stated purpose of the ARC is “to grow knowledge and innovation for the benefit of the Australian community through funding the highest quality research, assessing the quality, engagement and impact of research and providing advice on research matters … The outcomes of ARC‐funded research deliver cultural, economic, social and environmental benefits to all Australians.”

This stated purpose suggests that funding is a rigorous process based on evidence and peer review by Australian university researchers. It is not consistent with the minister and other politicians interfering in setting national priorities or targeting special interests. The Australian Association of University Professors Inc[vi] has much to say on the matter:

“We understand that research funding decisions intend to advance the national interest, but the decisions need to be made transparently with expertise about the issues that are investigated and the wider societal impact that could arise from possible outcomes of the research. Historical precedent has time and again shown that the interference by politicians or administrators, often only based on cursory knowledge of a specific project and disconnected from the current state of knowledge of the topic, distorts or devalues the benefit of research that crucially depends on impartiality and critical distance.”

Ultimately, the calculative practices used to make university decisions are all about making the numbers support a particular outcome. Nevertheless, in doing so, they cause us to reflect: what counts? Because, as accounting academics, we know better than most that not everything that counts can be counted, and not everything that can be counted, counts.

In the absence of a genuine discussion of how Australian universities have been ‘accountingised’, we call for an independent inquiry into the calculative practices associated with the ARC’s ERA, Engagement and Impact Assessment and administration. This inquiry should encompass how national research priorities are set and whether there has been any interference by ministers in these matters. Undoubtedly, that is something worth counting.


Emeritus Professor James Guthrie AM, Macquarie Business School

Dr Basil Tucker, Senior Lecturer, UniSA Business

[i] Martin-Sardesai, A. and Guthrie, J. (2018), “Accounting for the Construction of Research Quality in Australia’s Research Assessment Exercise”, in Epstein, M.J., Verbeeten, F.H.M. and Widener, S.K. (Eds), Performance Measurement and Management Control: The Relevance of Performance Measurement and Management Control Research (Studies in Managerial and Financial Accounting, Vol. 33), Emerald Publishing Limited, Bingley, pp. 221-241.


[iii] Martin-Sardesai, A., Guthrie, J. and Tucker, B. (2020), “What you see depends on where you look: Performance measurement of Australian accounting academics”, Accounting, Auditing & Accountability Journal.

[iv] Public Universities Australia Senate Standing Committees on Education and Employment Australian Research Council Amendment (Ensuring Research Independence) Bill 2018 submission.

[v] Submitted by Emeritus Professor James Guthrie AM, Professor of Accounting, Macquarie Business School, Macquarie University, Senate Standing Committees on Education and Employment Australian Research Council Amendment (Ensuring Research Independence) Bill 2018 submission.

[vi] Australian Association of University Professors Inc. Senate Standing Committees on Education and Employment Australian Research Council Amendment (Ensuring Research Independence) Bill 2018 submission.
