by MERLIN CROSSLEY
Many years ago, when I used to row, I came across a machine called the Erg. It was just a standard indoor rowing machine, but it was a test of truth. Everyone who wanted to try out for the crew would have a go on the Erg in front of everyone else. The spectators cheered and I pushed myself to the limit. I’m sort of surprised no one died, but we were young and strong in those days.
The Erg converted your efforts into a number. It wasn’t the only criterion used for selection, but if you got a good score on the Erg you earned some respect and serious consideration. You could train for the Erg, and we did.
Of course, rowing is a team sport requiring not only strength and stamina but also skill and teamwork. The Erg couldn’t test those latter qualities. Nevertheless, the Erg had its place. You couldn’t sweet-talk the Erg. It didn’t matter if your dad had gone to school with the coach. And you couldn’t really cheat the Erg. You could have a bad day and get a bad score, but there was no real way of cheating to improve it.
The Erg wasn’t a substitute for careful team selection by a professionally fair coach, but I was sort of glad we had the Erg.
As students navigate their degrees and compete to demonstrate mastery of their chosen subjects, along with their stamina and their strengths, I am glad we still have some invigilated exams.
We have fewer than we had before COVID, and I don’t think we need invigilation in a big hall for mid-terms one, two, and three in years one, two, and three, but having tests of truth at key points within a degree seems sensible.
Some of my colleagues don’t like exams. As a student I found them stressful. But I’ve concluded they have their place.
One needs them because certifying individual capability is important. One also needs them now because, in the post-COVID digital world, concerns about cheating in assessments are increasing. Academics are doing their best to design assessments that are unique, can’t be googled, can’t be outsourced to others or to Artificial Intelligence, and can’t be overwhelmed by groups of colluding students. At the same time, universities have stepped up their strategies for detecting electronic cheating, and students are helping by reporting breaches.
But ultimately, remote assessment involves an arms race. Sadly, the cheats and anti-cheats are investing more and more in a game that may never end. Arms races waste money. So, one part of the solution is to bring back the Erg.
This can be done by introducing invigilated exams, vivas, or various other assessments that happen in real time and operate as races of truth.
The question arises as to whether we can afford things like vivas and individual Erg sessions. Probably not, if we want to assess every lecture, every tutorial, every lab, every week. But yes, if we can focus our attention on assessments that matter and move from sweating the small stuff to programmatic assessment that hits only when necessary.
Of course, some will say that exams and vivas are not authentic and don’t reflect real life. But that misses the point. Real life is infinitely complicated, and nothing mirrors it exactly. The important thing is to take common sense approaches to do what we can and assess what we can.
Ergs are not perfect, but they are part of the solution. Used sparingly at key moments within a degree, they can provide quality assurance and help ensure that students reach the standards for which they are certified.
I can understand why people don’t like exams. The Erg was awful. I even think it was dangerous. But bad as it was, I think a world without it might be worse. I could be biased: I had long legs, and people with long legs tended to do well on the Erg. But I still feel it made things fairer – it operated to overcome old boys’ networks, because if you trained hard and did well you simply couldn’t be ignored, whereas if you weren’t strong you couldn’t cheat or talk your way into the team. It mostly tested strength and stamina rather than skill, but the people who did best weren’t just muscle-bound athletes. They were the ones who prioritised their training, planned it, and worked with determination and energy to achieve their goals.
Taken together with other assessment exercises, it worked as a “sorting hat” that grouped people with similar attributes, ambitions, and priorities, and helped shape some great teams from diverse individuals in a way that brought harmony, because it was seen as fair.
Merlin Crossley is DVC Academic Quality at UNSW