by AMANDA JANSSEN and AMY MILKA
The big news in higher education is how “chatbots” will require us to rethink how we do assessment. While generative AI has been with us for many years, the release of ChatGPT in December 2022 delivered a freely available and sophisticated engine able to generate human-like textual responses to questions.
Using the latest Natural Language Processing technologies, ChatGPT allows computers to interpret and mimic verbal communication based on a vast language database. The concern is that this kind of technology will threaten the reliability of current assessment practices (exams, and in-class and online assignments) – after all, it would only take a student a few seconds to type out their assignment question and for this new generative technology to churn out a passable—even good—response. Current text matching tools widely used as part of institutional integrity measures are unable to identify these kinds of AI-assisted outputs (although Turnitin is working hard on detection capabilities).
Last week, the University of South Australia and the University of Adelaide (members of the Australian Academic Integrity Network (AAIN)) hosted an online forum on assessment design and academic integrity. Originally scheduled well before the release of ChatGPT, the forum was expected by its organisers to draw a small number of participants, but in response to huge interest the meeting was extended to include an open discussion on ChatGPT.
Over 170 participants engaged in vibrant discussions about strategies for mitigating academic misconduct. In response to a provocation to identify assessments that were “cheatable”, all 12 online groups pointed to poor assessment design. The conversation groups reported on assessment design principles that engage students in active, authentic learning in ways that align with academic integrity practices. For example, the principle of regularly updating assessments was linked to avoiding essays with a generic or descriptive brief and not using quizzes with published questions.
There were also robust responses to a discussion prompt: “with only one month to the start of semester, what key things can we as higher education institutions do to work with the AI and to manage associated risks?” It became clear participants felt we need to work with these kinds of tools, build them into our teaching and change the way we assess and teach.
Universities urgently need to recognise that chatbots are here to stay. Their responses must include educating staff and students, supporting academics to understand how to improve assessment design in ways that marry academic integrity to authentic learning, and amending policy to address this new form of misconduct. One example of using this engine to improve academic integrity in assessment design is asking students to use AI engines and then critique the response.
What transpires from all the “big news” on ChatGPT is that we are talking about ways to ensure our assessments are rigorous enough to mitigate academic misconduct, or updated to include this technology.
An important question therefore arises: what is the purpose of assessment, and how do we keep assessments fair, valid, valuable and relevant in the light of AI and expert machines?
Amanda Janssen, Uni South Australia, Dr Amy Milka, Uni Adelaide