By ERICA SOUTHGATE

On Valentine’s Day, OpenAI gifted us a paper – Better Language Models and Their Implications – that rocked my educator’s world. OpenAI had developed an artificial intelligence (AI) model that had learnt, in an unsupervised way from millions of webpages, how to undertake writing tasks, producing text that was often of reasonable quality according to objective benchmarks.

Imagine a future where an AI responds to an assessment task by producing original writing at pass or credit level. No two responses would be the same, because the AI could check its output against what it and other AIs had already produced.

Traditional written assessment relies on students producing original work. What happens when the machine is the author and we can’t detect it? Will current concerns about contract cheating be replaced by AI panic?

The whole idea of learning to express ideas elegantly and logically in writing as a foundation of Western education will be radically challenged.

Lest we return to an over-reliance on the great ‘sorting hat’ of the handwritten exam, we will need to engage critically and creatively with machine-generated text and other AI artefacts.

OpenAI made the unusual ethical decision not to release the full trained model. While they understood its value for writing tools and speech recognition systems, they also recognised it might be used maliciously to impersonate others and to automate abusive or fake online content. They acknowledge that capable programmers could replicate it anyway.

AI ethicists have identified education as a high-stakes domain. It’s time educators led the charge: influencing design, reimagining pedagogies and asking critical questions about AI’s potential benefits and harms in learning contexts. I’m ready. Are you?

 

Erica Southgate, Associate Professor of EdTech and Digital Learning,

University of Newcastle, Australia.

2016 Equity Fellow (NCSEHE)


