by JASON LODGE

Unless you have been living under a rock recently, it’s clear that artificial intelligence (AI) is rapidly advancing. While it’s exciting to see what these technologies can do, it’s also understandable that many people struggle to comprehend the technical details of how they work.

Thankfully, the concept of ‘explainable AI’ is helpful in demystifying these technologies. However, for many of us (me included) there remain serious gaps in our understanding. Lakoff and Johnson argue, in Metaphors We Live By, that metaphors and analogies are how we often bridge these gaps to try to make sense of complex concepts and phenomena.

To this end, I recommend Professor Martin Weller’s work, Metaphors of Ed-Tech. In his book and accompanying podcast series, he provides an overview of the various metaphors used to make sense of educational technologies over time. While metaphors are useful tools, they can also be dangerous if we rely on them too heavily (“digital natives” anyone?). The wrong metaphor can lead us down a path of misunderstanding, sometimes with serious consequences.

Educators, educational leaders, and professionals supporting teaching in higher education are no doubt exhausted from the pivot to online learning in recent years. Now, we’re faced with the challenge of incorporating generative AI into teaching and learning practices and policies. While it’s tempting to shy away from something so complex, the higher education sector must engage with the issues at hand. The metaphors being used to make sense of generative AI are important, as they shape our understanding of the technology and its role in higher education.

Similar to other examples described by Weller, some metaphors being used to understand generative AI are misleading. As I’ve discussed previously, these technologies are nothing like a calculator and the response to their emergence needs to be more elaborate than it was when calculators became available.

With the sophistication of generative AI, it’s also easy to anthropomorphise these technologies. For example, ChatGPT’s conversational interface tempts us to think of it as a human-like entity. But we must be cautious about assigning agency to generative AI. While there’s evidence to suggest that these technologies have developed theory of mind and have emergent properties, they’re not independent actors in the world like humans are (there’s also the matter of free will, but I’m going to leave that one alone).

As the gap between machine and human capabilities narrows, the issue of accountability comes into play. This issue goes beyond plagiarism and cheating. Who is responsible for the output of generative AI? What if it creates fraudulent documents or disseminates dangerous information such as how to create weapons? How can we hold technology accountable in such cases? What level of blame is apportioned to a student who provides the prompts to generate such outputs? I don’t know the answers to these questions, and I’m no lawyer either. However, these questions are critical to consider as we navigate the intersection of AI and higher education.

While we’re probably not on the brink of super-intelligent terminators taking over the world, we must recognise the importance of the metaphors we use to understand generative AI. It’s not as simple as comparing it to a calculator or a human being. These technologies are complex and evolving at an astonishing pace. We must approach these developments with caution, thoughtfulness, and nuance, even if we’d rather just be getting on with business as usual after all the disruption of the last three years.

Acknowledgement/confession: ChatGPT was used to edit this article.

Jason M. Lodge, PhD is an associate professor of educational psychology in the School of Education and a Deputy Associate Dean (Academic) in the Faculty of Humanities and Social Sciences at The University of Queensland.
