Before I kick off the final instalment of this column (in this place), I’d like to quickly thank Stephen Matchett for his tireless work on CMM. I am very grateful to have had the opportunity to be part of it.
I will be carrying on the Ed/Tech must-reads column from next week as a free Substack newsletter, so please sign up for uninterrupted service. Now on with the show.
As HE leaders continue to search for an academic integrity silver bullet and vendors continue to promise the world, the news from the world of Gen AI detection tools remains bleak. This study from five Stanford computing academics isn’t peer-reviewed, but it makes a strong case that detection tools consistently generate false positives when evaluating the work of non-native English speakers. In addition, the authors found they could largely bypass the detectors through iterative prompting, with requests such as “elevate the provided text by employing literary language.”
Student Perceptions of AI-Generated Avatars in Teaching Business Ethics: We Might not be Impressed from Postdigital Science and Education
Among the “fun” advancements in our current age of GenAI has been the ability to generate video and audio of realistic human avatars from text. Vallis, Wilson, Gozman and Buchanan (Uni Sydney) explored student perceptions of the use of these avatars in a redesigned Business Ethics unit. They found that students were far more ambivalent than the researchers had expected, though interested in the potential to customise their own digital lecturer. Some students weren’t aware that avatars had been used until it was pointed out, which itself sparked further thinking about ethics. One noted downside was that the avatars were too “smooth,” lacking the usual fillers, stumbles and digressions of a human lecturer.
Prototypes-in-progress for bi(nary)-curious university educators and researchers from Safe-to-fail AI
For those keen to get their hands (virtually) dirty, this site from Armin Alimardani (Uni Wollongong) and Emma Jane (UNSW) offers some usable prototypes of GenAI tools built specifically for use in Australian Higher Education. These include a student quiz feedback tool, a course outline FAQ, conversational AI and a speech recognition tool.
ASCII art by chatbot from AI weirdness
And finally, in reassuring news from the AI trenches, this collection of bizarre attempts at ASCII art (art made up of letters, numbers and other characters) from ChatGPT shows that some areas are still safe. A giraffe that looks more like an elongated human skull and a running unicorn that looks like the outline of a heart are highlights for me.
And that’s it for me. I hope to see you next week on the Substack.
Colin Simpson has worked in education technology, teaching, learning design and academic development in the tertiary sector since 2003 at CIT, ANU, Swinburne University and Monash University. He is also one of the leaders of the ASCILITE TELedvisors Network. For more from Colin, follow him on Twitter @gamerlearner (or @[email protected] on Mastodon)