AI in education: what the media misses

An Edith Cowan U team looked at February media coverage of ChatGPT in higher education, and at how its potential for learning and student support rated against reports of academic integrity risk.

Miriam Sullivan, Andrew Kelly and Paul McLaughlan (all Edith Cowan U) found* the big issues in the coverage were:

academic integrity: “the most common themes that arose in the data were general concerns about academic integrity and ways that students could be discouraged from using ChatGPT”

avoidance: “half of all articles argued that ChatGPT should be avoided because it was likely to make errors and had inherent limitations.”

“It was hypothesised in multiple articles that students would lose critical thinking skills”

policy: “the most common response quoted was that a particular university was undecided”

embrace: “the two most common reasons provided for allowing students to use ChatGPT were that it was too hard to ban and that students would need to use it in the workplace.”

But the quoted views are generally from university staff; students get far less of a say, and there is “very little commentary” on how AI can help them.

Which it really can.

The authors argue ChatGPT “has the potential to demystify academic conventions for non-traditional students” and “enhance the academic success of students from different equity groups.”

“There is a need to shift the discussion about ChatGPT to a more constructive student-led discourse,” they conclude.

* “ChatGPT in higher education: Considerations for academic integrity and student learning,” Journal of Applied Learning and Teaching 6,1 (2023) HERE