Monash U researchers analysed feedback on HE students’ open-ended assessment tasks from instructors and ChatGPT.*
They found that ChatGPT:
* “can generate more detailed feedback that more coherently and fluently summarises students’ performance than the instructor”
* achieved “a high-level of agreement with human instructors”
* could “provide feedback on the process of students completing the task, e.g. suggesting learning strategies in feedback, in addition to feedback on task level that indicates how well students performed”
The paper points to methodological qualifications and performance limitations, but the authors also add:
“we surprisingly found that ChatGPT could generate a considerable number of process focus feedback … this implies the promising values of ChatGPT in guiding students towards improving their tasks or even developing learning skills.”
And this was done on ChatGPT, superseded in March by GPT-4, which includes a Socratic-teaching function.
* Wei Dai, Jionghao Lin, Flora Jin, Tongguang Li, Yi-Shan Tsai, Dragan Gašević and Guanliang Chen (all from the Centre for Learning Analytics at Monash U), “Can Large Language Models Provide Feedback to Students? A Case Study on ChatGPT”, EdArXiv Preprints, April 14.