University examiners fail to spot ChatGPT answers in real-world test
Exams taken in person make it harder for students to cheat using AI (Image: Trish Gant / Alamy)
Ninety-four per cent of university exam submissions created using ChatGPT weren’t detected as being generated by artificial intelligence, and these submissions tended to get higher scores than real students’ work.
Peter Scarfe at the University of Reading, UK, and his colleagues used ChatGPT to produce answers to 63 assessment questions on five modules across the university's psychology undergraduate degrees. Students sat these exams at home, so they were allowed to consult notes and references, and they could potentially have used AI, although this wasn't permitted.
The AI-generated answers were submitted alongside real students’ work, and accounted for, on average, 5 per cent of...