In an ironic twist emblematic of the AI age, the Australian Catholic University (ACU) reportedly used artificial intelligence tools to detect students cheating with AI — only to end up falsely accusing many of them. A detailed investigation by the Australian Broadcasting Corporation (ABC) revealed that students were charged with academic misconduct based on flawed AI detection software, leading to months of distress, withheld results, and even lost job opportunities.
One of the affected students, Madeleine, a final-year nursing student, was accused of using AI on an assignment while she was applying for graduate positions. “It was already a stressful enough time,” she told the ABC, recalling how she received an email titled ‘Academic Integrity Concern’ from the university. It took six months for her name to be cleared, by which time she believes the accusation had already hurt her career prospects.
The Growing Crackdown
According to internal documents obtained by the ABC, ACU recorded nearly 6,000 academic misconduct cases across its nine campuses in 2024 — with about 90 percent linked to AI use. Deputy Vice-Chancellor Professor Tania Broadley admitted that the figures were “substantially overstated,” but confirmed that half of all confirmed breaches did involve the “unauthorised or undisclosed use of AI.”
While the university insisted that any case relying solely on Turnitin’s AI detection report was dismissed, students told the ABC that the system often operated otherwise. “It’s AI detecting AI,” said one paramedic student, whose essay was flagged as 84 percent AI-generated. “Almost my entire paper was highlighted blue.”
Burden of Proof on Students
Students interviewed by the ABC described a stressful and confusing process in which the burden of proof fell entirely on them. Many were asked to submit handwritten notes, typed drafts, and even browser histories to prove their innocence.
“They’re not police,” said one student, frustrated by the university’s requests. “But when you’re facing the cost of repeating a unit, you just do what they want.”
Professor Broadley acknowledged the delays and mishandling, admitting that “investigations were not always as timely as they should have been.” She added that “significant improvements” had since been made, including new training on the ethical use of AI for both staff and students.
Faulty Software, Flawed Outcomes
The controversy centers on Turnitin’s AI detection tool, introduced in 2023 to identify AI-generated content. The company itself cautioned on its website that its reports “may not always be accurate” and should not be used as the sole evidence in disciplinary action.
Despite this, email records obtained by the ABC showed that ACU sometimes relied on Turnitin’s AI reports alone. The university eventually abandoned the tool in March 2024 after acknowledging its unreliability. But for many students, the damage was already done.
One student sarcastically wrote on social media, “Apparently I used AI to create someone’s heart rate — how am I supposed to write a heart rate differently?”
Staff Also Struggling
The fallout hasn’t been limited to students; academics, too, are struggling to adapt. Leah Kaufmann, an ACU academic and vice-president of the National Tertiary Education Union, told the ABC that “staff are struggling to keep up” with AI-related investigations.
“The buck stops with the person out the front of the class,” she said. “But how can they do better when the technology and policies are changing every semester?”
Elsewhere, other universities are taking a different path. At the University of Sydney, Professor Danny Liu told the ABC that the focus should shift from punishment to education. “Banning AI is the wrong approach,” he said. “We want to verify whether a student is learning, not whether they’re cheating. Because if they’re not learning, they’re cheating themselves.”