AI Detection Software Causing Chaos for Australian University Students

It was nearing the end of semester when “Mary,” a mature-aged law student, found herself staring at her computer screen, repeatedly refreshing the browser as she waited for her final grades to appear. An email arrived from her course coordinator about widespread use of artificial intelligence in assignments, but she dismissed it, thinking AI wasn’t relevant to her work.

She was wrong. Using AI detection software, Queensland University of Technology had flagged Mary’s work—along with many of her classmates’—as artificially generated. With exams less than a month away, they now face allegations of academic misconduct.

“We’re looking at the same piece of legislation, we’re quoting the same cases, we’re looking at the same issues,” Mary told the ABC. “And yet it’s marked in red as not your original work.”

Mary’s experience is not isolated. Following revelations that Australian Catholic University wrongfully accused thousands of students of AI-related misconduct, the ABC has discovered at least a dozen Australian universities are using AI detection software to catch cheating—often with problematic results.

Across the country, students report that their institutions are relying heavily on these tools to place individuals and entire cohorts under investigation for academic misconduct. The consequences are severe.

“It’s financial, it’s professional, it’s personal,” Mary explained. “A couple of thousand dollars for the subject, the cost to your Grade Point Average, and if you are studying law in Australia, like me, and found guilty of academic misconduct, will the Law Society even register you?”

In response to inquiries, Queensland University of Technology stated that students receive education on “expected standards of AI” and are offered support when needed.

Allegations and Bargaining Chips

For Victorian student Beth, accusations of AI use cut her higher education journey short. During her time at the University of Melbourne, she was twice accused of using AI to cheat.

In both instances, Beth claims the university offered what amounted to “an effective plea bargain”—if she stopped fighting the case, she could accept a “lesser” punishment and avoid a formal academic misconduct finding.

During her undergraduate arts degree at age 19, the “bargaining chip” was a penalty of four marks on a written assessment. “They made it very clear that the four marks weren’t just because it got flagged, but because I’d wasted their time arguing my case,” she said. “I just decided to take the punishment because I was simply too scared to argue further.”

Two years later, while pursuing honors in ancient world studies, Beth said a series of em-dashes triggered the AI detector. This time, the university allegedly wanted her to completely rewrite her essay in exchange for avoiding a formal finding.

Fed up and emotionally exhausted, Beth withdrew from her degree. For six months afterward, she battled self-doubt and lost confidence in her abilities. “I was an archaeologist and a heritage advisor doing really important work, but I would just think, ‘I’m too stupid to be here, Melbourne Uni thinks I’m using generative AI and I’m dumb’,” Beth said. “I was really in this spiral of what I like to call like an AI depression.”

The ABC spoke with other students who described a culture of fear and anxiety around academic misconduct and AI use at the University of Melbourne. When contacted, Deputy Vice-Chancellor Professor Gregor Kennedy declined to comment on allegations that the institution encouraged students to admit wrongdoing but stated the university “proactively” educated students and staff on academic integrity issues, including AI use.

A Fragmented Approach

Australia’s 43 universities are regulated by the Tertiary Education Quality and Standards Authority (TEQSA) but are entitled to set their own policies on AI. This has led to what students and staff describe as a “smorgasbord of incompetence” and a “game of hit and hope.”

Neither TEQSA nor Universities Australia maintains data on which institutions use AI detection tools. According to TEQSA guidelines, these tools are permitted but not recommended as standalone evidence of academic misconduct. However, many universities appear to rely heavily on the software.

The ABC confirmed more than a dozen institutions, including Queensland University of Technology and the University of Melbourne, use the technology. All claimed AI detection is never used as the sole evidence of misconduct, though some experts estimate the number of universities using these tools is closer to 30.

Associate Professor Mark Bassett, who oversees academic integrity and AI at Charles Sturt University, attributes the software’s high adoption rate to its appeal as a “lazy” alternative to more comprehensive solutions.

“It lets leaders at universities point to something they’ve done,” Dr. Bassett told the ABC. “So when TEQSA comes and says, ‘What are you doing about mitigating the risks of Gen AI?’ they can say, ‘Well, we’ve got the detector.’”

Calls for Reform

The lack of a national approach to AI in universities has created inconsistency across the sector. Universities Australia CEO Luke Sheehy acknowledged the challenges but declined to comment on whether a rethink is needed.

“Universities are taking their own approaches to AI,” he told the ABC, adding that the sector is struggling to keep pace with rapid AI development. “I sympathize with students that have gone through a process where it looks like they’ve been accused of cheating and they haven’t, but it’s also important that there’s a proper process to review that.”

Sheehy suggested dissatisfied students should contact the National Student Ombudsman. Meanwhile, Federal Education Minister Jason Clare did not respond to repeated requests for comment.

As universities continue to navigate the complex landscape of AI in education, students like Mary and Beth remain caught in the crossfire—their academic futures and professional reputations hanging in the balance.


