March 22, 2026·14 min read

Texas A&M Accused 17 Students of Using AI Last Semester. Most Were Innocent.

Marcus Rodriguez

In the fall of 2025, a Texas A&M professor in the College of Arts and Sciences ran every final paper in their class through an AI detection tool. Seventeen students' papers were flagged. The professor reported all seventeen to the Aggie Honor System Office for academic dishonesty.

The accusations triggered formal investigations. Students had to write response statements. Some hired lawyers. Several were international students on F-1 visas, which meant an academic dishonesty finding could threaten their immigration status.

After weeks of investigation, the majority of the accusations were dismissed. The students had written their papers themselves. The AI detection tool was wrong.

Nobody apologized.

The AI detection problem nobody wants to fix

Here's what you need to understand about AI detection tools in 2026: they are probabilistic guessing machines being treated as forensic evidence.

Turnitin's AI detection, which Texas A&M uses through Canvas, works by analyzing text for patterns that are statistically associated with AI-generated writing. It produces a percentage score meant to represent how much of the submission was likely written by AI.

The problem: human writing and AI writing overlap significantly in their statistical patterns. Academic writing especially — formal, structured, well-organized — hits many of the same markers that AI detection tools look for. A well-written student paper can look "AI-generated" simply because the student is a good writer.
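Turnitin doesn't publish how its classifier works, so the sketch below is a deliberately crude stand-in, not their algorithm. It scores text on how uniform its sentence lengths are, one of the surface statistics detectors are widely believed to lean on, and it shows why careful, formal prose can trip the same wire as machine output.

```python
# Toy illustration only: NOT Turnitin's algorithm. Real detectors are proprietary
# and far more complex; this just shows the kind of surface statistic they rely on.
import re
import statistics

def burstiness(text: str) -> float:
    """Population std dev of sentence lengths, in words. Lower = more uniform."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    return statistics.pstdev(lengths) if lengths else 0.0

def naive_ai_flag(text: str, threshold: float = 4.0) -> bool:
    """Crude heuristic: very even sentence lengths get flagged as 'AI-like'."""
    return burstiness(text) < threshold

polished_essay = (
    "The study examines three factors. Each factor was measured twice. "
    "The results were consistent across trials. The findings support the hypothesis."
)
print(naive_ai_flag(polished_essay))  # True: careful, even-length prose trips the flag
```

A toy check like this would flag plenty of well-edited human essays, which is exactly the failure mode described above.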

Turnitin's own documentation states that its AI detection should not be used as the sole basis for academic integrity action. Its false positive rate for native English speakers is estimated at 1-3%. For non-native English speakers, independent research suggests it can be as high as 10-15%.

At a university the size of Texas A&M with over 70,000 students, a 3% false positive rate means thousands of students could be wrongly flagged every semester. Not all of them get reported. But enough do.
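For anyone who wants to check that math, here is the back-of-envelope version. The enrollment figure and the 3% rate come from the paragraphs above; the one-screened-paper-per-student assumption is mine.

```python
# Back-of-envelope estimate of wrongful flags per semester.
# Enrollment and the 3% rate come from the article; assuming each student
# has one final paper screened is an illustrative simplification.
students = 70_000
false_positive_rate = 0.03          # upper end of the estimate for native English speakers
papers_screened_per_student = 1     # assumption: one screened final paper each

expected_false_flags = students * papers_screened_per_student * false_positive_rate
print(expected_false_flags)  # 2100.0 -- "thousands" is not an exaggeration
```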

What happened at Texas A&M

The fall 2025 incident wasn't isolated. Across multiple colleges — Liberal Arts, Engineering, Science, Business — students reported being accused of AI use they didn't commit. The pattern was disturbingly consistent:

The accusation: Professor runs papers through AI detection. Tool flags the student. Professor files an Honor Code report without additional investigation.

The student's position: They wrote the paper themselves. They may have used Grammarly's spell check. They may have used Google to research. They didn't use ChatGPT or any other AI to write it.

The investigation: The Aggie Honor System Office reviews the case. The student writes a response. Sometimes there's a hearing. The process takes weeks. During this time, the student's grade is in limbo.

The resolution (in many cases): Insufficient evidence. Case dismissed. The student's record technically shows no violation. But the stress, the time lost, the anxiety about visas, scholarships, and graduate school applications — none of that gets refunded.

The ones who weren't innocent: Some students did use AI. The detection tools are right sometimes. The problem isn't that AI detection never works — it's that it's being used as judge and jury when it should only be used as a starting point for investigation.

Who gets hit hardest

International students

This is the part that should make you angry.

Texas A&M has one of the largest international student populations of any public university. These students often write in a more formal, structured style — either because that's how they were taught English or because they're being extra careful with grammar. This formality is exactly what AI detection tools flag.

An international student on an F-1 visa who gets an academic integrity violation faces a cascade of consequences that go far beyond a grade:

  • The violation can affect their student status
  • It can complicate OPT (Optional Practical Training) applications after graduation
  • It can appear on background checks for H-1B visa sponsorship
  • In extreme cases, it can lead to dismissal from the university, which triggers a 15-day departure requirement from the country

For a domestic student, a false accusation is stressful and unfair. For an international student, it can be life-altering.

Students who write well

The bitter irony: students who actually put effort into their writing are more likely to be flagged than students who turn in sloppy work. Clean structure, varied vocabulary, coherent arguments — these are qualities of both good writing AND AI writing. The student who learned to write well is being punished for sounding too competent.

ESL students in writing-intensive courses

Students whose first language isn't English but who work hard to produce polished academic prose. They run their work through Grammarly's basic check. They revise carefully. They produce writing that's more formal than a native speaker's casual style. The detection tool sees formality and says "AI."

What Texas A&M's actual AI policy says

Texas A&M's approach is similar to most large universities: there is no single campus-wide AI ban. The Aggie Honor Code defines unauthorized assistance broadly, and AI use falls under that umbrella when not explicitly permitted by the instructor.

The key points:

Instructor discretion: Each professor sets their own AI policy in their syllabus. If the syllabus doesn't address AI, the default is that submitting AI-generated work as your own is an Honor Code violation.

Disclosure is expected: When AI use is permitted, students are generally expected to disclose how they used it.

The Aggie Honor Code applies: "An Aggie does not lie, cheat, or steal, or tolerate those who do." AI-generated work presented as your own falls under this.

But here's the gap: The policy puts responsibility on students to follow rules that are often unclear, while giving professors access to detection tools that are demonstrably unreliable. Students are expected to be perfect while the systems evaluating them are deeply imperfect.

How to protect yourself at Texas A&M

Given that the detection tools are unreliable and the consequences of a false accusation are severe, Aggies need a defensive strategy:

1. Document your entire writing process. Keep outlines, rough drafts, notes, and research logs. If accused, you need to prove you wrote your work. Timestamps on Google Docs or Word documents showing incremental progress are powerful evidence (a minimal snapshot script is sketched after this list).

2. Write in stages, not all at once. A paper that goes from zero to finished in one document session looks suspicious even to humans. Write your outline on Monday, your first draft on Wednesday, your revision on Friday. The documented process is your armor.

3. Use AI for research, not writing. Perplexity Academic mode and Lazy AI provide cited research without generating your text. When your AI usage is "I found sources" rather than "I generated paragraphs," there's nothing to detect.

4. Keep your AI conversations. If you use any AI tool for any purpose related to coursework — even just asking questions to understand concepts — save the conversation. It shows your intent was learning, not outsourcing.

5. Self-check with the same tools your professor uses. Before submitting, run your paper through Turnitin's draft checker (available to students through many Canvas integrations) or a free AI detection tool. If it flags sections, rewrite them in a more personal voice.

6. Know the Aggie Honor System process. If accused, you have the right to respond in writing, present evidence, and request a hearing. Don't assume you'll be found guilty just because a tool flagged you. Come prepared with documentation.

7. For international students specifically: Connect with the International Student Services office immediately if accused. They understand the visa implications and can advocate for you. Don't navigate this alone.
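On point 1 above: you don't need special software to build a timestamped paper trail. Here is a minimal sketch of a snapshot script; the file name essay.docx and the snapshots folder are placeholders, not anything Texas A&M or Turnitin provides.

```python
# Minimal draft-snapshot sketch. Run it at the end of each writing session;
# it copies your working file into a folder of dated versions so you can
# later show incremental progress. File and folder names are placeholders.
import shutil
from datetime import datetime
from pathlib import Path

DRAFT = Path("essay.docx")            # your working draft (placeholder name)
SNAPSHOT_DIR = Path("essay_snapshots")

def snapshot(draft: Path = DRAFT, folder: Path = SNAPSHOT_DIR) -> Path:
    """Copy the current draft to a timestamped file and return the new path."""
    folder.mkdir(exist_ok=True)
    stamp = datetime.now().strftime("%Y-%m-%d_%H%M%S")
    target = folder / f"{draft.stem}_{stamp}{draft.suffix}"
    shutil.copy2(draft, target)       # copy2 also preserves the file's own timestamps
    return target

if __name__ == "__main__":
    print(f"Saved snapshot: {snapshot()}")
```

Google Docs version history does much the same thing automatically; the script is only for people drafting in local files.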

The tools that keep you safe

Citation-first research tools:

  • Perplexity Academic Mode — real sources, real citations, verifiable
  • Lazy AI with Academic focus — same approach, auto-selects the best model
  • Google Scholar (traditional but reliable)

Learning aids (not writing aids):

  • Claude for concept explanation — ask it to teach you, not write for you
  • Multi-Chat to compare multiple AI perspectives on a topic
  • DeepSeek for step-by-step math and reasoning

What to avoid:

  • Copy-pasting any AI output into assignments
  • AI paraphrasing tools (QuillBot, Wordtune, etc.) on source material
  • Generating code you can't explain line by line

This system needs to change

The current situation at Texas A&M — and at universities across the country — is unsustainable. You have unreliable detection tools being used as evidence, inconsistent policies across departments, and students bearing the consequences of a system that hasn't caught up to the technology.

Students shouldn't need a defense strategy against their own university's grading process. Professors shouldn't be filing reports based solely on software output. And international students definitely shouldn't be facing visa consequences because an algorithm thought their English was "too good."

Until the system changes, protect yourself. Document everything. Use AI to learn, not to produce. And if you're accused unfairly, fight it with evidence.


Aggies: use AI the right way. LazySusan gives you 50+ AI models including Perplexity's Academic mode with real citations — research tools that help you learn without putting your academic record at risk. Student plan: $99/year with your .edu email.
