Why students don’t disclose AI usage (and how faculty can address it)

When students hesitate to disclose AI use, the instinct is to assume dishonesty. But research consistently shows non-disclosure is a trust problem, not a character problem. Fear of punishment under vague policies, inconsistent enforcement, and disclosure forms that feel like confession all suppress transparency. Faculty who understand these structural drivers can rebuild the conditions for honest AI disclosure before reaching for detection.

Unpacking the Fear: Why Students Don’t Disclose AI Use

A 2024 study at King’s Business School, King’s College London, found that up to 74% of students failed to complete a mandatory AI declaration on their coursework coversheet, even though every other section of the same form was routinely filled in. The researchers identified four reasons: fear of academic consequences, unclear guidelines, inconsistent enforcement across courses, and peer influence. None of those are character flaws. All of them are predictable responses to a policy environment that asks for honesty without first creating the conditions for it.

A separate 2025 study of students and faculty at a research university in Hong Kong described what students experience as the “invisible labor” of ambiguous policy: decoding contradictory enforcement signals, finding loopholes to reduce exposure, and feeling unsettled by how differently peers were treated. Where policies are vague, students do not become more compliant. They become more strategic.

What makes disclosure feel risky

Students are also operating in an environment where AI detection tools are known to make mistakes. A February 2026 survey covered by Inside Higher Ed found that three-quarters of UK students who use AI are stressed that their work will be wrongly flagged as cheating. At the 2025 EDUCAUSE Annual Conference, researchers from Lamar University and Texas A&M described students juggling conflicting class-level and assignment-level AI policies, with no reliable way to know when they were in the clear.

Put yourself in that position. You used AI to brainstorm. You are not sure if that counts as a violation. You know detection tools make errors. Staying silent feels like the safer bet. That calculation is rational, not dishonest, and it will keep repeating as long as disclosure carries more risk than silence.

The problem with vague policies

A 2024 Global AI Student Survey by the Digital Education Council found that 86% of students were unaware of their university’s AI guidelines, even at institutions that had published them. This is not indifference. Students genuinely do not know what the rules are. Without a shared understanding of where acceptable AI assistance ends and prohibited use begins, they default to silence.

A 2025 paper in New Directions for Adult and Continuing Education noted that even institutions with AI policies in place give faculty limited guidance on how to verify student disclosures or what steps to take when AI use is suspected but unprovable. The policy states an expectation. The enforcement process does not exist. That gap is exactly where trust collapses on both sides.

What faculty can change right now

The first shift is specificity. A blanket statement like “AI use must be disclosed” leaves students guessing. Clear course-level guidance specifying what is permitted (brainstorming, grammar checks, outlining) and what is not (drafting, submitting AI-generated text) gives students a real line to work with. The University of North Carolina’s approach of mandating AI policy language on every undergraduate syllabus while allowing instructors to customize it is one model that moves in this direction.

The second shift is in how disclosure itself is framed. When students experience a declaration form as a mechanism to catch them, they respond accordingly. Framing it as a record of their writing process rather than a confession changes the dynamic. Students who believe disclosure helps faculty understand their learning are more likely to be honest than students who believe it opens a misconduct inquiry.

The policy-to-practice divide

Why disclosure alone is not enough

Even well-designed disclosure frameworks have a ceiling. They rely entirely on self-reporting. A student who declares AI use is taken at their word. A student who does not declare is indistinguishable from one who genuinely did not use AI. Faculty are left comparing finished documents against an invisible standard, with no visibility into how those documents were actually produced.

This is the authorship validation gap that self-reporting cannot close. Whether a student brainstormed with AI or had it write their essay, the final document may look the same. Disclosure, by itself, cannot resolve that. The institutions beginning to move past this limitation are those pairing disclosure requirements with documentation of the writing process itself, capturing keystrokes, revisions, thinking pauses, and AI interactions in real time. When the writing journey is on record, what a student claims about their process can be reviewed against what actually happened.

From catching students to understanding them

The question is not really whether students should disclose AI use. They should. The question is whether the systems institutions have built make honesty feel safer than silence. Right now, for many students, they do not.

The finished essay tells faculty very little about whether learning happened. The process that produced it tells them a great deal. Faculty who shift their focus from the final document to the writing journey ask a fundamentally different question: not “did this student cheat?” but “how did this student actually work?” That shift changes what disclosure means for students, and it changes what institutions need to support it.

Faculty navigating the authorship validation challenge in their courses may find that writing process documentation tools like DocuMark offer what disclosure forms alone cannot: a submission integrity record that makes the writing journey reviewable, turning student claims about process into something that can actually be checked. The question shifts from suspicion to evidence, and from there, trust has a chance to follow.
