How to create an AI usage policy for your university

Most universities now have some form of AI policy on paper. Very few have one that actually works in practice. A policy that defines principles but leaves enforcement, disclosure, and course-level implementation to chance is not a policy; it is a wish. Building one that holds up requires clarity about what the policy is for, who it binds, and how compliance can be verified beyond a student’s word.

Why most AI policies fall short

The 2024 EDUCAUSE AI Landscape Study found that only 23% of institutions had any AI-related acceptable use policies in place, and nearly half of respondents disagreed that their institution had appropriate guidelines for ethical and effective decision-making about AI. A 2025 study in New Directions for Adult and Continuing Education found that while 94% of top US universities had developed AI guidelines, those guidelines varied widely in scope, enforceability, and faculty involvement, leaving educators uncertain about what is permissible, encouraged, or restricted.

The problem is not that institutions lack the will to create good policies. It is that most policies are written at the institutional level and handed down without the infrastructure needed to make them real at the course level. A provost-level statement on transparency does not tell a student whether they can use AI to brainstorm their literature review.

Start with what the policy needs to do

Before drafting any language, it helps to be clear about what the policy is actually trying to accomplish. The EDUCAUSE 2024 Action Plan on AI Policies organises this into two domains: governance and operations. Governance covers data privacy, intellectual property, equitable access, and how AI use is monitored and evaluated. Operations covers faculty professional development, infrastructure, and implementation at the course level.

A policy that only addresses one domain will develop gaps in the other. Institutions that focus on governance without operational support leave faculty to interpret vague principles independently. Institutions that focus on operations without clear governance leave themselves exposed on data privacy and enforcement. Both dimensions need to be addressed, even if in different documents.

Define permitted and prohibited use clearly

This is the step most institutions handle poorly. Statements like “students must use AI responsibly” or “AI use must be disclosed” are not definitions. They are aspirations. Research from King’s College London found that 74% of students failed to complete mandatory AI declarations, with ambiguous guidelines identified as one of the key reasons. Students who do not know exactly where the line is will draw their own, inconsistently.

Clear policy language specifies which AI tools are permitted, for which tasks (brainstorming, grammar checking, research assistance, drafting, editing), in which contexts (formative versus summative assessments), and what disclosure looks like in practice. Duke University’s advice is worth taking seriously here: one-size-fits-all AI policies are not sustainable. Course-specific statements, anchored to a shared institutional framework, give faculty the flexibility to match AI permissions to their pedagogical goals while giving students a consistent baseline expectation.
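
To make this concrete, the sketch below shows one way a course-level statement could be expressed as a structured permissions matrix anchored to an institutional tier. It is purely illustrative: the tier names, task categories, tool descriptions, and course title are assumptions for the example, not a standard taxonomy or any particular university’s framework.

```python
# Illustrative sketch only: one way to structure a course-level AI policy
# statement against a shared institutional framework. Tier names, task
# categories, and tool lists are assumptions, not a standard taxonomy.

COURSE_AI_POLICY = {
    "course": "HIST 210: Research Methods",          # hypothetical course
    "institutional_tier": "restricted",              # e.g. open / restricted / prohibited
    "permitted": {
        "brainstorming": ["any approved chatbot"],
        "grammar_checking": ["any approved grammar tool"],
        "research_assistance": ["library-licensed AI search tools"],
    },
    "prohibited": {
        "drafting": "No AI-generated prose in summative essays",
        "substantive_editing": "No AI rewriting of text before submission",
    },
    "contexts": {
        "formative": "Permitted uses apply; disclosure encouraged",
        "summative": "Permitted uses apply; disclosure required",
    },
    "disclosure": "Append a statement naming the tool, the task, and how the output was used",
}


def is_permitted(task: str, policy: dict = COURSE_AI_POLICY) -> bool:
    """Return True if a task category is explicitly listed as permitted."""
    return task in policy["permitted"]
```

The format matters less than the property it enforces: every question a student might ask (which tool, which task, which context, what disclosure) has an explicit answer rather than an aspiration.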

Build the disclosure process around trust, not compliance

Many institutions treat disclosure as the primary integrity mechanism. Students declare whether and how they used AI, and the institution relies on that declaration. The problem is that disclosure without trust does not produce honest disclosure. A 2025 study tracking AI policy at a Hong Kong university found students treating vague policies as puzzles to decode rather than norms to follow, investing significant effort in navigating ambiguity rather than engaging transparently.

Disclosure works better when it is designed as documentation of process rather than confession of use. The framing matters. Students who report their AI use as part of describing how they worked, rather than admitting to something potentially prohibited, are more likely to give honest accounts. Institutions should pair disclosure requirements with restorative rather than purely punitive responses, particularly for first-time or unclear cases, to remove the incentive to stay silent.

Give faculty what they need to implement the policy

Even a well-written institutional policy will be inconsistently applied if faculty receive it without support. A 2025 Frontiers in Education study examining faculty perspectives on academic integrity policy found that inconsistent enforcement is not usually the result of faculty indifference. It reflects procedural complexity, unclear guidance, and a lack of training on how to handle suspected AI misuse. Faculty operating without clear process fall back on their own judgement, which varies.

Effective implementation requires sample syllabus language that faculty can adapt, clear guidance on what to do when AI misuse is suspected, and training on what process documentation (rather than detection scores) looks like as evidence. Policies that are handed down without these support structures are not ready to be enforced.

Make verification part of the design

The final piece most policies omit is a credible verification mechanism. Self-reported disclosure, even well designed, tells an institution what students say they did, not what they actually did. This is the authorship validation gap, and it is where many integrity cases become unprovable: a student denies AI use, the institution has a suspicion but no evidence, and the case either goes unaddressed or proceeds on thin grounds.

A growing number of institutions are exploring writing process documentation as the structural solution to this gap. Rather than scanning the finished submission for AI signals, process documentation captures the writing session from start to finish: keystrokes, revisions, AI interactions, and thinking pauses. The resulting record does not replace faculty judgement, but it gives that judgement something real to work with. For administrators building out an authorship validation workflow, tools like DocuMark offer a submission integrity layer that makes student process claims verifiable rather than assumed, without requiring institution-wide deployment from day one.
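
As a rough illustration of what such a record might contain, the sketch below models a writing session as a sequence of timestamped events with a simple summary view. It is an assumption-laden example of the general idea, not DocuMark’s actual data model; all field names and event kinds are hypothetical.

```python
# Purely illustrative sketch of a writing-process record. This is not
# DocuMark's data model; field names and event kinds are hypothetical.

from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List


@dataclass
class SessionEvent:
    timestamp: datetime
    kind: str    # e.g. "keystroke_burst", "paste", "revision", "ai_interaction", "pause"
    detail: str  # e.g. words typed, pasted word count, prompt summary


@dataclass
class WritingSessionRecord:
    student_id: str
    assignment_id: str
    started_at: datetime
    events: List[SessionEvent] = field(default_factory=list)

    def summary(self) -> Dict[str, int]:
        """Count events by kind: the sort of overview a reviewer might start from."""
        counts: Dict[str, int] = {}
        for event in self.events:
            counts[event.kind] = counts.get(event.kind, 0) + 1
        return counts
```

A record along these lines does not decide a case on its own; it gives faculty judgement a timeline to reason over instead of a single finished document.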

Use real examples before writing your own policy

Before drafting, see how peer institutions have handled this. Policies from Stanford, MIT, ETH Zurich, Imperial College London, and dozens of other universities are publicly available and offer a concrete sense of the range of approaches, from permissive frameworks that leave decisions to instructors, to structured tiered systems that specify AI use by assessment type.

Trinka’s University AI Policy Hub brings together AI policies from universities across the US and beyond in one searchable repository, making it easier to compare approaches, find language that fits your institutional context, and understand how leading institutions are handling disclosure, enforcement, and course-level implementation. If you are building or revising your institution’s AI policy, it is a practical starting point.