Reclaiming Authentic Learning: Faculty, students, and AI in shared responsibility

The Current Paradox of Academic Integrity

The state of academic integrity in the age of AI is a paradox. While the tools for dishonesty have changed dramatically, the underlying human reasons for cheating have not. Academic dishonesty is a constant; human behavior remains consistent across generations. Cheaters will always find a way to cheat, regardless of the available tools.

The primary motivations for cheating are still the same: a lack of respect for the assignment’s purpose, a lack of self-confidence, and a perceived lack of resources or time. AI tools themselves rarely cause cheating. Only the laziest students attempt it outright, and such attempts are often quickly exposed when prompts are left in the work or the writing style changes suddenly.

In this new age, we must revisit fundamental definitions. What constitutes dishonesty? Do our rules serve the realities of collaborative processes, where a tool may act like a colleague yet the author still shapes the final product? How do we document the use of a tool that partnered on the process, much the same way a colleague or tutor might, while leaving the final phrasing to the author? Any definition of dishonesty must account for real-world processes that were often collaborative, both before and after AI hit the scene.

From Suspicion to Responsibility and Trust

The culture of suspicion surrounding AI is often rooted in faculty’s lack of understanding of the new tools. Many worry about obsolescence and resist engaging with technologies they did not grow up with. Let’s be honest: many educators hope to retire before they have to fully engage with AI at all.

The days of training students for a world that no longer exists are over. To keep curricula relevant, faculty need resources, support, tools, and training. As long as educators fear that “red-eyed robots” are coming for their jobs, they will continue to promote a culture of suspicion. Only when educators gain firsthand experience with AI and replace media-driven fears with lived understanding will trust replace suspicion.

Authentic transparency from both educators and learners is essential. No one should hide behind classroom management techniques to mask knowledge gaps. Instead, everyone must be candid about what they know, what they don’t yet know, and how they will work together to close that gap.

Breaking the Policing Mindset

AI detection tools often reinforce a policing mindset, making “catching cheaters” the focus rather than teaching. When educators adopt this mindset, they hyper-fixate on classroom management instead of on the core tasks of teaching and learning. This arms race between AI capabilities and detection technologies also delays the harder work of redesigning assessments for modern needs.

A healthier short-term solution is to validate student writing rather than fixating on identifying AI use. This approach shifts the focus from suspicion to affirmation.

The Mindset Shift Universities Need

For responsible AI integration, universities must move beyond cost-driven decision-making toward human-centered approaches. Key shifts include:

  • Rethink Assessment: Move toward “ungrading” models where advancement is based on demonstrated competence, which naturally reduces the incentive to cheat.
  • Provide Clear Institutional Guidance: Universities must give faculty crystal-clear guidelines on what is and isn’t acceptable use of AI at an institutional level. This not only offers a sense of direction but also prevents faculty from secretly using AI, mirroring the behavior they fear in students.
  • Empower Grassroots Communities: Encouraging faculty-led communities of practice reduces the financial burden on the institution while giving faculty the opportunity to learn from one another. Since many useful AI tools are available in free versions, this approach is highly accessible.
  • Embrace Continuous Training: This isn’t a one-and-done training session. Universities must commit to regular training and create safe spaces where faculty can share their successes and failures with AI. This fosters a cultural shift toward a growth mindset and a commitment to transparency.
  • Prioritize People Over Price: Ultimately, universities should evaluate AI tools based on their impact on users, the community, and institutional well-being, ensuring they align with the institution’s mission and vision. The focus should not be on the cost of subscriptions or one-time training but on the long-term cultural change required for successful integration.

When AI Detection Gets It Wrong

Based on the latest data, AI detection tools can produce false positives at a rate of up to 12%. The real issue, however, is not the tools themselves but how emotionally charged instructors use them. Problems arise when faculty members, acting on what they believe is absolute proof, angrily confront a student. This unprofessional behavior sets the educator up for failure because there is no way to be 100% certain of a student’s intent or guilt. These confrontations often end poorly and have, in some cases, led to successful student litigation.
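To see how quickly a 12% false-positive rate erodes certainty, consider a rough back-of-the-envelope sketch. The class size and the share of honest writers below are illustrative assumptions, not figures from this article; only the 12% rate comes from the discussion above.

```python
# Back-of-the-envelope base-rate sketch (illustrative assumptions).
class_size = 200            # hypothetical course enrollment
honest_share = 0.90         # assume 90% of students wrote their own work
false_positive_rate = 0.12  # upper-end false-positive rate cited above

honest_students = class_size * honest_share              # 180 students
wrongly_flagged = honest_students * false_positive_rate  # ~22 students

print(f"Honest students: {honest_students:.0f}")
print(f"Expected honest students flagged as AI: {wrongly_flagged:.0f}")
```

Even under these generous assumptions, roughly twenty honest students in a single large course could be flagged, which is why a detector’s report can never stand in for proof of intent.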

Instead of accusing students, a more effective and professional approach is to follow a simple script. When an assignment seems suspicious, a faculty member can say, “This sounds like it was written by someone other than you.” Note that this doesn’t specifically accuse a student of using AI, as people have been writing papers for others long before large language models existed.

The conversation should then shift to a coaching moment. The teacher can show the student how the new paper differs from their previous work, highlighting their established writing style and past performance level. The instructor can then directly ask, “Did you receive outside assistance on this paper?” Many students will admit to getting help because they genuinely don’t understand the problem.

For those who insist the work is their own, the instructor can walk them through the specific “red flags” and advise them on the importance of cultivating a distinctive voice for their future career. Of course, repeated and willful instances of this behavior from a small minority of students can be addressed as a formal student conduct issue, just like any other form of intentional cheating.

Helping Faculty Integrate AI Literacy

Clear frameworks, such as the AI Assessment Scale (AIAS) or the Generative AI Inclusion Threshold (GAiLT), help define expected AI use in assignments. This clarity benefits both students and faculty.

This provides faculty with models for designing assignments where the use of AI is explicitly defined, from “no AI” to “full AI.” From the student’s perspective, it creates a clear, unambiguous signal about what’s expected, making it easier to understand a faculty member’s objections to a submitted assignment. Universities should also support faculty in assessment redesign, beyond simply providing a framework; a sketch of how such AI-use levels might be encoded appears at the end of this subsection.

  • Workshops and Communities: Faculty need access to workshops and supported communities of inquiry to learn and share productive practices.
  • New Assessment Models: Since traditional essays are now largely a “fool’s errand,” instructors should be encouraged to use project-based, multimodal submissions or assessments that require in-person engagement.
  • Shared Governance and Policy: Faculty need to be part of creating explicit policies that guide them. These policies do not need to be restrictive, but they must be clearly written and effectively communicated.

By providing these resources, universities can empower faculty to control the level of AI use in their courses and move beyond simple detection to more meaningful, authentic assessment.
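As a concrete illustration, here is a minimal sketch of how per-assignment AI-use expectations might be encoded in a course. The level names, fields, and assignments are hypothetical and only loosely inspired by scale-style frameworks such as the AIAS; they are not the official specification of any framework.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical AI-use levels, loosely inspired by scale-style frameworks
# such as the AIAS. Names and semantics here are illustrative only.
class AIUseLevel(Enum):
    NO_AI = 1                # all work completed without generative AI
    IDEATION_ONLY = 2        # AI for brainstorming/outlining; text is the student's
    AI_EDITING = 3           # AI may polish student-written drafts
    AI_WITH_EVALUATION = 4   # AI drafts allowed; student critiques and revises
    FULL_AI = 5              # AI use unrestricted; process must be documented

@dataclass
class Assignment:
    title: str
    ai_level: AIUseLevel
    disclosure_required: bool  # must students describe how AI was used?

# A hypothetical syllabus that signals expectations unambiguously per assignment.
syllabus = [
    Assignment("In-class reflective essay", AIUseLevel.NO_AI, False),
    Assignment("Research proposal draft", AIUseLevel.IDEATION_ONLY, True),
    Assignment("Final multimodal project", AIUseLevel.FULL_AI, True),
]

for a in syllabus:
    print(f"{a.title}: {a.ai_level.name} (disclosure required: {a.disclosure_required})")
```

Publishing something like this in the syllabus makes the expected level of AI involvement explicit before any work is submitted.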

Using Process Trails for Feedback

Platforms like DocuMark can record a student’s entire creation process, from drafts to final submission. This shifts the focus from policing integrity to authenticating effort and voice, and it encourages a series of small assessments rather than a single catastrophic one, gently guiding the learner toward success as they proceed through their journey.

Process trails let instructors target feedback toward skill development, such as revision, critical thinking, and problem-solving, rather than only judging the end product. Because the recording happens seamlessly in the background, it requires no extra work from the student, and both instructors and learners can concentrate on what truly matters: mastering skills and engaging deeply with ideas. The focus remains on human connection and mastery of outcomes, fostering a more positive and supportive learning environment that encourages good academic habits while minimizing the need for confrontational situations.
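For readers who want a sense of what a process trail might contain, here is a generic sketch of a timestamped revision log. The event names and fields are illustrative assumptions, not the actual data model of DocuMark or any other platform.

```python
from dataclasses import dataclass, field
from datetime import datetime

# Generic sketch of a writing "process trail": a timestamped list of revision
# events an instructor could review alongside the final submission.
# This structure is illustrative, not any platform's real schema.
@dataclass
class RevisionEvent:
    timestamp: datetime
    action: str   # e.g. "draft_created", "paragraph_revised", "source_added"
    summary: str  # brief description of what changed

@dataclass
class ProcessTrail:
    student: str
    assignment: str
    events: list[RevisionEvent] = field(default_factory=list)

    def log(self, action: str, summary: str) -> None:
        self.events.append(RevisionEvent(datetime.now(), action, summary))

trail = ProcessTrail("student_042", "Essay 2: AI and authorship")
trail.log("draft_created", "Initial outline and thesis statement")
trail.log("paragraph_revised", "Reworked introduction after peer feedback")
trail.log("source_added", "Cited two additional journal articles")

for event in trail.events:
    print(f"{event.timestamp:%Y-%m-%d %H:%M} {event.action}: {event.summary}")
```

Reviewed alongside the final draft, even a simple log like this gives an instructor something concrete to coach on: how the piece evolved, not just how it ended.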

The Challenges of AI Literacy

The key challenges to incorporating AI literacy into the education sector are numerous, but they ultimately come down to the culture and resources of each individual institution. For any new AI literacy initiative to succeed, we must first approach it with empathy and ask ourselves a series of thoughtful questions about our community and our culture.

1. Are We Ready for Change?

Before we introduce a new program, we need to genuinely understand our current environment. Are our faculty, students, and staff already feeling stretched thin? If people are feeling overwhelmed, a new requirement, no matter how valuable, can feel like just “one more thing” on an endless to-do list. The first step is to create a safe and supportive space, such as town halls or anonymous surveys, where people can voice their concerns, fears, and even skepticism about AI without judgment. Only by listening first can we gauge whether they see the initiative as something of potential value or as just another burden.

2. How Will We Connect and Communicate?

A great idea can fail if no one knows about it or if the message is confusing. We need to consider our institution’s communication style. Is it a unified system, or is it fragmented, where important messages can get lost? We need a clear plan to get the word out. For example, will we use a multi-channel approach, such as official emails, posts in a weekly newsletter, and announcements in department meetings?

Furthermore, the training itself must meet people where they are. A one-size-fits-all approach won’t work. We should consider offering different tracks—perhaps a “Curious Beginner” path focused on basics and ethics, and an “Advanced User” path for those ready to integrate specific AI tools into their workflows.

3. What’s the Motivation to Participate?

We need to respect people’s time and effort. Will they be compensated for the hours they invest in training? More importantly, is their participation formally recognized and rewarded? A powerful incentive is to offer a certificate upon completion that is directly tied to their annual performance plan or professional development goals. This shows that the institution values their commitment to growth.

4. Who is Leading This, and Where Are the Resources Coming From?

A successful initiative needs clear ownership and dedicated resources. We must identify who truly has “skin in the game.” If the program is expected to magically materialize from goodwill and people’s spare time, it is unlikely to succeed. Relying on passion alone is not a sustainable strategy.

Equally important is where the resources originate. If the funding and content come from a single department (like IT or a specific academic college), the training might be perceived as biased or politically motivated. To ensure broad trust and adoption, an initiative like AI literacy should ideally be led by a neutral, institution-level body, like a Center for Teaching and Learning or a provost’s office task force. This ensures the focus remains on empowering the entire community, not advancing a specific agenda.

Ideally, some of this momentum comes from a grassroots effort rather than being entirely top-down. There is appetite for it; this should not be too difficult if you know your people. At the same time, it is critical to have sincere buy-in from the absolute top: a senior academic leader with anything less than 100% support for the initiative will ultimately undermine its impact.

Future Readiness and Advice

  • Be bold and transparent about your use of AI. While you must always respect explicit rules prohibiting its use, you should confidently explore AI’s potential in ambiguous or “gray” areas. Use it as a tool to learn and create, but take full ownership of the final result: you are responsible for its accuracy and quality. This approach may feel risky, but it is essential for driving the conversation forward at your institution.
  • Transparent authorship ecosystems, where process logs accompany submissions, could redefine integrity as a shared responsibility rather than a game of detection. Tools like DocuMark make this proof automatic, fostering trust and collaboration.

TRANSPARENCY STATEMENT: Dr. Brian Arnold wrote these responses and takes full responsibility for the views expressed here. He partnered with Google Gemini LLM on the formatting of text and clarification of several key phrases.

Dr. Brian Arnold, PhD

He is an expert in EdTech, AI integration, and academic innovation, with over 20 years of experience. He has led transformative initiatives in higher education, focusing on AI in teaching, learning, and equity. As a professor and AI Council chair, Brian is dedicated to shaping the future of learning and fostering collaboration in educational spaces.
