Beyond Detection: Restoring Confidence in Academic Integrity in the Age of AI

Poster Discussion at Charleston Conference 2025, USA by Rebecca Bryant

Generative AI has rapidly permeated U.S. higher education, with studies reporting that over 92% of students use AI tools to complete academic tasks. This convenience, however, comes at a cognitive cost: recent research indicates that frequent reliance on generative AI may diminish cognitive engagement in learning. Meanwhile, 83% of faculty express concern over students’ ability to evaluate AI-generated content, fueling anxiety about academic integrity. At the same time, traditional AI detectors are proving unreliable, with studies reporting biased outputs and high error rates. This problem is exemplified by recent protests at universities over academic sanctions imposed after detectors incorrectly flagged students’ work.

This clearly calls for a shift from detection toward guidance of responsible AI use. It also places educators in an ethical minefield, raising uncertainty about how to uphold academic integrity without clear policies or trustworthy tools. As pressure mounts, we propose solutions that move beyond the flawed detection paradigm to address core faculty concerns: access to reliable tools, robust training in AI literacy, and shared best practices for integrating AI.

Our poster will (1) provide insights into the rise in AI adoption, faculty concerns, and the inadequacy of current solutions, and (2) explore a proactive approach that shifts from policing to transparency, improving student-faculty trust and refocusing on actual learning outcomes. We will present a proactive, transparent solution, Trinka DocuMark, to help institutions safeguard academic integrity, promote responsible AI use, encourage constructive student dialogue, and restore trust in higher education in the age of generative AI.