"I'm Not a Cop": The Professors Who Refuse to Police AI Use
Some professors are rejecting AI detection entirely — citing inaccuracy, surveillance concerns, and trust. We found their actual syllabus language from 210 course policies.
The Statement
In a course syllabus for Seminar in Applied Anthropology at the University of North Carolina at Charlotte, Dr. Donna Lanclos wrote something that stopped us mid-scroll:
"Please do not use Generative AI tools as a substitute for your own thinking, reflections, and voice. If you are uncertain, that's OK, and it's fine for it to be reflected in your writing. I prefer that to the smooth and certain bullshit that is extruded by GAI tools."
"In short, don't do the neoliberal academy's work for them. I can't stop you, and I'm not a cop, so I won't be using detection software. But these tools extrude highly mediocre and bland and often very wrong content. None of you are mediocre, and you deserve better." -- Donna Lanclos, Seminar in Applied Anthropology, University of North Carolina at Charlotte
This is not a professor who forgot to write an AI policy. It is not someone who could not be bothered. Lanclos linked a Springer-published academic article in her syllabus alongside this statement. She has written and presented on the intersection of surveillance, learning, and institutional power. When she says "I'm not a cop," she is not being glib. She is making an argument -- one grounded in pedagogical theory, anthropological ethics, and a deeply considered rejection of the surveillance model of academic integrity.
She is also not alone.
We read all 210 course-level AI policies in the Syllabi Policies for Generative AI Repository, drawn from 181 institutions and 75 disciplines. Among those 210 policies, a small but deliberate group of professors has concluded that AI detection is not just impractical but actively harmful. Their reasoning converges on three themes: trust is more effective than surveillance, detection tools are unreliable, and policing undermines the relationship between teacher and student.
This post profiles those professors and their arguments. For our full quantitative analysis of all 210 syllabus policies, see What 210 Professors Actually Say About AI. For our database of admissions-level AI policies at 174 universities, see the GradPilot AI Policy Observatory.
The Trust-Based Approach
The most striking thing about the anti-detection professors is not what they reject but what they build instead. Several have constructed their entire AI policy around a single foundation: mutual trust.
"The AI policy for this class is based on a high degree of trust -- both my trust that you are fully disclosing your use of AI, and your trust that I will allow you to demonstrate that your use was fully disclosed." -- David Weiss, English Composition, Georgia Gwinnett College
There is a symmetry to Weiss's framing that is easy to miss. He is not simply saying "I trust you not to cheat." He is making a bilateral promise. He trusts students to disclose. In return, he promises to receive that disclosure without punishment -- to let students demonstrate that their use was honest and intentional. The contract runs in both directions.
At Northern Virginia Community College, Ray Orkwis puts it more bluntly:
"Truth and Consequences: Also, I'm here to trust you, as I hope you're here to trust me." -- Ray Orkwis, College Composition I, Northern Virginia Community College
These are not naive positions. They are not the product of professors who have not thought through the risks. They are deliberate pedagogical choices, made by people who have weighed the alternatives and concluded that the surveillance model costs more than it prevents.
The cost, as they see it, is relational. Once a professor positions themselves as a detective -- running every submission through a detector, treating flagged passages as evidence, initiating integrity proceedings on the basis of algorithmic probability scores -- the classroom relationship shifts. The professor becomes an adversary. The student becomes a suspect. And the learning environment becomes something closer to a courtroom.
But trust-based does not mean credulous. It is not "anything goes." Consider this: the same David Weiss who leads with trust also writes the following:
"While I do not rely entirely on AI detectors to accurately identify AI use, I may use some (for instance, Turnitin and GPTZero) to flag sections of submitted work that do not come across as reflecting a student's individual voice." -- David Weiss, English Composition, Georgia Gwinnett College
Read that carefully. He does not rely on detectors. But he may use them -- not as proof, but as a flag. The distinction is crucial. Detection as a heuristic is very different from detection as an enforcement mechanism. Weiss trusts his students to be honest. He also pays attention to when a submission does not sound like the student who wrote it. Those are not contradictory positions. They are what experienced teaching actually looks like: trust, but with eyes open.
The Detection Accuracy Problem
The professors who reject AI detection are not arguing from philosophy alone. They are also arguing from the data -- specifically, from the growing body of evidence that detection tools produce unreliable results.
Across the 210 policies we analyzed, 33 explicitly mention AI tool inaccuracy or the problem of hallucination. That is nearly one in six. Professors are not just abstractly worried about detection; they are citing the known limitations of the technology they are being asked to rely on.
"AI engines are notoriously unreliable on facts." -- Loretta Notareschi, First Year Writing, Regis University
"LLMs (e.g., ChatGPT) do not know, remember, or reason: they are 'fancy autocorrect.'" -- University of Texas at Tyler, Cognitive Psychology
If the AI tools themselves are unreliable -- if they fabricate sources, hallucinate facts, and produce text that is confidently wrong -- then the tools designed to detect their output face a compounding problem. They are trying to identify the output of a stochastic process using a second stochastic process. The errors do not simply add; they compound.
And the empirical evidence bears this out. A widely cited Stanford study found that 61% of TOEFL essays written by non-native English speakers were misclassified as AI-generated by popular detection tools. Sixty-one percent. That is not an edge case. That is a majority of a specific population being systematically misidentified.
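The base-rate arithmetic behind that number is worth making explicit. The sketch below is illustrative only: the prevalence (10% of essays actually AI-written) and the detector's true positive rate (90%) are assumed for the example; only the 61% false positive rate comes from the study cited above. Under those assumptions, Bayes' rule shows that a flag is far weaker evidence than it looks:

```python
def posterior_ai_given_flag(prevalence, true_positive_rate, false_positive_rate):
    """P(essay was AI-written | detector flagged it), via Bayes' rule."""
    flagged_ai = true_positive_rate * prevalence          # true positives
    flagged_human = false_positive_rate * (1 - prevalence)  # false positives
    return flagged_ai / (flagged_ai + flagged_human)

# Assumed: 10% of essays are AI-written, detector catches 90% of them.
# From the Stanford study: 61% false positive rate on non-native writers.
p = posterior_ai_given_flag(prevalence=0.10,
                            true_positive_rate=0.90,
                            false_positive_rate=0.61)
print(f"{p:.0%}")  # prints "14%" -- most flagged essays were human-written
```

Under these illustrative assumptions, roughly six out of seven flagged essays from this population would be false accusations. The exact figure depends on the assumed rates, but the qualitative point holds: a high false positive rate turns a detector flag into noise.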
The implications for higher education are severe. International students -- who already navigate visa requirements, cultural adjustment, language barriers, and financial pressures that domestic students do not face -- are disproportionately likely to be falsely accused of AI use. Their writing, which may be grammatically correct but stylistically different from native English prose, triggers the same patterns that detectors associate with machine-generated text. We covered this problem in detail in our investigation of international students and AI detection bias.
The unreliability extends beyond the false positive problem. As we documented in our analysis of Turnitin's failures, even the market-leading detection tool has demonstrated significant accuracy problems. And as we explored in our investigation of which colleges actually use AI detectors, the gap between claimed accuracy and observed performance is substantial.
Professors who reject detection are not ignoring academic integrity. They are looking at the available enforcement technology and concluding it is not fit for purpose. That is a different argument entirely.
Georgia State's "Preponderance of Information" Standard
While individual professors build trust-based policies, at least one institution has developed a formal alternative to detection-based enforcement. Georgia State Perimeter College has articulated what may be the most institutionally sophisticated approach in the entire dataset:
"Georgia State University evaluates academic honesty concerns using a preponderance-of-information standard rather than automated detection tools. Determinations are based on a review of submitted work, assignment expectations, observed inconsistencies, and relevant process evidence." -- Georgia State Perimeter College, Freshman Composition
Unpack what that language does. It explicitly names a standard -- "preponderance of information" -- borrowing from legal terminology to signal that this is a deliberate, defensible framework, not an ad hoc judgment call. It explicitly rejects automated detection tools. And then it lists what evidence it will use instead: the submitted work itself, the assignment expectations, observed inconsistencies between what was submitted and what was expected, and process evidence -- meaning drafts, notes, revision history, and other artifacts of the writing process.
This is not "we do nothing." This is "we do something better." Georgia State has replaced the black box of algorithmic detection with a transparent evidentiary standard. A student accused under this framework can know exactly what evidence is being considered and can respond to it.
Compare this to Cal Poly's inversion of the burden:
"In my class, the burden does not fall on the instructor to prove that AI was used, but rather on the student to prove that learning has occurred." -- Cal Poly, Engineering, Design, and Social Justice
Both approaches share a common move: they shift the frame from "catching cheaters" to "assessing learning." Detection-based enforcement asks: "Did the student use AI?" Georgia State and Cal Poly ask a different question: "Did the student learn?" The second question is both harder to game and more aligned with the actual purpose of education.
The Alternative Enforcement Menu
If you do not use detection software, how do you actually enforce an AI policy? The professors in this dataset have developed a range of alternatives that are, in many cases, more rigorous than running a submission through Turnitin.
Knowledge Checks
The College of Western Idaho includes what it calls "knowledge checks for course content" -- structured conversations with students about their submitted work. The logic is straightforward: a student who wrote their own essay can discuss it. A student who did not, cannot. No algorithm required.
Oral Defense
Multiple professors across the dataset reserve the right to discuss submitted work verbally with students. This is the oldest form of academic assessment, predating not just AI detectors but the internet itself. A student who can walk through their argument, explain their sources, and respond to questions about their reasoning has demonstrated mastery in a way that no written submission alone can prove. A student who cannot has revealed something important -- again, without any detection software involved.
Revision History
Professors who require drafts and revision history can see the evolution of student thinking. A paper that appears fully formed with no prior drafts is suspicious not because an algorithm flagged it, but because writing does not work that way. The professor who reads a rough first draft, a restructured second draft, and a polished final version has far more evidence of genuine student work than any detector could provide.
The "Never Hit Copy" Heuristic
Georgia Tech offers one of the cleanest rules in the dataset:
"Never hit 'Copy' within your conversation with an AI assistant." -- Georgia Tech, BMED2250
This is enforcement by design rather than by detection. If a student is allowed to consult AI but prohibited from copying its output directly, the AI becomes a thinking partner rather than a ghostwriter. The student must process, rephrase, and integrate the information -- which is, of course, a form of learning. The rule is elegant because it is nearly impossible to violate while also learning nothing.
Environmental Accountability
Kennesaw State takes a lateral approach: instead of detecting AI use, require students to account for it. Their policy asks students to estimate the carbon, water, and electricity costs of each AI interaction. This reframes AI use not as a moral failing but as a resource consumption decision -- one with measurable environmental consequences. A student who must calculate that their ChatGPT conversation consumed an estimated 500 milliliters of water is thinking about AI in a fundamentally different way than a student who is merely trying not to get caught.
The "Meet With Me" Requirement
Cal Poly's approach is perhaps the simplest: if AI use is suspected, the student meets with the professor. Not to be punished. Not to face an integrity board. To demonstrate learning. This turns the enforcement moment into a pedagogical one. The student who can explain their work walks away having deepened their understanding through articulation. The student who cannot gets a conversation about what went wrong, rather than a sanction.
The Numbers Behind the Sentiment
The anti-detection professors are a minority. But the data suggests they are the leading edge of a larger trend, not outliers.
Here is what the numbers from our analysis of 210 syllabus policies reveal:
- Only 4 of 210 policies name a specific detection tool. That is 1.9%. The overwhelming majority of professors -- even those with strict AI policies -- do not mention Turnitin, GPTZero, or any other detector by name.
- At least 4 policies explicitly reject detection. These are not professors who forgot to mention it. They are professors who considered it and said no.
- 33 policies cite AI inaccuracy as a concern -- either the inaccuracy of AI-generated content or the inaccuracy of the tools used to detect it.
- 84.4% of restrictive policies have no enforcement mechanism. They tell students not to use AI but say nothing about how that prohibition will be enforced. The policy is, functionally, an honor system -- whether the professor intended it that way or not.
- 58.1% of all policies take conditional or mixed approaches focused on learning rather than policing. The majority position is not "ban AI" or "allow AI." It is "use AI thoughtfully, and here is what that means in my course."
The pattern is strikingly similar to what we found in admissions. As we documented in our analysis of the enforcement gap, 77.6% of universities at the admissions level have no enforcement mechanism (E0). The same gap between policy and enforcement that we see in admissions offices appears in classrooms. Institutions write rules they cannot or will not enforce, and the professors who are honest about this -- who say "I'm not a cop" -- are simply making the implicit explicit.
For a deeper dive into the enforcement gap across both admissions and classroom settings, see our enforcement gap investigation.
What This Means for Students
If you are a student reading this, here is the critical takeaway: "no detection" does not mean "no consequences."
The professors who refuse to use detection software are not saying AI use is fine. Most of them are saying the opposite -- they care deeply about authentic student work, and they have concluded that detection tools are not the right way to protect it.
Here is what that means in practice:
Professors notice voice inconsistency. A student who writes in one voice all semester and then submits a paper that reads as if a different person wrote it will be noticed. Not by an algorithm. By the human being who has been reading their work for weeks.
Professors notice when students cannot discuss their own work. The oral defense, the knowledge check, the casual "tell me about your argument" conversation -- these are far harder to fake than a written submission. A student who outsourced their thinking to AI will struggle in these moments in ways that are immediately apparent.
Trust-based classrooms often have higher expectations, not lower. When a professor extends trust, they are also extending an expectation of honesty that goes beyond what a detection-based system requires. In a detection-based system, the implicit standard is "do not get caught." In a trust-based system, the standard is "be transparent." Deception in a trust-based system is treated more seriously precisely because the professor invested relational capital in the student's honesty.
Students in these classrooms are expected to disclose. The professors profiled in this post are not saying "use AI secretly and I will look the other way." They are saying "I trust you to tell me the truth about how you produced this work." That is a higher standard of integrity, not a lower one.
If you are navigating these questions in the context of college or graduate admissions, our guide on whether to tell colleges you used AI covers the strategic considerations. And if you need to write an AI disclosure statement, our guide to writing AI disclosures for college applications walks through the process step by step.
The Principled Position
The professors in this post are not lazy. They are not technophobic. They are not ignoring the problem of academic integrity. They have looked at the available tools for enforcing AI policies, found them wanting, and built something different.
Donna Lanclos builds her policy on anthropological ethics. David Weiss builds his on bilateral trust. Georgia State builds on evidentiary standards borrowed from law. Cal Poly builds on demonstrated learning. Georgia Tech builds on a single, elegant rule about the copy button.
None of them have given up on the idea that students should do their own thinking. All of them have given up on the idea that an algorithm can tell them whether students did.
In a landscape where 84.4% of restrictive policies have no enforcement mechanism anyway, these professors are at least being honest about it. They are naming the gap that everyone else is quietly ignoring. And in doing so, they are building something that might actually work: classrooms where the relationship between teacher and student is strong enough that detection software is beside the point.
Data source: All faculty quotes in this post are drawn from the Syllabi Policies for Generative AI Repository, a public dataset of 210 course-level AI policies from 181 institutions. Admissions enforcement data from GradPilot's AI Policy Observatory and enforcement gap analysis.