The Best Things Professors Have Written About AI: A Wall of Quotes from 210 Syllabi

From Harvard's one-word AI policy to a professor who compares ChatGPT to Janet from The Good Place, here are the most striking, funny, and thoughtful things professors have written about AI in their syllabi.

GradPilot Team · February 22, 2026 · 13 min read

From a one-word ban to a meditation on swimming with Janet -- professors are writing some of the most interesting prose about AI, and most of it is buried in syllabi no one reads.

We read the AI policies from 210 syllabi at colleges and universities across the United States. Most were boilerplate. Copy-pasted from a dean's office template, dropped into a course outline between the grading rubric and the disability services paragraph. Unremarkable.

Some, however, were extraordinary. Thoughtful. Funny. Philosophical. Angry. Vulnerable. A few were accidentally poetic. What follows is a curated collection of the most striking passages we found -- organized by tone, with minimal commentary. The quotes speak for themselves.

Data source: Syllabi Policies for Generative AI Repository. For our full quantitative analysis of syllabus language patterns, see What 210 Professors Actually Say About AI.


"Don't." -- The Blunt

Some professors do not see the need to elaborate. When the answer is no, the answer is no.

"Don't."

-- Harvard, Gender and Technology course. Harvard's admissions policy is equally blunt: L4, AI prohibited.

One word. No qualifier. No hedge. No three-paragraph explanation with a flowchart of acceptable use cases. Just: don't.

"I prefer that to the smooth and certain bullshit that is extruded by GAI tools."

-- Donna Lanclos, Seminar in Applied Anthropology, University of North Carolina, Charlotte

The word "extruded" does a lot of work in that sentence. It conjures an industrial process -- something squeezed through a die, uniform and featureless. Lanclos is not being careless with language here. She is being precise.

"I want to read ONLY the beautifully human writing that comes from your unique human brain."

-- UC Santa Cruz, Sociology course

"In my class, the burden does not fall on the instructor to prove that AI was used, but rather on the student to prove that learning has occurred."

-- Cal Poly, Engineering, Design, and Social Justice

This is a quiet inversion of how most academic integrity conversations work. Instead of a detection arms race, this professor shifts the entire frame: prove you learned something.

"At your stage, using AI is a waste of time at best, and a crime against your ability to learn at worst."

-- Lyon College, Computer Science

"Due to the hands-on, exploratory nature of the content in this laboratory course, I do not allow any use of generative artificial intelligence."

-- Wendy St. John, BIOL110L, College of Marin. Thirteen words of reasoning. Done.


"Imagine We Had a Janet" -- The Creative

A few professors reached for metaphor. The best ones landed.

"Imagine you lived in a world where everyone had immediate access to an anthropomorphized vessel of knowledge... Imagine, that is, we had a Janet. Yet this Janet had also just been rebooted, so she would sometimes give us a cactus when we asked for water."

-- Wake Forest, Ethics First Year Seminar

If you have not watched The Good Place, Janet is a non-human entity who contains all the knowledge in the universe but occasionally malfunctions in spectacular ways. The metaphor is almost unreasonably good. And the professor was not done.

"If you wanted to learn to swim, you might ask Janet to explain it to you, to show you how to swim, and maybe even provide feedback on your stroke. But you wouldn't ask her to show up at the gym at 6 AM and swim your practice laps for you."

-- Same professor, same syllabus, Wake Forest

Two paragraphs of a syllabus and this professor has articulated the distinction between using AI to learn and using AI to avoid learning more clearly than most op-eds manage in 2,000 words.

"Think of AI like an e-bike. If our goal is to only get somewhere faster, an e-bike might do the job. If our goal is to become a better cyclist, an e-bike can interfere with that happening. Long term, if you use an e-bike to do all your cycling you might end up in worse shape. Worse when the battery dies you will be stranded."

-- Georgia Tech, BMED2250

The e-bike analogy works because it refuses to moralize. It does not say AI is bad. It says it depends on what your goal is. And that last line -- "when the battery dies you will be stranded" -- quietly raises the stakes.

"AI answers are based on crowdsourced logic. If you've ever eaten at a 4.8-star Yelp restaurant and the food was bad, you know why this is problematic for accuracy."

-- Pierce College, CIS 130

"If you want to see the dangers of AI for pictures, go to the 3rd floor of the S building and look at the BEES. One has 5 wings, 2 have one antenna, and some of the flowers have the wrong leaves."

-- San Diego City College, Art

There is something wonderful about this. No abstraction, no philosophical hand-wringing. Just: go look at the bees. Count the wings.

"When we choose to use it, we should be using it as a ladder and not a crutch."

-- Harvey Mudd College, Microprocessor-based Systems


"None of You Are Mediocre" -- The Encouraging

Not every professor responded to AI with alarm. Some used the moment to tell students what they actually thought of them.

"I can't stop you, and I'm not a cop, so I won't be using detection software. But these tools extrude highly mediocre and bland and often very wrong content. None of you are mediocre, and you deserve better."

-- Donna Lanclos, UNC Charlotte

Lanclos appears twice in this collection for a reason. The full passage is even better than the soundbite. "I'm not a cop" rejects the surveillance framing. "None of you are mediocre" reframes the entire conversation from prohibition to aspiration. This is a professor who trusts her students.

"You are already a better writer than Copilot."

-- Colorado Mesa University, English 111

Eight words. No footnotes needed.

"The AI winter is over... a lot of the teaching-focused literature seems to be coming from an adversarial standpoint, and I am getting slow with age. Thankfully, your minds are young and sharp, and can assist me in the process."

-- Habib University, CS 412 Algorithms

This one stands out for its vulnerability. A computer science professor admitting they are still figuring it out, and asking students to figure it out alongside them. This is rare in syllabus language, which tends toward the declarative and authoritative.

"It is extremely tempting to rely on AI for something that seems initially 'much better' than you can do -- but giving in to that temptation only leaves you less and less prepared -- and feeling more and more like an impostor."

-- Mercer University

The impostor syndrome connection is sharp. The professor is arguing that AI does not cure impostor syndrome -- it creates it, by widening the gap between what you submit and what you actually know.

"Technology has been important to writing since its ancient invention, but the constant has been (and will be) the human imagination and the conversation of humanity that drives us to seek technology to accomplish our ambitions."

-- Monmouth University


"ChatGPT Cannot Imagine Freedom" -- The Philosophical

Some professors used their syllabi to do genuine intellectual work -- treating the AI question not as an administrative nuisance but as a real philosophical problem worth thinking about.

"An Affirmation of Humanity: God created humans in his image, and gifted us with creativity and language."

-- Biola University, English/Writing

This is from a Christian university, and whatever your theological commitments, the framing is notable: creativity and language as divine gifts that should not be delegated. It is a complete worldview compressed into one sentence of a syllabus.

"ChatGPT cannot imagine freedom or alternatives; it can only present you with plagiarized mash-ups of the data it's been trained on."

-- Columbia University, UN & Globalization

The word "imagine" is doing important philosophical work here. The professor is not saying ChatGPT produces bad text. The professor is saying it cannot engage in the fundamental act that makes intellectual work matter: imagining that things could be different than they are.

"Finding this style usually happens in a zone that Brian Eno terms, 'happy accidents.' Utilizing AI to generate your own work prevents you from making those accidental connections."

-- University of Pittsburgh, course unspecified

Brian Eno's concept of "happy accidents" -- the productive mistakes that lead to creative breakthroughs -- is a genuinely interesting lens. If AI optimizes for plausibility, it optimizes away the weird detours where original thinking happens.

"LLMs (e.g., ChatGPT) do not know, remember, or reason: they are 'fancy autocorrect.' They predict which words tend to be near other words."

-- Lauren Kirby, University of Texas at Tyler, Cognitive Psychology

Coming from a cognitive psychology professor, this distinction between prediction and reasoning has particular weight. This is someone whose entire field is the study of how minds actually work, and they are telling you: this is not that.

"LLMs are not a neutral tool like computers, internet searches, or word processing software, but essentially a highly sophisticated form of plagiarism."

-- UC Berkeley, Later Wittgenstein (Philosophy)

A philosophy course on Wittgenstein -- a thinker obsessed with the relationship between language and meaning -- calling LLMs "a highly sophisticated form of plagiarism." There is a deep irony here that the professor is certainly aware of: Wittgenstein argued that language only has meaning in use, within specific forms of life. An LLM uses language without participating in any form of life at all.


"This Policy Was Created by ChatGPT" -- The Self-Aware

Here is where things get strange. A not-insignificant number of professors disclosed -- sometimes proudly, sometimes sheepishly -- that their AI policy was itself written with AI.

"This was the policy created by CHATGPT"

-- Santa Barbara Community College, Finance. That is the entire disclosure.

No elaboration. No apology. No explanation of the irony. Just an all-caps confession and onward to the grading rubric.

"Can you spot the sentences it wrote?"

-- CSU Fullerton, Criminal Justice

This professor turned their own disclosure into a challenge. It is either a teaching moment or a dare, and possibly both.

"This policy was edited for clarity using ChatGPTo"

-- University of Missouri, Biology (the typo is preserved from the original)

"ChatGPTo" -- not quite ChatGPT, not quite a model name, possibly an autocorrect artifact. There is a small poetry in a policy about AI tools containing a typo that may itself be an AI artifact.

"Several GenAI models were used in the creation of this document, including but not limed to ChatGPT, Claude, Co-pilot, and Gemini."

-- Western New England University (typo: "limed" instead of "limited")

Four different AI models were used to write a policy about AI. "Not limed to" is the kind of error that a human proofreading AI output might miss -- or that a human typing quickly would make. Either way, it survived into the final syllabus.

"Note...some of this was written with Ai... I also likely saw something on Twitter that prompted me writing this. I can't remember."

-- Clemson University

The honesty here is almost disarming. The professor cannot remember where the idea came from, is not sure exactly which parts are AI-written, and is telling you all of this. It is the most human thing in the entire dataset.

Across the 210 syllabi we analyzed, 15 policies (7%) admit to being at least partly AI-written. Using AI to write rules about AI is either peak hypocrisy or peak honesty, depending on your read. We lean toward honesty. At minimum, these professors are modeling the transparency they ask of their students.


"16 Ounces of Fresh Water" -- The Environmental

A small but growing number of professors are raising a question most AI discourse ignores entirely: what does this cost the planet?

"Not to mention that a conversation with ChatGPT can consume 16 ounces of fresh water, the size of the water bottle that you brought to class."

-- University of Washington, Writing About Technology

The water bottle comparison is brilliant pedagogy. It takes an abstract resource cost and places it on the desk in front of you. A single conversation with ChatGPT can cost a water bottle's worth of fresh water. Now multiply by every student in every class.

Students must include "a short impact statement estimating: Carbon emissions (CO2 in grams or kilograms), Water usage (liters), Electricity usage (watt-hours)"

-- Kennesaw State University, PRWR6255 Grantwriting

This is the single most unusual requirement in the entire dataset. The professor is not banning AI. They are requiring students to calculate the environmental cost of every AI interaction they use for class. It treats AI use as an environmental decision, not just an academic integrity decision.

"We understand there are significant environmental & ethical impacts associated with using Large Language Model (LLM) Generative AI (GAI) Tools, and thus we are not requiring their use."

-- University of Florida, Social Entrepreneurship

The logic here is worth pausing on. The professor is not saying AI is banned. They are saying they will not force you to use something with known environmental and ethical costs. It is an opt-in framing built on an environmental conscience.


What We Took Away

These quotes come from 210 syllabi across dozens of institutions -- R1 research universities, community colleges, liberal arts schools, religious institutions. The professors teach computer science, philosophy, art, biology, finance, and grantwriting. They agree on almost nothing except that this moment matters and deserves more than boilerplate.

The best syllabus policies -- whether permissive or prohibitive -- share one quality: they sound like they were written by a specific person thinking carefully about a specific course. The worst ones (not collected here) sound like they were written by a committee, or by the very tools they are trying to regulate.

All quotes sourced from the Syllabi Policies for Generative AI Repository, a publicly maintained dataset of university syllabus language on generative AI. For how universities handle AI at the admissions level -- a different but related question -- see our AI Policy Observatory.
