Can You Use ChatGPT in College? AI Policies in 210 Syllabi Across 75 Disciplines
We analyzed 210 course-level AI policies from 181 universities across 75 disciplines. Here's what professors actually say — from total bans to required use — and what it means for students.
One Professor, Two Policies, Zero Consistency
At College of Marin, Professor Wendy St. John teaches two biology courses. Both are listed under BIOL110. Both appear in the same course catalog. And the AI policies could not be more different.
In the lecture section, BIOL110, the syllabus reads:
"The use of generative artificial intelligence (AI) tools...is an emerging skill, and throughout the semester, I will provide basic tutorials about how to leverage it."
In the lab section, BIOL110L, the same professor writes:
"Due to the hands-on, exploratory nature of the content in this laboratory course, I do not allow any use of generative artificial intelligence."
Same professor. Same subject. Same semester. One course embraces AI as an emerging skill worth teaching. The other bans it entirely.
This is not an anomaly. It is the pattern. Across 210 course-level AI policies we analyzed from 181 universities spanning 75 academic disciplines, the single most consistent finding is inconsistency. The question is not whether your school allows AI. It is whether this specific professor, in this specific course, in this specific section, allows AI.
If you are a student trying to figure out the rules, "check the university website" is not enough. You need to check the syllabus.
Data source: Syllabi Policies for Generative AI Repository
The Big Picture: 210 Policies in One Chart
We classified every policy in the dataset into one of three categories: Restrictive (AI is prohibited or treated as academic dishonesty), Mixed/Conditional (AI is permitted under specific circumstances with disclosure or restrictions), and Embracing (AI is actively encouraged or required).
The overall distribution:
- Mixed/Conditional: 71% -- the vast majority
- Restrictive: 15% -- a clear minority
- Embracing: 14% -- roughly equal to restrictive
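The arithmetic behind that distribution can be sketched in a few lines. This is an illustrative reconstruction only: the category counts below (149/31/30 out of 210) are back-calculated from the article's rounded percentages, not taken from the underlying dataset.

```python
from collections import Counter

# Hypothetical tally: labels back-calculated from the article's
# rounded percentages (71% / 15% / 14% of 210 policies).
policies = (
    ["Mixed/Conditional"] * 149   # ~71%
    + ["Restrictive"] * 31        # ~15%
    + ["Embracing"] * 30          # ~14%
)

counts = Counter(policies)
for category, n in counts.most_common():
    print(f"{category}: {n} ({n / len(policies):.0%})")
```

Running this prints the same three-way split reported above, which is how a category distribution like this is typically derived from per-syllabus labels.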
The headline number is the 71%. Seven in ten professors who wrote an AI policy landed somewhere in the middle. They did not ban AI. They did not celebrate it. They set conditions: use it for brainstorming but not final drafts; cite it when you do; use it for research but not for writing; disclose every interaction.
But that top-level number masks enormous variation across disciplines. When we grouped the 75 disciplines into four broad categories, the differences became sharp.
| Group | Restrictive | Mixed/Conditional | Embracing |
|---|---|---|---|
| STEM | 22% | 59% | 19% |
| Humanities | 14% | 81% | 5% |
| Professional | 17% | 64% | 19% |
| Social Sciences | 17% | 72% | 11% |
The conventional narrative about AI in academia goes something like this: STEM departments embrace AI because they understand technology, and Humanities departments restrict it because they value original thinking. That narrative is wrong.
STEM is not the most embracing group. It is the most polarized. It has both the highest restrictive rate (22%) and the highest embracing rate among academic groups (tied with Professional at 19%). STEM professors either love AI or fear it. Very few are indifferent.
Humanities, meanwhile, is overwhelmingly cautious-conditional. At 81% Mixed, it has the highest rate of conditional policies by a wide margin. Only 5% of Humanities syllabi fully embrace AI, and only 14% fully restrict it. The typical Humanities professor is not slamming the door. They are holding it open a crack and watching carefully.
For a full explanation of how we classify AI policies, see our methodology page.
The Disciplines That Surprised Us
Biology: The Surprise Embracer
If you had to guess which science would be most welcoming toward AI, you might pick Computer Science or Data Science. You probably would not pick Biology.
But Biology has the highest embracing rate of any discipline in the dataset: 50% Embracing, with the other 50% Mixed/Conditional and zero fully restrictive policies.
The College of Marin contrast we opened with explains why. Biology professors are making a sharp distinction between conceptual learning and hands-on skills. When students are learning about cellular processes, evolutionary theory, or ecological models, professors see AI as a legitimate tool for exploring concepts, generating study questions, and scaffolding understanding. Fairfield University's biology syllabus captures this stance:
"AI tools can be valuable for learning when used thoughtfully and responsibly."
But when the same students walk into a lab, the rules flip. Lab work is physical. You cannot have ChatGPT pipette a solution, identify a specimen under a microscope, or calibrate an instrument. The hands-on nature of laboratory science creates a natural boundary that lecture-based courses do not have.
This is a more sophisticated approach than it first appears. Rather than issuing a blanket policy, biology professors are asking a specific question: does AI use undermine the learning objective of this particular activity? When the answer is no, they permit it. When the answer is yes, they restrict it. That question-by-question approach is arguably what every discipline should be doing.
Computer Science: Zero Bans
Computer Science is the only discipline in the entire dataset with a 0% restrictive rate. Not one CS syllabus in the repository bans AI entirely. The split is 75% Mixed/Conditional and 25% Embracing.
The reason is straightforward: CS professors see AI as a professional tool that students need to learn how to use, not a threat to academic integrity. Georgia Tech's approach is representative. Their policy treats AI assistance the same way it treats human collaboration:
"We treat AI-based assistance, such as ChatGPT and Github Copilot, the same way we treat collaboration with other people."
That framing is significant. By equating AI with peer collaboration rather than with plagiarism, Georgia Tech sidesteps the entire academic integrity debate. The rules for using ChatGPT are the same as the rules for asking a classmate for help: permitted, expected, but you still need to understand the material yourself.
Not every CS professor is enthusiastic, though. Lyon College's syllabus contains one of the more memorable pieces of AI skepticism in the dataset:
"Use AI for things you really don't care about, and if you don't care how much time you waste."
Even here, the professor is not banning AI. They are expressing skepticism about its efficiency while leaving the choice to students. In Computer Science, the floor is conditional permission. The ceiling is enthusiastic integration. There is no basement.
View Georgia Tech's full policy breakdown
Law: The Most Divided Discipline
Law is the most internally divided discipline in the dataset. The split is stark: 40% Restrictive, 40% Mixed/Conditional, and 20% Embracing. No other discipline comes close to that level of polarization.
The division reflects a genuine intellectual disagreement within legal education about what AI means for the profession. On the embracing side, Georgetown Law's coding-for-lawyers course makes a compelling case:
"LLMs are rapidly changing the practice of computer programming... We think it would be a mistake to not equip you with the ability to leverage this new technology."
Georgetown bans AI for traditional legal essays but embraces it for programming assignments. The distinction is not arbitrary. Legal writing requires precisely the kind of original analysis and argumentation that professors worry AI will shortcut. But legal technology is a growing field, and a law student who cannot use AI coding tools is arguably unprepared for practice.
Howard University Law takes a more broadly permissive stance:
"Generative AI tools can be invaluable for generating ideas, identifying sources, synthesizing text."
On the restrictive side, Grand Rapids Community College treats any AI use in legal coursework as a straightforward violation of academic integrity. No conditions. No exceptions. The framing is identical to how plagiarism has been treated for decades.
The 40/40/20 split in Law mirrors a profession in genuine transition. Some legal educators see AI as a tool their students must master. Others see it as a threat to the analytical skills that define legal thinking. Both positions are defensible, and the profession has not resolved the tension. Until it does, law students face the most unpredictable policy landscape of any discipline.
View Georgetown's full policy breakdown | Law school AI policies
Writing: 48 Syllabi Tell 48 Different Stories
Writing and composition courses make up the largest single discipline in the dataset, with 48 syllabi. This is also the discipline most directly relevant to anyone using GradPilot, since writing is the skill most immediately affected by generative AI. The distribution: 83% Mixed/Conditional, 12% Restrictive, and 4% Embracing.
That 83% Mixed number is deceptively tidy. "Mixed" covers an enormous range of actual policies, from professors who allow AI for brainstorming but not drafting, to professors who require AI use in specific assignments, to professors who permit it only with paragraph-level disclosure of every interaction. The spectrum within "Mixed" is almost as wide as the spectrum between "Restrictive" and "Embracing."
On the restrictive end, Old Dominion University's Freshman Composition syllabus leaves no room for interpretation:
"Submitting work containing any content generated by artificial intelligence (AI) when not explicitly directed to do so by the instructor will be considered an act of academic dishonesty."
This is a bright-line rule. AI-generated content equals academic dishonesty. Full stop. There is no exception for brainstorming, outlining, or grammar checking. If the instructor did not explicitly tell you to use AI, using it is cheating.
On the embracing end, the University of Pennsylvania's Writing Seminar takes the opposite approach:
"You are welcome to use GAI (e.g., ChatGPT, Copilot, Gemini, Claude, etc.) in your Writing Seminar."
No conditions attached to that sentence. No "only for brainstorming." No "with disclosure." Just: you are welcome to use it. For an Ivy League writing course, that is a remarkably permissive stance.
View UPenn's full policy breakdown
The vast middle ground is where most writing professors have landed, and the language they use reveals real thought about pedagogy. LaGuardia Community College's English 101 syllabus captures the conditional stance better than almost any other policy in the dataset:
"We will use AI to support our writing. We will not use AI to think for us."
That single sentence draws a line that most conditional policies spend paragraphs trying to articulate. AI as support tool: yes. AI as replacement for thinking: no. The challenge, of course, is that the line between "support" and "replacement" is not always obvious.
Some writing professors frame their policies in terms that go well beyond pedagogy. Biola University's syllabus opens with a statement grounded in religious conviction:
"An Affirmation of Humanity: God created humans in his image, and gifted us with creativity and language."
The policy that follows treats AI use as a diminishment of that divine gift. Whether you share the theological premise or not, it is a reminder that AI policies are not just about academic integrity. They are about what professors believe it means to be human, to create, and to learn.
The University of Washington introduces an angle that almost no other syllabus in the dataset addresses -- the environmental cost of AI:
"a conversation with ChatGPT can consume 16 ounces of fresh water, the size of the water bottle that you brought to class."
This is not a metaphor. Large language models require enormous amounts of energy and water for cooling. By grounding the policy in physical resource consumption, the UW professor reframes the question entirely: using AI is not just an academic integrity issue. It is an environmental one.
View UW's full policy breakdown | ChatGPT vs real college essays
The "Consensus" Disciplines: Where Everyone Agrees (Sort Of)
Several disciplines in the dataset show near-total agreement among faculty. But "agreement" does not mean "permissive." It means "conditional."
History stands out: 9 syllabi, and every single one is Mixed/Conditional. Not one history professor chose to ban AI. Not one chose to embrace it. The unanimity is remarkable. History professors, as a group, have converged on the same basic position: AI is a tool that can be used under specific conditions, with attribution, and never as a substitute for primary source analysis or original argumentation.
Business tells a similar story on the surface -- 5 out of 5 syllabi are classified as Mixed/Conditional. But the actual language varies enormously. At the Wharton School, the syllabus reads more like an embrace:
"I expect you to use AI. In fact, some assignments will require it."
That is a professor mandating AI use. The course is still classified as "Mixed" rather than "Embracing" because the broader policy restricts certain types of submissions. The classification captures the conditions. It does not capture the enthusiasm.
Sociology, Literature, Management, and Research Methods all show similar patterns of near-unanimous conditional policies. In every case, the consensus position is the same: AI is permitted with conditions, and the conditions vary by assignment.
It is worth emphasizing what "Mixed/Conditional" actually means in practice. It does not mean "anything goes." It means the professor has thought about AI, decided it has legitimate uses, and drawn lines around those uses. The lines differ from syllabus to syllabus, but the basic framework is consistent: disclose your use, follow the assignment-specific rules, and understand that different tasks have different AI permissions.
View business school AI policies
The Hardliners: Arts, Physical Sciences, Philosophy
At the other end of the spectrum, three disciplines stand out for their high restrictive rates.
Arts leads the restrictive category at 67% Restrictive. The Fashion Institute of Technology contributes two of the Arts entries in the dataset, and both ban AI entirely. The reasoning is intuitive: arts education is fundamentally about developing a personal creative voice. A fashion design student who uses AI to generate concepts is not learning to design. They are learning to prompt. For disciplines where the process of creation is the entire point of the education, AI restrictions make pedagogical sense.
Physical Sciences matches Arts at 67% Restrictive. The reasoning here is different but equally direct. Physical sciences courses emphasize problem-solving methodology. The value is not in the answer but in demonstrating the steps to reach it. AI can produce correct answers to physics or chemistry problems without demonstrating any understanding of the underlying principles. For professors who grade process as much as product, AI use undermines the assessment itself.
Philosophy contains one of the most intellectually rigorous objections to AI anywhere in the dataset. UC Berkeley's syllabus for a course on Later Wittgenstein articulates a position that treats AI not as a neutral tool but as a fundamentally problematic one:
"LLMs are not a neutral tool like computers, internet searches, or word processing software, but essentially a highly sophisticated form of plagiarism. Their operation relies on the uncredited intellectual work of the authors whose texts are used in their training data."
This is not a knee-jerk reaction. It is a carefully constructed argument: LLMs are built on training data created by other humans, those humans are not credited, and therefore using an LLM's output is functionally equivalent to using someone else's intellectual work without attribution. The professor is not saying AI is useless. They are saying it is structurally unethical -- a claim that has nothing to do with whether the student learns and everything to do with the rights of the people whose work trained the model.
Whether you agree with this position or not, it represents the most philosophically serious objection to AI in the entire dataset. It deserves engagement, not dismissal.
View UC Berkeley's full policy breakdown | Do colleges use AI detectors?
The Best Line in Any Syllabus
Out of 210 policies, one stands above the rest for sheer voice. It comes from Donna Lanclos, an anthropology professor at UNC Charlotte:
"I prefer that to the smooth and certain bullshit that is extruded by GAI tools... I can't stop you, and I'm not a cop, so I won't be using detection software. But these tools extrude highly mediocre and bland and often very wrong content. None of you are mediocre, and you deserve better."
There is a lot happening in those sentences. First, the honesty: "I can't stop you, and I'm not a cop." Most professors who dislike AI pretend they can enforce a ban. Lanclos admits she cannot. Second, the word "extruded" -- used twice, deliberately. AI does not write or create or compose. It extrudes. Like plastic through a mold. The metaphor reduces AI output to an industrial process, stripped of intention or craft.
And then the final line: "None of you are mediocre, and you deserve better." That is not a policy. It is a statement of faith in her students. It is the single best argument against AI use we found in any syllabus, and it contains no rules, no penalties, and no enforcement mechanisms. Just a professor telling her students they are worth more than what a machine can produce.
What This Means for Students
If you are reading this as a current or prospective college student, here is what the data tells you.
Check the syllabus, not the university website. Institutional AI policies are often vague, outdated, or nonexistent. The document that actually governs your academic life is the course syllabus. That is where the rules live. If the syllabus does not mention AI, ask the professor directly before assuming anything.
The hierarchy matters. When there is a conflict between institutional policy, department policy, and syllabus policy, the effective hierarchy in practice is: syllabus first, department second, institution third. Professors have wide latitude in setting course policies, and most academic integrity proceedings are handled at the course level. The syllabus is the document that will be cited if something goes wrong.
"Mixed" does not mean "free." Seven in ten professors allow some AI use, but almost all of them attach conditions. Those conditions vary by assignment, by task type, and sometimes by week of the semester. "My professor allows AI" is almost never the full story. The full story is: "My professor allows AI for these specific activities, under these specific conditions, with these specific disclosure requirements."
When in doubt, ask. This is the most boring advice and the most important. Professors who wrote conditional policies have thought carefully about what they permit and what they do not. They are usually willing to answer questions. A two-minute conversation before an assignment is worth infinitely more than a post-hoc argument about what you thought the policy meant.
The pattern we documented in our analysis of same school, different rules applies just as strongly at the course level. Two sections of the same course, taught by different professors, can have opposite AI policies. Do not assume that what your friend's section allows is what your section allows.
For guidance on AI disclosure in applications specifically, see our posts on should you tell colleges you used AI and how to write an AI disclosure for college applications.
Methodology and Data Credit
This analysis is based on the Syllabi Policies for Generative AI Repository, a publicly available dataset of 210 course-level AI policies collected from 181 institutions across 75 academic disciplines. The dataset was assembled by faculty contributors who voluntarily submitted their own syllabus language.
Limitations are important to note. This is a self-selected dataset, not a representative sample of all U.S. higher education. Professors who choose to write and share an AI policy are, by definition, professors who have thought about AI. The dataset likely overrepresents thoughtful, engaged policies and underrepresents the large number of courses that have no AI policy at all. It should not be read as a census. It should be read as a window into the range of positions faculty are taking.
Our classification of policies into Restrictive, Mixed/Conditional, and Embracing involved judgment calls in ambiguous cases. Policies that restrict AI for some assignments but permit it for others were classified as Mixed/Conditional. Policies that technically permit AI but impose conditions so restrictive that practical use is near-impossible were evaluated case by case.
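To make the three-way scheme concrete, here is a deliberately simplified sketch of what a first-pass classifier might look like. The real coding described above relied on human judgment; the keyword patterns below are invented examples, not the actual rubric.

```python
import re

# Illustrative heuristic only -- these phrase lists are assumptions,
# not the rubric actually used in the analysis.
RESTRICTIVE = re.compile(r"prohibit|not allow|academic dishonesty", re.I)
EMBRACING = re.compile(r"encourage|expect you to use|welcome", re.I)

def classify(policy_text: str) -> str:
    restrictive = bool(RESTRICTIVE.search(policy_text))
    embracing = bool(EMBRACING.search(policy_text))
    if restrictive and not embracing:
        return "Restrictive"
    if embracing and not restrictive:
        return "Embracing"
    # Both signals present (or neither) lands in the middle, mirroring
    # how conditional policies mix permission with limits.
    return "Mixed/Conditional"

print(classify("I do not allow any use of generative AI."))
# -> Restrictive
print(classify("You are welcome to use GAI in your Writing Seminar."))
# -> Embracing
print(classify("We encourage AI for brainstorming but prohibit it in final drafts."))
# -> Mixed/Conditional
```

A keyword pass like this is exactly where the judgment calls arise: a policy that "welcomes" AI while "prohibiting" it on graded work trips both patterns, which is why such cases default to Mixed/Conditional and were reviewed by hand.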
For admissions-level AI policy data, visit the GradPilot AI Policy Observatory, which tracks policies at 174 institutions using our L/D/E classification framework. For the companion analysis of how professor language reveals attitudes toward AI, see our post on what professors actually say about AI. For the analysis of how admissions policies and syllabus policies contradict each other at the same universities, see the AI policy gap.
Worried About AI Detection?
170+ universities now use AI detection. Check your essays before submission.