Law, Biology, and Arts: The College Disciplines Most Divided on AI

Some disciplines have reached consensus on AI. Law, Biology, and Arts have not. We analyzed 210 syllabus policies to find where professors disagree most — and why.

GradPilot Team · February 22, 2026 · 12 min read


The Disciplines That Cannot Agree

If you walk into a history department and ask ten professors about their AI policies, you will get the same answer ten times. Conditional use. Cite your sources. Learn the material. History has consensus.

Business departments have consensus too. So does computer science. So does research methods. In these fields, the debate is effectively over. Professors have landed on a shared position, even if they arrived there independently.

But walk into a law school, a biology department, or an art studio, and you will find something very different. Two professors down the hall from each other -- teaching in the same discipline, sometimes at the same institution -- have adopted completely opposite AI policies. One says AI is a professional skill students must develop. The other says AI output is plagiarism.

We measured it.

Using data from the Syllabi Policies for Generative AI Repository, a publicly maintained dataset of 210 course-level AI policies across 181 institutions and 75 disciplines, we analyzed which fields agree on AI and which are still fighting about it. The results reveal three disciplines where the internal disagreement is so stark that students cannot predict what rules they will face from one course to the next.

How We Measured Disagreement

Not all disciplines are created equal when it comes to internal consensus. To measure disagreement, we used a distribution analysis inspired by Shannon entropy -- a method that quantifies how spread out a set of stances is.

Here is the logic. Every policy in the dataset is classified as Restrictive, Mixed, or Embracing. For each discipline with three or more entries, we looked at how those entries distribute across the three categories.

A discipline where every single entry falls in the same category -- say, all Mixed -- has zero disagreement. Professors in that field have reached consensus, whether they coordinated or not. A discipline where entries split evenly across all three categories -- roughly equal shares of Restrictive, Mixed, and Embracing -- has maximum disagreement. No shared position. No center of gravity. Just professors making independent, contradictory choices.
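The logic above can be sketched as a normalized Shannon entropy over the three stance categories. This is a minimal illustration, not the authors' actual code: the function name, the use of base-2 logarithms, and the normalization by log2(3) are assumptions chosen so the score runs from 0 (full consensus) to 1 (even three-way split).

```python
import math

def normalized_entropy(counts):
    """Shannon entropy of a stance distribution, normalized to [0, 1].

    counts: policy counts for (Restrictive, Mixed, Embracing).
    0.0 = full consensus (every entry in one category);
    1.0 = maximum disagreement (an even split across all three).
    """
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    h = -sum(p * math.log2(p) for p in probs)
    return h / math.log2(3)  # normalize by max entropy for 3 categories

# Illustrative distributions from the article (shares, not raw counts --
# entropy depends only on the proportions):
law = normalized_entropy([40, 40, 20])   # highly divided, close to 1
history = normalized_entropy([0, 9, 0])  # all Mixed: exactly 0
```

Under this measure, Law's 40/40/20 split scores near the maximum, while History's 100% Mixed distribution scores exactly zero, which matches the consensus-versus-division framing used throughout the analysis.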

Most disciplines fall somewhere between those poles. But three stood out as dramatically more divided than the rest: Law, Biology, and Arts. Each one for a different structural reason.

For more on how we classify and analyze AI policies, see our methodology page.

Law: 40/40/20 -- The Most Divided Discipline

Law is the most internally divided discipline in the entire dataset. Roughly 40% of law course policies are Restrictive, 40% are Mixed, and 20% are Embracing. There is no center of gravity. No majority position. Just a discipline arguing with itself.

The disagreement is not just between institutions. It exists within single institutions.

Georgetown Law provides perhaps the most vivid example. Its Computer Programming For Lawyers course takes a clear embrace stance:

"LLMs are rapidly changing the practice of computer programming... We think it would be a mistake to not equip you with the ability to leverage this new technology." -- Georgetown Law, Computer Programming For Lawyers

This is a law school telling students that refusing to learn AI would be professional malpractice. But walk across the hall to another Georgetown Law course, and AI-generated output is treated as a violation of academic integrity. Same school. Same law building. Opposite rules.

Howard University School of Law takes the embrace position for legal writing:

"Generative AI tools can be invaluable for generating ideas, identifying sources, synthesizing text." -- Howard University, Advanced Legal Writing

This is a professor in the discipline most obsessed with original analysis saying AI is "invaluable." That word choice is deliberate. Not "sometimes helpful." Not "acceptable if cited." Invaluable.

Meanwhile, Grand Rapids Community College's Business Law course goes full restriction:

"Academic dishonesty includes taking content from...AI technology such as ChatGPT (either directly or with modification) and representing it as your answer."

Note the parenthetical: "either directly or with modification." This policy forecloses the common student workaround of using AI to generate a draft and then editing it. Even modified AI output counts as dishonesty.

Why is law so torn? Three forces are pulling in three directions simultaneously.

First, the Socratic tradition and bar exam preparation demand original legal analysis. A student who cannot construct an argument from first principles will fail the bar. AI shortcuts that produce passing coursework may produce failing bar results.

Second, legal research tools already incorporate AI. Westlaw and LexisNexis have rolled out AI-powered features. Practicing attorneys use AI for contract review, case research, and brief drafting. A law school that bans AI is training students for a profession that uses it daily.

Third, the legal profession itself is in the middle of its own reckoning. Courts are issuing contradictory rulings about AI-generated filings. Bar associations are publishing guidance that ranges from permissive to prohibitive. Law professors are not just teaching in a divided discipline. They are teaching a discipline that is watching its own profession divide in real time.

The classroom is caught between "this is cheating" and "lawyers need this skill," with no resolution in sight.

View law school AI policies | Georgetown vs Caltech: Two Models for AI in Admissions

Biology: The Same Professor, Two Opposite Policies

The single best illustration of the entire AI policy debate does not come from two rival institutions. It does not come from two professors in different wings of the same building. It comes from one person.

Wendy St. John teaches two biology courses at College of Marin.

Her policy for BIOL110, the lecture course:

"The use of generative artificial intelligence (AI) tools...is an emerging skill, and throughout the semester, I will provide basic tutorials about how to leverage it for our work."

Her policy for BIOL110L, the lab section:

"Due to the hands-on, exploratory nature of the content in this laboratory course, I do not allow any use of generative artificial intelligence (AI) tools."

Same professor. Same semester. Same discipline. Opposite policies. This is not hypocrisy. It is one of the clearest articulations of why AI policy is inherently contextual.

The lecture/lab distinction maps perfectly to the difference between conceptual and procedural knowledge. In the lecture, students are learning frameworks, terminology, and relationships between ideas. AI can scaffold that learning without undermining it. In the lab, students are developing physical skills, observational habits, and experimental intuition. AI cannot pipette for you. It cannot observe a stained slide through a microscope. The skill is the doing, and outsourcing the doing eliminates the learning.

Overall, biology splits roughly 50% Embracing, 25% Restrictive, and 25% Mixed. That is a discipline where the majority has embraced AI, but a significant minority has not, and neither side shows signs of conceding.

Fairfield University represents the embrace camp:

"AI tools can be valuable for learning when used thoughtfully and responsibly. In this course, you are encouraged to apply concepts you are learning with AI in a guided step-wise fashion."

The phrase "guided step-wise fashion" is worth noting. This professor is not saying "use AI however you want." They are integrating AI into the pedagogical structure of the course. AI becomes part of the curriculum, not a supplement to it.

The University of Missouri's Microbiology course represents the restrictive camp:

"Do not use GAI to generate answers for graded assignments or exams... Violations will be considered academic dishonesty."

"GAI" -- generative artificial intelligence, reduced to an acronym, treated as a known hazard. The phrasing "will be considered academic dishonesty" leaves no ambiguity. There is no discussion, no conditional framework, no space for judgment. AI on graded work equals dishonesty.

Biology's divide is structural. Lab sciences will always need hands-on skill development that AI cannot replace. But the conceptual and analytical dimensions of biology -- the parts taught in lectures, papers, and problem sets -- are exactly the kind of work AI handles well. As long as biology remains a discipline that combines both, its professors will disagree.

Arts: No Middle Ground

Arts is the only discipline in the dataset with zero policies in the Mixed category. That zero deserves emphasis. Every other divided discipline has at least some professors attempting a conditional approach. Arts has none.

The distribution: 67% Restrictive, 33% Embracing, 0% Mixed.

Professors in creative disciplines have decided the question is binary. Either AI belongs in the creative process or it doesn't. There is no "use it for brainstorming but write the final draft yourself." No "cite your AI use and we'll evaluate the result." The middle ground that most other disciplines have occupied simply does not exist here.

The Fashion Institute of Technology illustrates the restrictive majority. Two separate FIT courses each take an uncompromising stance:

"Submit your own work... Academic dishonesty includes taking content from... AI technology such as ChatGPT."

"For this course, all your work is your own and original, so stay clear of Chat GPT and other AI that creates any work, or part of the work for you."

"Stay clear." Not "use carefully" or "use with attribution." The language is avoidance language: don't touch it, don't let it touch your work, keep distance.

Michigan State represents the embracing minority:

"You are welcome to use generative AI tools (e.g. ChatGPT, Dall-e, etc.) in this class as doing so aligns with the course objectives." -- Michigan State

Note the explicit mention of DALL-E alongside ChatGPT. This professor is not just permitting text generation. They are permitting image generation in a creative discipline. The course objectives have been built around AI as a creative instrument.

The binary split in Arts makes structural sense. Creative disciplines carry an implicit promise: this is your work, your vision, your voice. The "your own original work" imperative is not an arbitrary integrity rule. It is the point of the discipline. When a student submits a painting, a film, a garment, or a photograph, the assumption is that the student made aesthetic choices, developed technical skills, and expressed something personal. AI disrupts that assumption more directly than it disrupts the assumption in, say, a statistics course.

But the embracing minority is not wrong either. Every artistic movement has grappled with new tools, from photography threatening painting to synthesizers threatening orchestras. The professors who embrace AI in art are not ignoring the originality question. They are arguing that originality is about vision and direction, not about whether a human hand held the brush.

The Consensus Disciplines (For Contrast)

The divided disciplines become more striking when you see what consensus looks like.

History: 100% Mixed (9 entries). Every single history professor in the dataset takes a conditional approach. None ban AI outright. None embrace it without conditions. Nine professors, nine independent decisions, one shared position: use it carefully, cite it, understand the limits.

Business: 100% Mixed (5 entries). Business departments have landed in the same place as history, though Wharton's policy reads more like a quiet embrace than a true middle ground. The field that teaches pragmatism has adopted pragmatic AI policies.

Computer Science: 0% Restrictive (8 entries). Not a single CS professor in the dataset bans AI entirely. The split is between Mixed and Embracing, but the restrictive position is completely absent. The people who build the technology refuse to prohibit it.

Research Methods: 100% Mixed (6 entries). The discipline most focused on methodology has reached methodological consensus on AI. Every policy says the same thing: AI is a tool, tools require documentation, documentation requires rigor.

What consensus looks like across these fields is not "anything goes." It is a shared conditional framework: use AI, acknowledge it, understand its limitations, and take responsibility for the output. The consensus disciplines have figured out that this middle position works for their pedagogical goals. The divided disciplines have not, because their pedagogical goals are themselves in tension.

View business school AI policies

Why Disagreement Matters for Students

The practical consequence of disciplinary disagreement is that your transcript reflects course-level policies, not department-level consensus.

If you are a pre-law student, the AI rules that govern your academic record are set by individual professors in individual courses. One law professor may require you to use AI for legal research exercises. Another may flag you for academic dishonesty for doing the same thing. Both outcomes appear on the same transcript, governed by the same honor code, at the same institution.

If you are a biology major taking both a lecture and a lab with the same professor, as Wendy St. John's students do, you need to mentally switch AI frameworks between sessions.

If you are an art student, you are navigating a discipline where the conditional middle -- the "use it but cite it" approach that protects students in other fields -- does not exist. You are either in a course that welcomes AI or one that treats it as a fundamental violation.

The safest strategy is also the most tedious: read each syllabus individually. Do not assume that a policy you encountered in one course in a discipline applies to another course in the same discipline. Ask when unclear. And understand that your professor's position on AI may have more to do with the structural logic of their field than with any personal stance on technology.

The disagreement documented here is not a sign that academia is failing to adapt. It is a sign that different disciplines face genuinely different pedagogical tradeoffs, and some of those tradeoffs do not have clean answers yet.

Same school, different rules: program-level contradictions | Policy contradictions in depth | The full discipline breakdown

Data Source and Methodology

Data source: All data in this analysis comes from the Syllabi Policies for Generative AI Repository, a publicly maintained spreadsheet of 210 voluntarily submitted course-level AI policies from 181 institutions across 75 disciplines.

For admissions-level data: Visit the GradPilot AI Policy Observatory, which tracks institution-level and program-specific AI policies for admissions at 174 universities.

Limitations: This dataset is self-selected. Faculty who submit policies to a public repository are likely more engaged with AI policy than the average professor. The patterns documented here reflect the positions of thoughtful, intentional policy-writers, not necessarily the full range of academic opinion.

View our full methodology
