The AI Policy Gap: When Your Admissions Office and Your Professors Disagree

We cross-referenced 174 university admissions AI policies with 210 course-level syllabi. At 52 schools, the admissions office and the faculty are telling students completely different things about AI.

GradPilot Team · February 22, 2026 · 14 min read

Two Datasets, One Question

We have spent the past year building the GradPilot AI Policy Observatory, a database of 174 university admissions AI policies classified under our L/D/E framework -- Permission (L), Disclosure (D), and Enforcement (E). We have written extensively about how we classified those policies, what the data reveals, and where contradictions hide within the same institution.
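For readers who want to work with classifications like these programmatically, a code such as L4/D3/E1 can be modeled as a small record. This is only an illustrative sketch -- the `PolicyCode` class and `parse_policy` helper are our own shorthand for this post, not part of any published schema:

```python
import re
from dataclasses import dataclass

@dataclass(frozen=True)
class PolicyCode:
    permission: int   # L0 (no stated policy) through L4 (prohibition)
    disclosure: int   # D0 (none) through D3 (required attestation)
    enforcement: int  # E0 (none) through E3 (strictest)

def parse_policy(code: str) -> PolicyCode:
    """Parse a shorthand string such as 'L4/D3/E1' into a PolicyCode."""
    m = re.fullmatch(r"L([0-4])/D([0-3])/E([0-3])", code.strip())
    if m is None:
        raise ValueError(f"not a valid L/D/E code: {code!r}")
    l, d, e = (int(g) for g in m.groups())
    return PolicyCode(permission=l, disclosure=d, enforcement=e)
```

With this in place, Michigan's admissions posture becomes `parse_policy("L4/D0/E0")` and UC Berkeley's `parse_policy("L2/D3/E2")`, which makes the school-by-school comparisons below easy to sort and filter.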

But admissions policies only govern the front door. Once a student is enrolled, the rules that actually shape their daily experience come from somewhere else entirely: individual course syllabi.

That is why we turned to a second dataset. The Syllabi Policies for Generative AI Repository is a public collection of 210 course-level AI policies drawn from 181 institutions across 75 academic disciplines. It captures what individual professors are actually telling students about AI -- not in admissions FAQs, but in syllabi, on the first day of class, in the document that governs what counts as cheating for the next sixteen weeks.

We cross-referenced the two datasets. Fifty-two institutions appear in both. And at school after school, the admissions office and the faculty are telling students completely different things.

The central question is simple: when admissions says one thing and the classroom says another, which version do students actually experience?

The answer, overwhelmingly, is the classroom version. And the gap between the two is wider than most applicants realize.

The Biggest Contradiction: University of Michigan

If you follow our coverage, you already know Michigan is complicated. We documented how the Law School requires AI on one essay while banning it on others, creating a situation where applicants must track essay-by-essay which tool is permitted and which is prohibited.

But the admissions-to-classroom gap at Michigan is even more dramatic.

On the admissions side, Michigan's institutional posture relies on the Common Application fraud policy. Under our framework, this translates to L4/D0/E0 -- AI use is effectively treated as fraud, with no specific disclosure mechanism and no stated enforcement beyond standard review. The message to applicants is clear: AI-generated content has no place in your application.

Now consider what a doctoral student encounters once enrolled. The School of Information's syllabus for a doctoral-level course states:

"This course encourages the unrestrained and wholesale ethical/responsible use of AI/ML, Generative AI, and Agentic tools."

Unrestrained. Wholesale. Not "cautious use." Not "with attribution." The professor is telling doctoral students to use every AI tool available, as much as they want, for the entirety of the course.

The same university that treats AI use in applications as fraud is, in its own classrooms, encouraging students to use AI without restraint. The admissions office and the School of Information are not just on different pages. They are reading different books.

View Michigan's full policy breakdown

Silent Admissions, Loud Classrooms: The L0 Problem

The majority of universities in our database -- 67% -- are classified L0 at the admissions level. They have published no specific guidance on AI in application materials. No ban. No permission. No disclosure form. Just silence.

But the professors at those same schools are anything but silent. And their stances range from total prohibition to enthusiastic adoption, often within the same department.

UIUC: The Philosophical Ban

The University of Illinois Urbana-Champaign is L0/D0/E0 on admissions. No AI policy. No disclosure. No enforcement. If you are an applicant, the school has said nothing about whether you can use ChatGPT on your essays.

Once enrolled, at least one professor has a very clear opinion:

"AI-generated content is not your own creation, both being generated as a result from the training data (created by others) and the algorithm (also created by others)."

This is not just a rule. It is an argument -- a philosophical position about what it means to create something. The admissions office has no stance. This professor has a worldview.

UMass Amherst: One Campus, Two Opposite Stances

UMass Amherst is also L0/D0/E0 at the admissions level. Total silence on AI.

But cross the campus and you encounter two professors who have landed in opposite corners.

One takes a strict prohibition approach, banning AI outright from coursework. The other acknowledges the university-level ban and then explicitly overrides it:

"The University prohibits the usage of AI for schoolwork. However, I have made an exception and I expect you to use generative AI tools."

Read that again. The professor is openly acknowledging that the university has a prohibition and is choosing not to follow it. The university bans AI. The professor mandates it. And the admissions office, meanwhile, has said nothing at all.

For applicants evaluating UMass Amherst, none of this is visible. The admissions website is silent. You would have no way to know that the school you are applying to contains both a university-level AI ban and a professor who explicitly requires the tools the university bans.

Harvey Mudd: Cautious Engagement

Harvey Mudd College is L0/D0/E0 on admissions. In the classroom, an engineering professor takes what might be called a "cautious engagement" approach, allowing students to experiment with AI while noting its limitations. The professor observes that ChatGPT writes "really bad" SystemVerilog -- a pointed reminder that AI proficiency varies dramatically by domain and that blanket policies often miss the technical reality.

Skidmore: Documentation Without Disclosure

Skidmore College is L0/D0/E0 on admissions. No disclosure framework whatsoever for applicants.

In the classroom, a writing professor has built an elaborate accountability system: students must take screenshots of all AI queries and include in-text ChatGPT citations in their submitted work. Every interaction with AI must be documented and attributed.

The contrast is stark. Admissions has no disclosure mechanism. A professor on the same campus has built a detailed citation and documentation protocol for AI use. The institution asks nothing of applicants. The professor asks everything of students.

The pattern across L0 schools is consistent: professors are filling the policy vacuum that admissions offices leave. Where institutions are silent, individual faculty members are making their own rules -- and those rules are wildly inconsistent, even within the same school.

Read more: Most Colleges Have No AI Policy. Here's What That Means for You.

Permissive Admissions, Strict Classrooms

Some schools have gone out of their way to tell applicants that AI is acceptable. Their professors did not get the memo.

Duke: "Not Inherently Bad" vs. "Academic Dishonesty"

Duke University is L1/D0/E0 in our framework -- explicitly permissive on AI in admissions. Dean of Admissions Christoph Guttentag has publicly stated:

"We don't think of [AI] as an inherently bad tool for students to use."

Duke's admissions office has positioned itself as one of the more open institutions in our database. They have adjusted their essay evaluation process in response to AI, moving away from numerical essay scores. The message to applicants: AI is a tool, and using it responsibly is fine.

Now walk into an Ocean Justice course at Duke. The professor explicitly prohibits AI for summarizing, drafting, or presenting course material. The syllabus lists specific "Examples of Prohibited AI Use" and states that violations are treated as academic dishonesty.

Not "discouraged." Not "use with caution." Academic dishonesty. The same tool that admissions says is "not inherently bad" is, in this classroom, grounds for a disciplinary case.

View Duke's policy page

University of Washington: The Environmental Argument

The University of Washington is L2/D0/E1 under our framework. Admissions permits line-level editing with AI tools and uses soft manual review.

In the classroom, a writing instructor has a very different message:

"Outsourcing your writing to generative artificial intelligence services like ChatGPT, Gemini, Copilot, Perplexity, and Claude violates academic integrity policies, slows your development as a college-level writer, and silences your voice."

The professor goes further, adding an environmental dimension that is rare in AI policy discourse:

"A conversation with ChatGPT can consume 16 ounces of fresh water."

This is not just an academic integrity argument. It is a moral one. The admissions office allows you to use AI for editing. This professor says using AI silences your voice and wastes water. The frameworks are not just different in degree. They are different in kind.

View UW's policy page

UC Berkeley: "A Highly Sophisticated Form of Plagiarism"

UC Berkeley is L2/D3/E2 under our framework. Admissions allows line-level editing. The school requires disclosure and uses screening tools.

In Berkeley's Philosophy department, professors have taken a much harder line -- banning not just ChatGPT but even Grammarly:

"LLMs are not a neutral tool like computers, internet searches, or word processing software, but essentially a highly sophisticated form of plagiarism."

Consider the gap. Berkeley's admissions office allows applicants to use AI for line-level editing -- rephrasing sentences, polishing grammar, suggesting word choices. Its philosophy professors reject the premise that AI is a neutral editing tool at all. They categorize it as plagiarism. These professors would ban the exact level of AI use that the admissions office explicitly permits.

An applicant to Berkeley is told: you can use AI to edit your application essays. A Berkeley student is told: using AI at all is plagiarism. The applicant and the student are being addressed by the same institution. The messages are irreconcilable.

View UC Berkeley's policy page

The Same Campus, Two Completely Different Worlds

The admissions-to-classroom gap is revealing. But the classroom-to-classroom gap is, in many ways, more consequential for enrolled students. At several schools, the AI policy you experience depends entirely on which building you walk into.

Michigan State

Michigan State is L0 at the admissions level. In the arts department, students are welcomed to experiment:

"You are welcome to use generative AI tools."

In Spanish Writing, the policy is the opposite: AI use earns a grade of zero. No exceptions. No appeals. Same campus. Same semester. One course says explore freely. The other says a single use earns a failing grade.

University of Delaware

Delaware is also L0 on admissions. In a Languages course, the professor permits AI use within defined rules -- specific tools, specific tasks, specific citation requirements. In Film Studies, AI is banned entirely. A student taking both courses in the same semester must maintain two completely different relationships with the same technology.

UPenn: Wharton vs. the Writing Seminar

The University of Pennsylvania is L0/D0/E0 at the institutional admissions level. But inside the institution, the range is extraordinary.

At Wharton, Professor Ethan Mollick does not merely allow AI. He requires it:

"I expect you to use AI (ChatGPT and image generation tools, at a minimum), in this class. In fact, some assignments will require it."

In the Writing Seminar, the approach is more measured -- AI is permitted for certain tasks but bounded by guidelines that emphasize the development of the student's own writing ability. The contrast with Wharton's mandate is significant.

A Penn student could walk out of a Wharton class where AI is required, cross Locust Walk, and enter a Writing Seminar where the same tools are used under carefully constrained conditions. The technology is the same. The institutional context is the same. The expectations are not.

View UPenn's policy page | Read more about policy contradictions

Where Both Sides Agree (And It Is Still Messy)

Not every school is a contradiction. At a few institutions, admissions and the classroom tell a broadly consistent story. But even alignment does not always mean clarity.

Harvard: Consistent Restriction

Harvard is L4/D3/E1 at the admissions level -- one of the most restrictive schools in our database. AI is prohibited, disclosure is required in the form of attestation, and manual review is in place.

In the Gender Studies department, course-level policy aligns: AI is categorized as "Don't." The restriction is consistent from application to classroom. But Harvard's Writing Center takes a more nuanced position, exploring how AI can be used as a pedagogical tool for revision and feedback. Even at a school that is aligned between admissions and the classroom, internal complexity persists.

View Harvard's policy page

Georgetown: Maximum Consistency

Georgetown is L4/D3/E2 at the admissions level -- a full prohibition with attestation and screening tools. In the Law School, course syllabi maintain the same restrictive posture. Georgetown may be the most consistently restrictive institution in both datasets, with the admissions office and the classroom reinforcing the same message: AI is not permitted.

This is the exception. Most schools do not achieve this level of consistency, and even Georgetown's alignment is partly a function of how unusually strict its admissions policy is. When you start at L4, there is less room for a classroom to be more restrictive.

View Georgetown's policy page

UW-Madison: Consistent Openness

On the other end of the spectrum, UW-Madison is L1 at the admissions level and may be the most aligned case in the permissive direction. The admissions office has stated:

"We will not disqualify an applicant found to have used or suspected of using AI in their admissions essays."

In the School of Information Studies, course syllabi welcome AI use as a learning tool. The message from application to enrollment is the same: AI is part of the landscape, and we are not going to penalize you for engaging with it. Whether this approach produces better outcomes is a separate question. But at least the student experience is coherent.

What This Means for Applicants

Three practical takeaways from this analysis.

1. Check both levels before you apply. The admissions AI policy tells you what the front door looks like. It does not tell you what the house is like inside. Before committing to a school, look beyond the admissions page. Search for course syllabi in your intended major. Look at departmental academic integrity pages. The classroom policy will govern the next four years of your life. The admissions policy governs one application cycle. Both matter, but only one will define your daily experience.

2. The policy you experience will be set by individual professors, not institutions. This is the most important finding in the data. At the vast majority of schools, AI policy is not set at the institutional level in any meaningful, enforceable way. It is set by individual professors, in individual syllabi, for individual courses. Two professors in the same department at the same school can -- and regularly do -- take diametrically opposed positions. Your experience with AI policy will be determined by your schedule, not your school.

3. Develop your own AI literacy now. The inconsistency is not going away. If anything, it will deepen as AI tools become more capable and faculty responses diverge further. The students who navigate this landscape best will be the ones who understand what AI can and cannot do, who can use it when permitted and write without it when required, and who can articulate their own relationship with these tools clearly. Our AI disclosure guide and our analysis of when and how to tell colleges about AI use can help you build that literacy.

The gap between admissions policy and classroom policy is one of the defining features of the current moment in higher education. Admissions offices are writing rules for applicants. Professors are writing rules for students. And almost no one is making sure those rules agree.

About the Data

This analysis cross-references two datasets:

  1. GradPilot AI Policy Observatory: 174 university admissions AI policies classified under our L/D/E framework across three dimensions -- Permission (L0-L4), Disclosure (D0-D3), and Enforcement (E0-E3). Data collected from official admissions websites, application portals, and institutional communications.

  2. Syllabi Policies for Generative AI Repository: A public collection of 210 course-level AI policies from 181 institutions across 75 disciplines, which serves as our data source for all classroom-level quotes.

Fifty-two institutions appear in both datasets. All quotes from syllabi are verbatim from the repository. All admissions classifications are drawn from our own database.
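The overlap itself is conceptually just a set intersection on a normalized institution name. The sketch below uses made-up rows and field names to show the idea; our actual matching involved manual reconciliation of name variants that no simple normalizer would catch:

```python
# Illustrative only: the rows and field names here are invented, not our data files.
admissions_rows = [
    {"institution": "University of Michigan", "code": "L4/D0/E0"},
    {"institution": "Duke University", "code": "L1/D0/E0"},
]
syllabi_rows = [
    {"institution": "Univ. of Michigan", "course": "SI doctoral seminar"},
    {"institution": "Harvey Mudd College", "course": "Engineering"},
]

def normalize(name: str) -> str:
    """Crude normalization so 'Univ. of Michigan' matches 'University of Michigan'."""
    return name.lower().replace("univ.", "university").strip()

overlap = sorted(
    {normalize(r["institution"]) for r in admissions_rows}
    & {normalize(r["institution"]) for r in syllabi_rows}
)
print(overlap)  # ['university of michigan']
```

In this toy example only Michigan appears in both lists; run against the full datasets, the analogous intersection is where the fifty-two overlapping institutions come from.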

Limitations: Course syllabi represent the decisions of individual professors, not institutional positions. A single syllabus from a single course does not characterize an entire university's classroom posture. We have highlighted cases where multiple syllabi from the same institution point in different directions, but the syllabi dataset is not comprehensive -- it is a convenience sample. The admissions data, by contrast, represents systematic review of official institutional communications. The two datasets have different levels of completeness, and conclusions about "the classroom experience" at any given school should be understood as illustrative rather than definitive.
