
11 Universities That Actually Let You Use AI in Essays

Most coverage names schools that ban AI. Here are the 11 universities that explicitly permit it — no attestation, no enforcement.

Nirmal Thacker, CS, Georgia Tech · Cerebras Systems AI · May 13, 2026 · 14 min read

Every other article about AI in college admissions is written from the same angle: which schools ban it, which schools detect it, which schools will revoke your offer if they catch you. That story has been told. It's also incomplete.

We pulled the full text of 174 US university AI admissions policies. Most schools are restrictive. A handful are flatly prohibitive. But a small cluster — 11 schools — sits at the opposite end. They explicitly permit AI use in application essays, they don't ask you to disclose it, and they don't claim to enforce any rule against it. No attestation. No detection. No "we will rescind your offer."

Six percent of the dataset. That's the entire permissive cluster, and most of it is invisible in mainstream coverage. This post is that list, with the actual quotes from each school's admissions page.

What "permitted" means here

Two definitions you need before the list:

L1 — Permissive / integrative. "AI-generated text may be included." The school accepts that the words in your essay may have been produced with AI, as long as you take responsibility for the content. Only two schools in the entire 174-school dataset hit this bar: Duke and the University of Wisconsin-Madison.

L2 — Line-level help. "AI can paraphrase, edit, and refine your own words." You can use AI substantively as a writing collaborator — for editing, refinement, rewording, idea generation — but the final text has to be something you produced. Twenty-eight schools hit this bar in the dataset. Only nine of them avoid disclosure and enforcement signals. Those nine are below.

You can read the full methodology at /ai-policies/methodology. The short version: levels describe what the school allows, D0 means no AI disclosure is asked anywhere in the application, and E0 means the school describes no detection process, AI-screening tool, or special review for AI-flagged essays. L1/L2 + D0 + E0 is the cleanest "AI is allowed, no strings" combination available.

Eleven schools clear that bar. Here they are.


The two L1 schools: AI-generated text is fine

1. Duke University — L1 / D0 / E0

Duke is the most permissive school in the dataset, and the policy is recent. In 2024, Duke removed numerical essay scoring in undergraduate admissions, citing AI as a key factor. In 2025, Interim Dean Kathy Phillips told The Duke Chronicle:

"We don't think of [AI] as an inherently bad tool for students to use."

"AI can only write about what the students put into it."

"We're just no longer assuming that the essay is an accurate reflection of the student's actual writing ability."

The admissions website itself contains no AI restrictions for undergraduate applicants. Duke went a step further and introduced an optional AI-themed supplemental essay prompt for the 2025-2026 cycle.

Practical takeaway for an applicant: Duke is the rare school where AI assistance — including AI-drafted text — appears to be genuinely tolerated. Read Phillips's full position via The Duke Chronicle or see our full breakdown at /ai-policies/duke. Note: this is undergraduate-only. Duke Law and Duke Nursing have much stricter rules.

2. University of Wisconsin-Madison — L1 / D0 / E0

UW-Madison is the only school in the dataset that explicitly says, in writing, on the admissions site, that it will not penalize applicants for AI use:

"We will not disqualify an applicant found to have used or suspected of using AI in their admissions essays."

"nor are we running essays through any system to detect if AI was used."

The school does discourage one specific behavior:

"we strongly discourage students from simply feeding AI a prompt for their essay."

But "discourage" is the strongest word the policy uses. No detection. No enforcement. No disclosure question.

Practical takeaway for an applicant: UW-Madison has put the absence of AI enforcement in writing, which is rarer than you'd think. See the policy at admissions.wisc.edu/essays or our summary at /ai-policies/uw-madison. The La Follette School of Public Affairs graduate program has an opposite policy — full attestation required — so this only applies to undergraduate admissions.


The nine L2 schools: line-level help, no strings attached

3. Boston College — L2 / D0 / E0

BC's undergraduate admissions Apply page contains a clear "Note on Use of Artificial Intelligence":

"Generative artificial intelligence tools like ChatGPT may also serve as a resource, but must only be used as a guide."

"content must never be directly copied from AI or other sources."

"Personal statements, supplemental essays, and free responses provide students with a valuable opportunity to have an authentic voice."

What's missing from BC's policy is just as important: no disclosure prompt anywhere in the application, and no mention of detection tools.

Practical takeaway for an applicant: BC treats AI like a tutor — fine for guidance, not fine if you copy. See the policy at bc.edu/bc-web/admission/apply.html or /ai-policies/boston-college.

4. Carnegie Mellon University — L2 / D0 / E0

CMU's undergraduate admission FAQ permits substantive AI use:

"AI should never replace your unique voice, experiences and personal expression."

"should only serve as a supplementary tool to enhance your writing."

The policy explicitly names grammar checks, spelling corrections, and structural and vocabulary improvements as allowed uses. No disclosure is asked. No detection is described.

Important caveat at CMU: the undergraduate university-level policy is permissive, but CMU's Heinz College graduate programs and the Entertainment Technology Center have explicit AI bans. The L2/D0/E0 classification applies to the undergraduate Pittsburgh-campus admissions pipeline.

Practical takeaway for an applicant: CMU undergrad is one of the most permissive tech-flagship policies. See the FAQ at cmu.edu/admission/admission/admission-faq or /ai-policies/carnegie-mellon.

5. Colorado School of Mines — L2 / D0 / E0

Mines goes further than most in actively recommending AI:

"We encourage you to utilize genAI tools to brainstorm, edit, and refine your ideas."

"While they are especially useful for generating and organizing ideas, your final submission should represent your own work."

"Please do not copy and paste content you have not created directly into your application."

The policy frames AI as a "helpful collaborator." No disclosure. No detection. The page even names specific tools — ChatGPT, Google Gemini — and treats them as legitimate aids.

Practical takeaway for an applicant: Mines is one of only a few schools where the admissions office is actively recommending you use AI. See mines.edu/undergraduate-admissions/admissions-considerations or /ai-policies/colorado-school-of-mines.

6. Columbia University — L2 / D0 / E0

Columbia's undergraduate admissions stance was updated in late 2025:

"Columbia expects that all submitted application materials are the applicant's own original work and reflect their authentic voice."

"Students may utilize a wide range of resources, tools, and support during the college admissions process."

"Applicants are encouraged to familiarize themselves with Columbia's Honor Pledge & Honor Code and Generative AI Policy."

The current policy is L2/D0/E0 for undergraduate (Columbia College and Columbia Engineering) admissions. Worth flagging clearly: every Columbia graduate school — GSAS, School of the Arts, the Law School, SIPA — has explicit L4 prohibitions on AI use in application materials. The Business School is the only graduate exception, and it has its own L2 policy.

Practical takeaway for an applicant: Columbia undergrad permits substantive AI assistance and asks no disclosure. Read the policy at undergrad.admissions.columbia.edu/apply/firstyear or /ai-policies/columbia.

7. Georgia Institute of Technology — L2 / D0 / E0

Georgia Tech wrote the playbook. Its first public AI admissions policy went up in July 2023, before nearly any peer school. The Application Review page still hosts the clearest statement of any school in this list:

"AI tools can be powerful and valuable in the application process when used thoughtfully."

"Use it to brainstorm, edit, and refine your ideas."

"your ultimate submission should be your own...you should not copy and paste content you did not create."

A September 2025 admissions blog post by Rick Clark adds:

"you may lean on ChatGPT for brainstorming or initial idea generation, but your voice, your thoughts, style and convictions" should dominate the essay.

GT is the only school in this cluster that is simultaneously the earliest mover, the author of the most thorough policy, and L2/D0/E0. The policy explicitly treats AI as analogous to other forms of human collaboration.

Practical takeaway for an applicant: If you want a model AI policy to understand, read GT's. See admission.gatech.edu/first-year/application-review or /ai-policies/georgia-tech.

8. Olin College of Engineering — L2 / D0 / E0

Olin's "Acceptable Use of AI in Your Olin Application" section uses the same trusted-adult analogy that GT and others lean on:

"Tools for the review of the grammar and spelling in your Olin essays are examples of acceptable uses of AI."

"Copying and pasting directly from an AI generator is not permitted."

"Consider how a parent, teacher, counselor, or other trusted adult might support you in writing your college application essays."

Brainstorming, grammar, spelling, and initial topic research are explicitly permitted. The Olin admissions FAQ does not ask for disclosure. No detection mechanism is described.

Practical takeaway for an applicant: Olin is one of the smallest schools on this list and the policy is one of the clearest. See olin.edu/admission/apply/admission-process or /ai-policies/olin.

9. University of Connecticut — L2 / D0 / E0

UConn's Apply page hosts a dedicated AI guidance section that mirrors Georgia Tech's framing almost word-for-word:

"We realize tools like ChatGPT, and other AI-based assistance programs have value, and could provide some benefits."

"use it to brainstorm, edit, and refine your ideas."

"your ultimate submission should be your own."

"do not copy and paste content you did not create directly into your application."

"your unique and authentic writing style is extremely valuable as we consider your application."

The policy is repeated verbatim on UConn's CEIN Nursing program page, suggesting it's now the institutional house style.

Practical takeaway for an applicant: UConn is one of the few flagship state schools to publish a permissive AI policy with no disclosure prompt. See admissions.uconn.edu/apply or /ai-policies/uconn.

10. University of Georgia — L2 / D0 / E0

UGA's First-Year FAQ uses the human-analog framing that runs through this whole cluster:

"AI based writing assistance programs should be treated like any other form of assistance, whether it is a parent, counselor or friend."

"The writing you submit on your application must be your own."

"The essay prompts we have chosen help us to better understand you as a person, and the only one who can really share this is you."

No disclosure prompt. No enforcement. The UGA Graduate School has a separate restrictive policy on AI in theses and dissertations — but that doesn't touch admissions writing.

Practical takeaway for an applicant: UGA is one of two SEC schools in the permissive cluster. See admissions.uga.edu/admissions/first-year/first-year-faq or /ai-policies/uga.

11. Vanderbilt University — L2 / D0 / E0

Vanderbilt's admissions FAQ uses the trusted-adult framing:

"ChatGPT and other forms of AI may be viewed as one of these sources of assistance."

"AI should never be used to replace independent thinking on the part of the applicant."

"they should always use their own voice and write about their own life experiences."

Brainstorming, grammar checking, proofreading, and feedback on clarity and structure are all explicitly permitted. Vanderbilt asks no disclosure question and describes no enforcement mechanism. Notable: Vanderbilt is the second SEC school in this cluster.

Practical takeaway for an applicant: Vanderbilt is one of the more selective schools in the permissive cluster, which itself is unusual. See admissions.vanderbilt.edu/faq or /ai-policies/vanderbilt.


What you'll notice reading all 11 in a row

A few patterns emerge that are obvious only when you read these policies back-to-back:

The L1 tier is tiny. Only two schools in the entire 174-school dataset met the L1 bar — and one of them (Wisconsin) explicitly says it won't penalize wholesale AI generation, while still discouraging it. The default permissive position has settled at L2: "substantive help, but write your own final words." If you're hoping a school will just say "AI is fine, do whatever," that is now a list of two.

The shared language. Re-read the GT, UConn, UGA, Vanderbilt, and Olin quotes. The phrasing converges: AI is fine "as one of these sources of assistance" or "like any other form of assistance," compared to "a parent, counselor, or friend." This is not coincidence. The trusted-adult analogy is becoming the standard institutional frame for what AI does for an applicant.

Tech-flagships dominate. Four of the 11 are heavy-engineering institutions: Georgia Tech, Carnegie Mellon, Colorado School of Mines, and Olin. These are schools whose faculty and students engage with AI as a subject of study. Their admissions policies reflect that fluency. Schools that don't engage with AI seriously tend to default to suspicion.

The SEC tilt. UGA and Vanderbilt are the only two SEC schools on this list. Both are large research universities. Most of the SEC is either L3 (brainstorming only) or L4 (banned outright), so the inclusion of two SEC flagships is notable rather than typical.

Permissive at undergrad, restrictive at the graduate level. Almost every school here has at least one graduate or professional program that contradicts the undergraduate policy. Duke Law bans AI completely. CMU's Heinz College threatens automatic denial. Columbia's GSAS, Law, Arts, and SIPA programs all ban AI. UW-Madison's La Follette School requires an AI-free attestation. The permissive cluster is an undergraduate phenomenon, not an institutional one.

What this list does NOT mean

A few things to be clear about before you celebrate.

Common App certification still applies. Every school here uses the Common Application. When you submit through the Common App, you certify your application is your own work. That certification interacts with AI use in ways most applicants don't think through. If you want to understand exactly what you sign and how it interacts with AI use, read /news/ai-attestation-college-application-what-you-sign. It applies even at schools with the most permissive institutional stance.

Fabrication is still fraud at all 11. Every policy quoted above explicitly prohibits submitting AI-generated content that misrepresents who you are. AI can help you edit, draft, polish, refine. AI cannot invent experiences you did not have. The line is not "did AI touch this," it's "is this still a truthful representation of you."

Other schools detect, and your essay goes to all of them. The Common App sends the same essay to every school on your list. If you write with AI assistance and apply to UW-Madison (which won't check) and BYU (which has explicit detection and rescission policies), your essay is the same essay at both schools. The strictest policy on your list effectively governs your behavior. See /ai-policies/insights/strictest-ai-policies for the inverse of this article.

Permissive undergraduate is not permissive forever. Several schools on this list have shifted in the past 24 months — Boston College, Carnegie Mellon, and Olin all softened their enforcement language between the 2024 and 2026 reviews. Duke moved from L2 to L1 in February 2026. Policies can also tighten. The verbatim quote you read today is the policy that applies today; check before you apply.

Common App's own position is a constraint. The Common App's fraud policy treats submitting "the substantive content or output of an artificial intelligence platform" as application fraud. This sits on top of every institutional policy and creates a tension the platform has not fully resolved. We covered that contradiction in /news/common-app-ai-policy-paradox.

How to use this list

Eleven schools have publicly stated they accept substantive AI help on essays. That's a smaller list than "schools that won't catch you" — it's the schools that have written down their willingness to accept the practice. The L2 cluster's signature phrases — "brainstorm, edit, and refine," "your own work," "ultimate submission" — are also how you'll spot permissive policies at schools we didn't cover here.

For the full searchable directory of policies at all 174 schools we've reviewed, see /ai-policies. For the methodology behind the L1-L4 scale, /ai-policies/methodology. For the opposite of this article, see /ai-policies/insights/strictest-ai-policies.

Six percent of the dataset will let you use AI without strings. Now you know which six percent.
