
Common App AI Policy Decoded — What You're Signing in 2026

The Common App's fraud certification covers AI-generated content. Here's exactly what you sign — and which schools enforce it on top.

Nirmal Thacker, CS, Georgia Tech · Cerebras Systems AI · May 13, 2026 · 14 min read


Short answer: When you submit through the Common Application, you sign a fraud certification that explicitly names "the substantive content or output of an artificial intelligence platform, technology, or algorithm" as a form of misrepresentation. That language is the floor. Fourteen schools in our 174-policy dataset add their own AI-specific attestation on top of the Common App's clause, and the consequences cascade — a fraud finding at one school can be reported to every other school on your list. This is a clause-by-clause walk through what the certification actually covers, what it leaves ambiguous, and which schools enforce a stricter pledge in addition. If you've read our companion post on AI attestations and what schools make you sign, this is the platform-level decoder that sits underneath all of that.

What the Common App's fraud certification actually says

The Common Application's Fraud Policy is the governing document for every applicant who submits through the platform — more than 1,000 member colleges use it. The core paragraph that applies to AI was added in August 2023 and remains the operative text for the 2025-26 and 2026-27 cycles. Here is the verbatim definition of fraud, as quoted directly on Harvard College's admissions FAQ and Brown's Integrity in the Application Process page, both of which cite it as the Common App's standard:

"Submitting plagiarized essays or other written or oral material, or intentionally misrepresenting as one's own original work: (1) another person's thoughts, language, ideas, expressions, or experiences or (2) the substantive content or output of an artificial intelligence platform, technology, or algorithm."

This is the language every Common App applicant agrees to when they certify the application before submission. It is referenced directly by Harvard, Brown, Johns Hopkins, and dozens of other member institutions as the governing standard for AI use. Eighteen percent of the universities in our dataset (32 of 174) cite the Common App's language explicitly in their own AI policies, which means the Common App is functionally upstream of school policy for those schools.

That sentence is doing three distinct things. Each one matters when you sit down to decide what is and isn't allowed.

Clause 1: "Intentionally misrepresenting as one's own original work"

"…intentionally misrepresenting as one's own original work…"

The certification is not a flat ban on AI; it is a ban on misrepresentation. Three pieces of that phrasing matter. "Intentionally" — the policy contemplates intent; in practice it is inferred from the artifact, so if your essay reads as AI-generated and you submitted it, the inference is that you knew. "As one's own original work" — the standard is originality, not authorship-only-by-you. You can have human readers, revise on feedback, use a spell checker. The line is whether the final submitted text is presented as something you originated. "Misrepresenting" — the violation is misrepresentation, not assistance.

The Common App did not invent a new category of misconduct for AI — it folded AI-generated content into the same misrepresentation standard that already covers ghost-writing, copy-pasting from the internet, and paying someone else to write your essay.

Clause 2: "Another person's thoughts, language, ideas, expressions, or experiences"

"…(1) another person's thoughts, language, ideas, expressions, or experiences…"

This is the human-authored prong. It catches the classic plagiarism cases — copying from a published essay, using a sample SOP, paying a consultant. The list — thoughts, language, ideas, expressions, experiences — is deliberately broad. "Experiences" is the kicker: it covers the case where a parent or consultant invents an internship for you to write about. You did not write the words; you also did not live the experience. Both are misrepresentation. This clause matters for AI because it sets up the parallel structure of clause 3: AI authorship is treated as category-equivalent to human ghost-writing.

Clause 3: The AI prong

"…or (2) the substantive content or output of an artificial intelligence platform, technology, or algorithm."

Three phrases do the heavy lifting.

"Substantive" is the single most important word in the policy. The Common App did not write "any use" or "any output." It wrote substantive. That word is the bright line. Substantive output contributes to the meaning, argument, voice, or structure of your essay — a paragraph drafted by ChatGPT and pasted in, a sentence-level rewrite the AI generated, an AI-generated opening line, even an AI-suggested topic you build the whole essay around. Non-substantive output is mechanical — spelling corrections, grammar fixes, a flagged typo, the same things a human proofreader would catch. Brown stitches this together explicitly: their policy permits "artificial intelligence to assist with spelling and grammar review, in the same way as any other platform that supports basic proofreading," followed by the floor: "the content of all essays, short-answer questions and any other material submitted by an applicant must be the work of that individual."

"Content or output" covers both finished text ("output") and the underlying ideas the AI generates ("content"). This closes the brainstorming loophole. If you ask ChatGPT for ten essay topics and write your essay around topic #7, the content of that topic is AI-generated — even though no AI-generated words appear in your final draft. The strictest reading catches that case; the pragmatic reading depends on how far your final draft is from the AI's suggestion. Most admissions offices have not staked out a clear position on AI-as-brainstormer. Some, like NC State, explicitly allow it. Others, like Brown, do not. The Common App's text leaves room for either interpretation.

"Platform, technology, or algorithm" is intentionally broad. It covers ChatGPT, Claude, Gemini, Llama, and every other LLM by name. It covers Grammarly's premium features when they go beyond mechanical correction. It covers AI-powered autocomplete in your text editor. The "algorithm" wording is a backstop: even if a future tool isn't called "AI," if it's algorithmic content production, the clause catches it.

What the Common App has NOT said publicly about AI

The fraud clause is the policy. There is no separate "AI guidance for applicants" FAQ walking through scenarios like "is Grammarly OK?" — those interpretations are left to member colleges. The Common App does not run AI detection itself; detection is a school-level decision. And the fraud reporting flow is not automated: a school that suspects fraud reports it to the Common App, which can then flag the applicant's account and notify every other school on the applicant's list. The trigger is a reader's suspicion at a member college.

Our disclosure landscape analysis shows that only 14 of 174 schools require an AI-specific attestation on top of the Common App — 158 of 174 (91%) are D0, relying entirely on the Common App's floor.

The platform layer vs the school layer

Think of the Common App's fraud certification as the floor. Every member college inherits it automatically. On top of the floor, 14 schools build their own policy layer — a school-specific AI attestation (a separate "I confirm" pledge naming AI by name — the D3 tier in our policy rubric), a published school policy that the Common App certification is read against, or both. The 14 D3 schools have decided the floor isn't enough.

Decoder table: the 14 schools that enforce on top of the Common App

These are the schools, as of the 2026 cycle, that require an AI-specific attestation in addition to the Common App's general fraud certification.

School | Permission | What the school adds on top of the Common App
Brown | L4 | Undergrad: "the use of artificial intelligence by an applicant is not permitted under any circumstances in conjunction with application content." Master's applicants certify "the content of all essays… is my own work."
BYU | L4 | "You may not use generative AI tools (like ChatGPT) as you compose your responses." Software-based screening + admission rescission.
Caltech | L2 | "Failure to comply with the Ethical Use of AI guidelines may result in the rescission of your admission to Caltech." Grad essays "must be written entirely by the applicant."
Georgetown | L4 | Portal attestation: "use of artificial intelligence (AI) tools to complete any portion of the application, including essays, is prohibited."
Harvard | L4 | Quotes the Common App fraud clause verbatim and adds AI essays "violate the Common Application and Coalition on Scoir Application standards as well as the Harvard College Honor Code."
Johns Hopkins | L3 | Admissions FAQ cites the Common App's clause; limits permitted use to "basic proofreading… such as spelling and grammar review." Brainstorming explicitly allowed.
NC State | L3 | Signed: "My application materials were not created by another person or by a generative artificial intelligence system."
SMU | L4 (grad) | The clearest D3 in the dataset: "I confirm that I have completed my admission application… without any artificial intelligence (AI) assistance. Any violation of this commitment may result in the disqualification of my application."
UC Berkeley | L2 | UC system: "generative artificial intelligence software to assist with readability, but content and final written text must be their own." UC runs plagiarism checks.
UC Santa Cruz | L2 | Inherits UC system attestation.
UCLA | L2 | Inherits UC system attestation; scope covers five distinct application elements.
UC Santa Barbara | L2 | Inherits UC system attestation.
UVA | L2 | Application-level attestation: affirm work is your own and AI was not used substantively.
William & Mary | L3 | Brainstorm-only permission with a school-specific affirmation.

Six of these schools (Caltech, the four UC campuses, UVA) are L2 schools that permit line-level AI editing AND require an AI-specific attestation. That's the "engaged trust" stance — we'll let you use it, but we want you to sign that you know the rules. A D3 pledge is not always a hardline ban; sometimes it's a permission-management tool. The other eight (Brown, BYU, Georgetown, Harvard, Johns Hopkins, NC State, SMU, William & Mary) are L3 or L4 schools that use the attestation as a hardline ban backstop.

What "fraud" actually means — the ambiguity is real

Clearly covered (fraud): Copy-pasting a ChatGPT-generated paragraph into your personal statement. Asking Claude to "write me an essay about resilience" and submitting it. Using a paid service that uses AI to draft your essay. Running your finished draft through an AI "humanizer" tool to disguise AI-generated content — fraud, with the deception as an aggravating factor.

Clearly permitted: Microsoft Word's spell checker, Google Docs' grammar suggestions, Grammarly's basic spell/grammar features (Brown explicitly permits this), a human friend pointing out a typo on your draft — proofreading is not misrepresentation of another person's thoughts.

Ambiguous (the hard cases): AI-assisted brainstorming (NC State explicitly allows it; Brown explicitly does not). AI as a "second reader" giving you feedback — none of the AI's words make it into your draft, but your revision was influenced; the Common App doesn't say. Translation tools for non-native English speakers (Caltech prohibits AI translation; the Common App's text doesn't address it). AI-powered autocomplete in your text editor (cleanest answer: turn the feature off). Sentence-level "rewrite for clarity" features in Grammarly Premium — by the substantive standard, over the line; by the "I clicked accept" reading, arguably yours. The safer reading is the former.

The honest truth is that the Common App wrote a standard, not a rulebook. The standard is substantive misrepresentation. The cases above don't have clean answers because the standard requires judgment, and the judgment depends on facts only you know.

What happens if you violate the certification

The consequences cascade — the structural feature that makes a Common App violation more dangerous than a standalone application. The detecting school can rescind admission or deny the application (Caltech: "Failure to comply with the Ethical Use of AI guidelines may result in the rescission of your admission to Caltech." BYU: "BYU may rescind the admission offer of any student whose essay is found to have been generated by AI." SMU: "disqualification of my application."). The Common App can flag your account with a fraud notation that may persist across cycles. The Common App can notify every other school on your My Colleges list — one school's finding becomes visible to every other school. And some schools treat violation as grounds for expulsion even after enrollment: Yale's published policy states "submitting personal statements composed by text-generating software may result in admission revocation or expulsion." A finding can remove a student who has already started classes.

The cascade is why the Common App's fraud certification is more consequential than a single school's policy. One slip travels.

How to read your certification before you sign it

  1. Read the certification text in full before you click "I agree." The phrase "substantive content or output of an artificial intelligence platform, technology, or algorithm" should appear in any current version.
  2. For each school on your list, check whether they add a D3 attestation. Our /ai-policies directory is filterable by disclosure tier. If a school is D3, you are signing two things — the Common App's fraud clause and the school's specific AI pledge.
  3. Audit your writing process against the substantive standard. The test is not "did I use AI at all?" — it's "did AI contribute substantively to what I'm submitting?" Mechanical proofreading is fine; substantive drafting, rephrasing, or restructuring is not.
  4. Save your drafts. If a school questions authenticity, your draft history, revision marks, and notes are the strongest defense. We covered this in our deep dive on whether colleges actually use AI detectors — the false-positive rate is non-trivial, so contemporaneous drafts matter.
  5. If you've already used AI substantively, fix it before submission. Rewrite from scratch in a different document with no reference to the AI version. The further your final draft is from what the AI produced, the lower the risk.
  6. If a school is in the 14 D3 schools above, take its specific pledge seriously. "I didn't know" is not a defense against a clause you signed.

How the Common App compares to other application systems

For applicants applying through multiple systems, the rules are not the same. The Common App uses the misrepresentation standard described above ("substantive" is the key word). The Coalition on Scoir Application has substantively similar fraud language. AMCAS (medical school) allows AI for brainstorming, proofreading, and sentence-level editing but bans wholesale drafting — more permissive than the Common App's strictest readings. CASPA (PA school) is the strictest of any centralized US application system on substantive AI use; our CASPA AI certification decoded post walks through the verbatim language. The UC application has its own AI language separate from the Common App, permitting AI "to assist with readability, but content and final written text must be their own" — broadly compatible with the Common App's substantive standard. Across all these systems, the principle converges: AI can polish, not draft. The Common App's phrasing is the most economical version, and it leaves more interpretation to individual schools than CASPA or AMCAS do.

What this means for you in 2026

The Common App fraud certification is real, binding, and underread. The single most important word is substantive. If your final submitted essay reflects your own thinking, written in your own voice, with no more AI help than a spell-checker would provide, you are clearly compliant. If your essay was drafted by an LLM and you cleaned it up, you are clearly not. The hard cases — brainstorming, AI feedback, sentence-level rewrites — are not addressed cleanly in the policy text, so individual schools fill the gap. For 14 schools, that gap is filled by an explicit additional attestation. For the other 160, it's filled by reader judgment when an essay looks off.

Sign nothing you haven't read.

