Colleges Ban Student AI but Use AI to Read Your Essays

Virginia Tech uses AI to score essays. UNC has since 2019. Meanwhile Georgetown and Brown ban all AI. The double standard, explained with data.

GradPilot Team · February 15, 2026 · 10 min read


The admissions double standard no one is talking about -- until now.

Here is the situation: Georgetown tells applicants that "use of AI tools to complete any portion of the application is prohibited." Brown says AI is "not permitted under any circumstances in conjunction with application content." Yale calls submitting AI output "application fraud" that can lead to expulsion.

Meanwhile, Virginia Tech deployed an AI system to score the very essays it receives. UNC Chapel Hill has been running essays through AI analysis since 2019. NC State uses AI for transcript processing.

Schools are telling you not to use AI while using AI themselves. This post maps the double standard with verified data from 150+ university policies.

The Schools Using AI on Your Applications

Virginia Tech: AI Replaces One Human Reader

In the 2025-2026 admissions cycle, Virginia Tech became one of the first major U.S. public universities to publicly deploy AI essay scoring for all undergraduate applicants. The AI system works alongside one human reader, effectively replacing the second human reader that previously evaluated every essay.

VP for Enrollment Management Juan Espinoza acknowledged the scrutiny the decision would face: "You roll this out, we're watching you." The university emphasized that "AI is being utilized to confirm the human reader essay scores, not make any admissions decisions."

Here is what makes this notable: Virginia Tech has no published AI policy for student use in admissions essays (coded L0/D0/E0 in our data: no stated restriction, no disclosure requirement, no enforcement mechanism). The school uses AI to read your essays but has not told you whether you can use AI to write them. The university's general honor code suggests unauthorized AI use "may fall under several definitions of academic dishonesty," but there is no admissions-specific guidance.

Source: VPM News, December 2025

UNC Chapel Hill: AI Essay Analysis Since 2019

UNC Chapel Hill has been using AI to analyze submitted essays for longer than most schools have been thinking about AI policies. According to admissions FAQ documentation, the system performs "writing style, grammar, and academic rigor assessment."

UNC's student-facing policy? The school tells applicants to "be yourself and sound like yourself." That is the extent of it. UNC is coded L0/D2/E0 in our database: no stated restriction on student AI use, disclosure that essays are screened (D2), and no formal enforcement mechanism. In other words, the school screens your work with AI but does not restrict or enforce rules about your own AI use.
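If you want to reason about these codes programmatically, a record like UNC's might look something like the following. This is a minimal sketch in Python: the `PolicyRecord` class and its field names are our own illustration, not an export format from the policy database, and the level descriptions in the comments are paraphrased from the scale used in this post.

```python
from dataclasses import dataclass

@dataclass
class PolicyRecord:
    """One school's coded admissions AI policy (L/D/E scale as used in this post).

    restriction (L): 0 = no published policy ... 4 = complete prohibition
    disclosure  (D): 0 = none, 2 = screening disclosure, 3 = formal attestation required
    enforcement (E): 0 = none, 1 = standard enforcement, 2 = screening tools, 3 = formal verification
    """
    school: str
    restriction: int
    disclosure: int
    enforcement: int
    notes: str = ""

# UNC Chapel Hill as described above: the school screens essays with AI (D2)
# but states no restriction on applicants (L0) and enforces nothing formally (E0).
unc = PolicyRecord(
    school="UNC Chapel Hill",
    restriction=0,
    disclosure=2,
    enforcement=0,
    notes="AI essay analysis since 2019; applicants told to 'be yourself and sound like yourself'",
)
```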

NC State: AI for Transcript Summaries, Attestation for Students

NC State presents an interesting contrast. The university uses AI tools internally for transcript processing and evaluation. But for students, NC State requires a formal D3 attestation: "My application materials were not created by another person or by a generative artificial intelligence system."

NC State does allow brainstorming with AI tools but draws a hard line at submitting AI-generated language. The school is coded L3/D3/E3 -- one of the strictest in our database. Students are held to a formal certification; the institution itself uses AI freely.

Georgia Tech: Permissive for Students, Silent on Institutional AI

Georgia Tech takes a different approach. The school is coded L2/D0/E0 -- relatively permissive. Georgia Tech's admissions blog states: "AI tools can be powerful and valuable in the application process when used thoughtfully." The school allows brainstorming, editing, idea generation, and even "constructing resumes in Activities section" with AI, while requiring that "your ultimate submission should be your own."

Georgia Tech has not publicly disclosed whether it uses AI in its own review process. Among the schools we examined, this is the most internally consistent posture: permissive toward students, with no known institutional AI deployment.

The Strictest Bans: What These Schools Say About Student AI

While some schools quietly deploy AI on their side of the process, others have published the harshest possible language banning student AI use.

Georgetown University (L4/D3/E3)

"use of AI tools to complete any portion of the application is prohibited"

Georgetown's undergraduate admissions policy carries the maximum restriction level (L4), requires a formal attestation (D3), and uses formal verification (E3). The ban covers "all application materials." This is a complete prohibition: not even grammar checking with AI is permitted.

Brown University (L4/D0/E1)

"not permitted under any circumstances in conjunction with application content"

Brown's language -- "under any circumstances" -- is among the broadest in higher education. The school does allow "basic grammar and spelling review" but prohibits any AI involvement in content creation. Brown does not require a formal attestation (D0) and relies on standard enforcement (E1), but the policy language itself is absolute.

Yale University (L3/D0/E3)

"Submitting the substantive content or output of an artificial intelligence platform constitutes application fraud."

Yale classifies AI use as fraud and backs it with the strongest possible consequence:

"Submitting personal statements composed by text-generating software may result in admission revocation or expulsion."

Yale allows "grammar and spelling review" and "general writing advice" but draws the line at any substantive content. The school uses formal verification systems (E3). Note that Yale does not publicly disclose whether it uses AI in its own review process.

Mapping the Double Standard: The E-Dimension

Our enforcement dimension (E) tracks how schools verify compliance. Here is what the data reveals across 150+ schools:

  • E0 (no enforcement): the majority of schools. No detection tools, no formal verification.
  • E1 (standard enforcement): policy language backed by ordinary application review, without detection tools or attestation. Brown is an example.
  • E2 (screening tools): 46 schools (28% of our database) use AI detection or screening tools to evaluate submitted essays. These include Duke, Carnegie Mellon, Columbia, Stony Brook, BYU, Bates, and many others.
  • E3 (formal verification): 17 schools (10% of our database) have formal verification systems. These include Georgetown, Yale, Stanford GSB, SMU, NC State, Villanova, Penn State Smeal, Carnegie Mellon Heinz, UC Santa Cruz, UC Merced, UC Riverside, Stony Brook Creative Writing, and Bates Environmental Studies.

This means 38% of the schools in our database apply some form of AI detection or formal verification to submitted essays, even as many of those same schools restrict or prohibit students from using AI.
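As a quick sanity check on those shares, here is the arithmetic in a few lines of Python. The E2 and E3 counts come from this post; the 165-school total is an assumption inferred from the stated percentages (46 is roughly 28% of 165), not a published figure.

```python
# Counts reported in this post; TOTAL is an assumed database size inferred
# from the stated shares (46 schools at ~28% implies roughly 165 coded policies).
E2_SCREENING_TOOLS = 46
E3_FORMAL_VERIFICATION = 17
TOTAL = 165  # assumption, not a published figure

e2_share = E2_SCREENING_TOOLS / TOTAL                             # ~0.28
e3_share = E3_FORMAL_VERIFICATION / TOTAL                         # ~0.10
combined = (E2_SCREENING_TOOLS + E3_FORMAL_VERIFICATION) / TOTAL  # ~0.38

print(f"E2 (screening tools):       {e2_share:.0%}")
print(f"E3 (formal verification):   {e3_share:.0%}")
print(f"Any detection/verification: {combined:.0%}")
```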

Notable E2 Schools (Screening Tools)

These schools actively screen essays using AI detection or similar tools:

| School | L/D/E | What They Screen For |
|---|---|---|
| Carnegie Mellon | L2/D0/E2 | AI-generated content detection |
| Columbia (GSAS) | L4/D0/E2 | AI-generated application materials |
| Duke Law | L4/D0/E2 | AI-generated written products |
| Stony Brook | L2/D0/E2 | AI-generated essays |
| BYU | L4/D3/E2 | Software detection + attestation |
| Penn (Wharton) | L4/D0/E2 | AI-authored content |
| USC Law | L4/D0/E2 | AI in application completion |

Notable E3 Schools (Formal Verification)

| School | L/D/E | Verification Method |
|---|---|---|
| Georgetown | L4/D3/E3 | Attestation + formal verification |
| Yale | L3/D0/E3 | Fraud classification + verification |
| SMU | L4/D3/E3 | Signed attestation + disqualification |
| NC State | L3/D3/E3 | Signed statement + verification |
| Villanova | L4/D0/E3 | Denial or rescission |
| Stanford GSB | L4/D0/E3 | Denial of application |
| Penn State Smeal | L4/D0/E3 | iThenticate software verification |

The Teacher Double Standard

The asymmetry is not limited to institutions. A 2024 study by foundry10 found that 31% of teachers use AI to help write recommendation letters -- the same letters that vouch for a student's character and academic ability. Yet many of those same teachers would consider it dishonest for a student to use AI on application essays.

The study also found that counselors are increasingly concerned about AI in the application process, but the concern is overwhelmingly directed at student use rather than institutional or educator use.

This creates a layered double standard:

  1. Schools use AI to evaluate essays but ban students from using AI to write them.
  2. Teachers use AI for recommendation letters but expect students to write essays without AI assistance.
  3. Detection tools used by schools have documented error rates (Turnitin's reported 4% false positive rate, for example) that disproportionately flag non-native English speakers.

The Philosophical Question

Is it fair to use AI to judge essays while banning AI to write them?

The institutional argument is straightforward: schools need efficiency tools to process tens of thousands of applications. Virginia Tech receives over 40,000 applications annually. Human-only review at that scale is expensive and slow. AI scoring as a confirmation layer is a practical solution.

The student argument is equally straightforward: if the school does not trust its own ability to evaluate essays without AI assistance, why should students trust their ability to write essays without AI assistance?

The deeper issue is about what the essay is supposed to measure. If it measures writing ability, and the school is not even reading it with human eyes first, does the measurement still mean what it used to? If it measures authenticity and voice, can an AI system reliably evaluate those qualities?

Duke's response -- eliminating essay scores entirely because they are "no longer assuming that the essay is an accurate reflection of the student's actual writing ability" -- may be the most honest acknowledgment of this tension.

What This Means for Applicants

  1. Know the specific policy. A school's E-level tells you whether they are actively screening your work. Check your target schools on our policy directory.
  2. Do not assume reciprocity. A school using AI in its own process does not mean it permits student AI use. Virginia Tech is the clearest example of this asymmetry.
  3. The attestation schools mean it. If a school requires D3 attestation (Georgetown, SMU, BYU, NC State, and others), the pledge is binding regardless of what the school does on its end.
  4. Watch the E2 schools. The 46 schools with screening tools are actively looking for AI-generated content. Even if they do not require attestation, they are checking.
  5. Your best protection is authentic writing. The strongest safeguard against both AI detectors and institutional scrutiny is writing that sounds like you -- with your specific details, your actual experiences, and your genuine voice.
