
The Letter of Recommendation AI Blind Spot Colleges Ignore

Zero of 174 universities address AI in recommendation letters. Teachers use ChatGPT to draft them. No school's policy covers it.

Nirmal Thacker, CS, Georgia Tech · Cerebras Systems AI · May 13, 2026 · 12 min read


Here is the asymmetry at the heart of the 2026 application cycle. A high school senior signs an attestation — sometimes a one-checkbox affirmation on the Common App, sometimes a full pledge embedded in a supplement — swearing that no AI wrote their essays. Their teacher, in another tab, opens ChatGPT, pastes in three bullet points from the student's brag sheet, and asks for "a warm, specific letter of recommendation, around 500 words, for a student applying to a competitive engineering program."

Both documents arrive in the same admissions file. Only one of them comes with rules attached.

We analyzed the AI policies of 174 U.S. universities for our AI policies directory. Of those 174 schools, zero name "letters of recommendation" or "LOR" in the formal scope of their AI policy. Only two — Duke and the University of Minnesota–Twin Cities — mention recommendation letters even in passing in their policy text. The other 172 have written policies that cover applicant-authored work exclusively: the personal statement, supplements, short answers. Nothing about what the recommender is doing.

This is the largest unaddressed surface in the admissions integrity conversation. And the teachers are already using AI.

The Finding: 0 of 174

Our dataset, derived from the methodology described here, classifies each school's AI policy on three axes: permission level (L0–L4), disclosure requirement (D0–D3), and enforcement posture (E0–E3). Each policy also gets a scope field listing which application components the rule applies to: "personal statement," "supplemental essays," "short answers," "portfolio statements," and so on. The rubric explicitly allows "letters of recommendation" as a scope tag.

It is unused. Across all 174 institutions in the dataset:

  • 0 schools include "letters of recommendation" or "LOR" in their scope.
  • 2 schools mention recommendation letters anywhere in their quote text or notes — Duke and UMN-Twin Cities, and one of those mentions is about not requiring them.
  • 172 schools have written policies that are silent on what recommenders may or may not do with AI.
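The rubric above can be pictured as a per-school record. This is a hypothetical sketch — the field names, values, and example school are illustrative assumptions, not the dataset's actual schema; only the L/D/E axis ranges and the scope-tag vocabulary come from the article:

```python
# Hypothetical record shape for one school's AI policy. The school name and
# field names are invented for illustration.
record = {
    "school": "Example University",   # invented, not a real entry
    "permission": "L2",               # L0 (silent) through L4 on the permission axis
    "disclosure": "D1",               # D0 through D3 on the disclosure axis
    "enforcement": "E2",              # E0 through E3 on the enforcement axis
    "scope": ["personal statement", "supplemental essays", "short answers"],
}

# The rubric allows "letters of recommendation" as a scope tag, but across
# all 174 records it never appears:
print("letters of recommendation" in record["scope"])  # False
```

Under a layout like this, the headline finding is a single membership check repeated 174 times, every one of which comes back false.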

The Duke note is the most striking. Buried in the file, our researcher recorded that Duke's admissions office "told schools they 'don't have an objection' to teachers/counselors using AI for recommendation letters." That single sentence is the closest thing the dataset contains to an institutional position on AI-in-LORs from a top-15 university — and it is permissive, undocumented in the public policy, and recorded as a notes-field aside rather than as a formal scope tag.

UMN-Twin Cities mentions LORs only to say they don't require them: "No letters of recommendation or essays are required for freshman or transfer admissions." The school's stated AI policy is L0 — silent — because there's nothing in the application for a policy to cover. That is the entire universe of LOR-adjacent policy language across 174 universities.

For comparison: 142 schools have rules about the personal statement. 88 cover supplemental essays. 31 cover short answers. The recommender side of the application — the part the applicant has zero control over — is the blind spot.
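A comparison tally like the one above is a short pass over such records. This is a toy sketch under the hypothetical schema from earlier — the three records are invented, not real schools:

```python
from collections import Counter

# Toy reconstruction of the scope-coverage count, assuming each policy record
# carries a "scope" list of component tags. The article's real numbers:
# 142 schools tag "personal statement", 88 "supplemental essays",
# 31 "short answers", 0 "letters of recommendation".
policies = [
    {"school": "A", "scope": ["personal statement", "supplemental essays"]},
    {"school": "B", "scope": ["personal statement"]},
    {"school": "C", "scope": []},  # an L0 (silent) policy covers nothing
]

coverage = Counter(tag for p in policies for tag in p["scope"])
print(coverage["personal statement"])         # 2 in this toy data
print(coverage["letters of recommendation"])  # 0 — Counter returns 0 for absent tags
```

The blind spot shows up as a tag that is legal in the vocabulary but has a count of zero.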

Teachers Are Already Using AI to Draft Letters

This isn't a hypothetical risk surface. It's a documented, on-the-record practice.

In October 2024, Education Week surveyed teachers about their letter-writing workflows and reported that roughly one in three teachers had used AI tools to help draft a letter of recommendation. Their LinkedIn survey found that of 397 teachers who responded, 130 said they used AI for the task. About half of those did so to "take the stress out of" letter writing, and roughly a third said they believed AI tools improved letter quality. The publication framed the trend as a "high-tech hack" — common enough to merit a feature.

The workload context is real. Education Week noted that nearly 40 percent of teachers who responded to an earlier survey wrote more than 10 LORs a year, and 10 percent wrote more than 30. In EdSurge's 2025 reporting on AI in college counseling, one counselor estimated he writes 120 to 150 letters per year. The national counselor-to-student ratio is 385-to-1, well above the recommended 250-to-1. Letters of recommendation are some of the most time-consuming non-instructional work a teacher or counselor does. ChatGPT eats that workload alive.

The Edutopia piece on using AI for LORs reads like a workflow guide: paste in the student's brag sheet, ask for a draft, edit for voice, finalize. The Government Technology writeup of one teacher's use case is similar. None of these articles describe a fringe practice. They describe a coping mechanism that became standard within about eighteen months of ChatGPT's launch.

The American School Counselor Association told EdSurge in 2025 that they do not track counselor AI use. No professional association is keeping numbers on this.

Why the Policy Side Stayed Quiet

Universities have spent two years writing AI policies. We have documented the detection-tool spending, the attestations, the rescission language. The omission of LORs is not a one-off oversight that 174 careful policy writers all happened to make. It's structural.

A few reasons the recommender gap persists:

Recommenders aren't applicants. A college's contract is with the student. The attestation a student signs binds the student. There's no parallel agreement with the high school teacher who writes the letter. Universities have no signing party on the recommender side to attach a rule to.

Enforcement is hard at the recommender layer. AI detectors on student essays are already controversial because of the 4 percent sentence-level false-positive rate Turnitin admits. Running detection on LORs adds a second layer of problems: teachers write in a more formulaic register than 17-year-olds, schools see fewer samples per author, and the consequence of a false flag falls on the student, not the teacher.

The system runs on trust. LORs work because admissions officers assume the named teacher actually wrote the letter and means what it says. A school questioning that assumption publicly — by requiring teacher attestations or running detection on faculty submissions — would unwind something the whole genre depends on.

Volume. A top-25 admissions office reads tens of thousands of letters per cycle. Adding any kind of screening or follow-up at the recommender level is operationally enormous.

None of those reasons make the gap correct. They explain why it exists.

The Three Scenarios

What does the gap mean for a real applicant? Three scenarios, in increasing complexity:

Scenario 1: Student complies, teacher uses AI. The applicant writes every word of their essay themselves. They sign the attestation honestly. Their teacher, separately, drafts the LOR in ChatGPT and edits lightly. The submitted file has mixed AI provenance — some AI, some not — with no disclosure on the part that wasn't covered by any policy. This is the most common case under current conditions. The student has not violated any rule. They also have less control over the AI footprint of their own file than they think.

Scenario 2: Both sides comply. Student writes their essay. Teacher writes the letter. No AI in either. This is the implicit baseline most policies assume — and the case the public language describes. It is increasingly rare on the recommender side, per the EdWeek and Edutopia reporting.

Scenario 3: Both sides use AI. The student drafts in ChatGPT and signs an attestation that says they didn't. The teacher does the same. The file is end-to-end machine-assisted. No school's current policy directly addresses this case, because each piece of the policy was written as if the other piece of the file came from a human.

Scenario 1 is the telling case: the student is fully honest, the teacher's drafting workflow is covered by no rule at all, and neither party informs the school. That scenario is invisible to every attestation the student signs.

What Common App and Coalition Say

Common App is the rule-setter for the recommendation infrastructure used by most of these 174 schools — it processes the LOR submissions, manages the recommender invitations, and defines what counts as "application fraud."

Common App's fraud policy explicitly names "the substantive content or output of an artificial intelligence platform, technology, or algorithm" as material that cannot be misrepresented as one's own original work. The pronoun there is "one's" — the applicant's. Common App's published recommender resources cover what teachers should include in a strong letter; they do not address AI use by the recommender.

This is the Common App AI policy paradox at the LOR layer: the fraud framework binds the student to a stricter standard than the policy applies to the recommender writing on the student's behalf.

The Coalition App's language is similar — applicant-side, not recommender-side. NACAC, the national professional association for college admissions counseling, updated its ethics guide in fall 2025 to add an AI section urging colleges to align AI usage "with our shared values of transparency, integrity, fairness and respect for student dignity." That language describes how colleges should use AI in evaluation. It is silent on how recommenders should use AI in submission.

The combined effect: there is no major institutional voice — not Common App, not Coalition, not NACAC, and none of 174 university policies — that has written down rules for what a recommender can do with ChatGPT before clicking "submit."

What Applicants Can Actually Do

Most of this is outside any applicant's control. Some of it isn't.

Talk to your recommenders. The single most useful thing you can do is have a real conversation with each of your recommenders about how they draft letters. Not as an accusation — as a planning conversation. Ask: "Will you be drafting from notes, or do you use any AI tools to help with letters?" Most teachers will tell you straight.

Provide a strong brag sheet. The single best protection against an AI-generic letter is to give your recommender so much specific, vivid, hard-to-paraphrase detail that no AI prompt could produce a comparable letter from the same inputs. Specific stories. Specific things you've said, quoted verbatim. Specific projects. The more granular your brag sheet, the more your letter will read as yours regardless of drafting workflow.

Choose recommenders who know you well. A teacher who's had you for two years, who's coached an extracurricular you led, or who's read your writing through a research project will produce a letter whose specificity protects you — both from generic AI drafts and from well-meaning human teachers who simply don't have the material to write a memorable letter.

Ask, if it matters to you, whether the words in the letter are theirs. Most teachers will be honest about this. Some will say they brainstorm with AI and then write the letter themselves. Some will say they don't use AI at all. A few might say they draft in ChatGPT and edit. None of those answers violates a school's published policy. All of them matter to you.

Don't ask your recommender to disclose AI use to the school. Schools haven't asked for that disclosure. Volunteering it creates more risk than it resolves.

For our broader take on the applicant-side attestation regime, see AI Attestation in College Applications.

What Universities Could Do

This section deliberately doesn't endorse a path. The policy choices here are genuinely contested.

Add LOR to scope. The narrowest change. A school could update its public AI policy to say which standards apply to recommenders, even without an enforcement mechanism. The point is the public statement.

Ask recommenders to attest. Common App could add a checkbox to the recommender interface. Schools that require D3 attestations from students could mirror that requirement for teachers. The mechanism exists; no major platform has used it.

Review teacher-side AI use as part of integrity work. Detection tools already process LORs at some schools for plagiarism. Extending that posture to AI detection would be a heavier change, with the false-positive concerns above.

Stop pretending. The most honest option, and the most uncomfortable. A school could say: "AI assistance is permitted in recommendation letters provided the content accurately reflects the recommender's knowledge of the applicant." That's roughly Duke's de facto position. Writing it down would resolve the asymmetry by acknowledging the AI use that's already happening.

Any of these is a real change. None of the 174 schools in the dataset has made it.

Why This Matters

The applicant-side AI policy infrastructure of 2026 — the attestations, the detectors, the rescission clauses — was built to govern a model of the application in which the student personally writes their half of the file and the recommender personally writes the other half. That model was already breaking before ChatGPT. ChatGPT broke it faster.

What replaces it is unclear. What's not unclear is the current state: an application file in which roughly half the documents are governed by formal AI policy and the other half are governed by nothing. Students absorb the entire compliance burden. Recommenders absorb none of it. The teacher's ChatGPT tab is invisible to the policy framework that the student is required to read.

This is the LOR blind spot. We are tracking it as a category in the dataset, and we will update this post when any of the 174 schools — or Common App, or NACAC — publishes the first piece of language that names it.

If you want to track which schools have AI policies and which don't, our AI policies directory lists all 174. The methodology page explains how the scope tags work and what's missing. We'll add an "lor" scope tag to a school's record the first time one addresses it. As of May 2026, that list is empty.


Sources on teacher AI use in LORs:

  • Education Week, "Teachers Use This High Tech Hack to Knock Out Recommendation Letters" (October 2024) — survey of 397 teachers, 130 of whom (about 33%) reported using AI tools to draft recommendation letters.
  • EdSurge, "Are High School Counselors Encouraging AI for College Applications?" (July 2025) — counselor caseload context, ASCA non-tracking of AI use.
  • Edutopia, "Using AI for Letters of Recommendation for High School Students" — workflow-style guide describing AI drafting as standard practice.
  • Government Technology, "A Teacher's Use Case for AI: Writing Recommendation Letters" — practitioner case study.
  • NACAC, "Writing and Evaluating Letters of Recommendation for Character" — fall 2025 ethics guide AI section.
  • Common App, "Understanding the recommendation process" (2025 resource PDFs) — recommender-side documentation, no AI-specific language.
