31% of Teachers Used ChatGPT for College Rec Letters
Foundry10 surveyed 425 teachers: 31% used AI for college rec letters. Teachers rate that use as more ethical than students rate it.
Almost a third of US high-school teachers used a generative AI tool to help write a college recommendation letter, according to a Foundry10 survey of 425 teachers fielded in early 2024. The most common reason teachers gave: stress reduction, cited by 53% of them (Foundry10, July 2024). Recommendation letters are unpaid labor, written in the same months teachers are also grading, coaching, and running their own classes — so the appeal of AI assistance is, on its face, completely understandable.
But the same survey contains a second finding that is harder to wave away. Teachers rate their own AI use on rec letters as more ethical (3.26 on a 5-point scale) than they rate students' AI use on personal essays (2.91). Students rate teacher AI use on rec letters at 2.91 — the same number students give their own essay AI use. So teachers think one thing about teacher AI use and students think another, and the gap is large enough to matter (Foundry10, July 2024). This post is about that gap: what the numbers actually show, why a stress-reduction rationale doesn't fully resolve it, and what it means for the student whose name is on the letter.
Updated: May 10, 2026. We revise this post as new research emerges.
The numbers
The Foundry10 white paper "Navigating College Applications with AI" surveyed two parallel panels in February and March 2024: 425 US high-school teachers and 523 US teens aged 16–18 who had applied to college that cycle (Foundry10, July 2024). On the teacher side, the headline numbers are:
- 31% of teachers report using a generative AI tool to help write at least one college recommendation letter.
- 53% of those teachers cited reducing stress as a reason.
- 64% of teachers say they worry AI will impair students' problem-solving.
- 72% of teachers say AI tools enable student cheating.
The last two numbers sit awkwardly next to the first one. A clear majority of teachers worry AI is enabling students to cheat, even as nearly one in three used AI to write the letter that vouches for a student's character. The Foundry10 authors note this asymmetry without resolving it. We'll come back to it.
A few caveats. This is self-report data, and self-report on something teachers know is contested almost certainly undercounts actual use. It's a single survey of 425 teachers — informative, not the last word. And Foundry10 is an advocacy-leaning research org, not a peer-reviewed journal. The numbers are a directional signal. (See our pillar post on the AI-in-admissions research base for a full caveat list across the studies in this space.)
Why teachers say they reach for AI
The 53% stress-reduction figure deserves to be taken seriously. US teachers write recommendation letters as uncompensated labor, often for dozens of seniors per cycle, in the same eight-week window where Common App deadlines, midterm exams, and college visits collapse on top of each other. A strong individualized letter takes 60 to 90 minutes. Multiplied by 30 students, that's a 30–45 hour off-the-clock project on top of a full teaching load.
Teachers in the Foundry10 sample described AI as an outline-and-phrasing assistant — a way to get from a brag sheet to a working draft faster, then revise from there. That's a reasonable way to use it.
The harder question is what the median AI-assisted rec letter actually looks like. Did the teacher use AI to outline and then rewrite in their own voice? Or did they generate a draft, swap in the student's name and two anecdotes, and call it done? The Foundry10 survey doesn't ask. The 31% headline includes both behaviors and treats them as equivalent — which they aren't, from the student's perspective.
The ethics double-standard, in numbers
Foundry10 asked both teachers and students to rate the ethics of two different AI uses on a 1–5 scale: students using AI on their own application essays, and teachers using AI on rec letters. The pattern is striking:
| Behavior | How teachers rate it | How students rate it |
|---|---|---|
| Students using AI on essays | 2.91 | 2.91 |
| Teachers using AI on rec letters | 3.26 | 2.91 |
(Foundry10, July 2024)
Read across the rows: teachers and students agree that students using AI on essays sits at 2.91. They disagree about teacher AI use — by 0.35 points on a 5-point scale, which is meaningfully large in survey terms. Teachers think their own use is more ethical than student use; students think they're equally ethical (or equally not).
Why might teachers rate their own use higher? A few candidate explanations:
- Self-serving bias. People generally rate their own behavior as more ethical than they rate others' identical behavior, a pattern well documented across professional ethics surveys.
- Volume framing. A teacher writing 30 letters faces different time pressure than a student writing one essay, and teachers may be implicitly weighting that.
- Output framing. A rec letter is a third-party document; an essay is first-person. Teachers may consider AI-assisted third-party text less identity-violating.
None of these explanations make the asymmetry go away. The part that should give teachers pause is the second column: students don't share the higher rating. From the student's chair, "my teacher used ChatGPT to write my rec letter" sits at the same ethics level as "I used ChatGPT to write my essay."
The 72% of teachers who say AI enables student cheating, sitting next to the 31% of teachers who used AI to write a rec letter, is the cleanest version of the double-standard. Foundry10 doesn't editorialize about it, but the numbers are right there.
Does an AI-assisted rec letter hurt the student?
This is the most uncertain part of the post, and we want to frame it honestly. There is no published study we're aware of that directly measures whether AI-assisted recommendation letters help or hurt the student in admissions. No A/B test where readers grade matched letters with and without AI assistance. No regression on admit rates as a function of letter AI-detection score.
But there's adjacent evidence worth thinking about.
The Cornell team's January 2026 paper found that classifiers trained on ~30,000 real Common App essays plus 87,696 synthetic LLM essays could separate human from AI text at F1 = 0.998 (Cornell, Jan 2026). They also identified specific lexical fingerprints — LLM-generated essays disproportionately favor abstract prompt-keywords like "challenge," "growth," "journey," and "resilience," while human essays lean on temporal and personal words like "year," "time," "friend," and "would." We've broken those findings down in the lexical-fingerprint study post.
There's no reason to think rec letters would have a meaningfully different fingerprint from essays. If anything, rec letters are easier for an LLM to generate well — they're a known genre with a conventional structure (relationship + competence + character + endorsement), exactly the kind of structured task LLMs excel at. An AI-drafted rec letter is likely to show the same tells as an AI-drafted essay: high abstraction, low specificity, formulaic transitions, the "challenge / growth / leadership" cluster.
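To make that concrete, here's a minimal sketch of what a crude keyword-cluster check could look like in Python. The two word lists are the clusters the Cornell paper reports; everything else (the filename, the single-ratio score, the idea of a standalone script) is our illustration, not the paper's actual feature set or classifier.

```python
# A toy lexical-fingerprint check. The two word lists come from the
# clusters reported in the Cornell paper; the rest (filename, the
# single-ratio score) is illustrative, not the paper's classifier.
import re

# Over-represented in LLM-generated essays, per the paper:
LLM_CLUSTER = {"challenge", "growth", "journey", "resilience"}
# Over-represented in human-written essays, per the paper:
HUMAN_CLUSTER = {"year", "time", "friend", "would"}

def llm_cluster_share(text: str) -> float:
    """Share of all cluster hits that come from the LLM-leaning list."""
    words = re.findall(r"[a-z']+", text.lower())
    llm_hits = sum(w in LLM_CLUSTER for w in words)
    human_hits = sum(w in HUMAN_CLUSTER for w in words)
    total = llm_hits + human_hits
    return llm_hits / total if total else 0.0

with open("rec_letter.txt") as f:  # hypothetical input file
    share = llm_cluster_share(f.read())
print(f"LLM-cluster share of keyword hits: {share:.0%}")
```

A real classifier works with far more than eight words, but even the toy version captures the direction of the finding: the more a letter leans on abstract prompt-keywords relative to concrete, temporal, personal ones, the more it reads as machine-drafted.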
If admissions readers learn to spot those tells — and there's good reason to think they will, given the perception research on AI-assisted essays — an AI rec letter becomes a liability. For whom? The student. The teacher's name doesn't go on the application; the student's does. A reader who concludes a recommender phoned it in may downgrade their estimate of how distinctive the student is to their teachers.
This isn't guaranteed. It's a structural risk: the fingerprint exists, AI rec letters likely carry it, and the consequences accrue to the student rather than the teacher.
What students can do
This is awkward to write and awkward to do, but worth saying clearly.
1. Pick recommenders who know you well. A teacher who can write from memory has less reason to reach for AI than one drawing a blank. Ask teachers who can name specific moments — the lab where you got stuck, the essay you rewrote four times, the question from March they still remember. Detail-density is the best protection against the AI fingerprint, and it comes from knowing the student.
2. Submit a brag sheet with very specific stories. Make yours specific in a way an LLM couldn't backfill — name the project, the date, the mistake, the conversation. The more granular detail the recommender has, the less generic the letter will sound regardless of how it's drafted.
3. You can ask. This is the most awkward suggestion and we don't recommend it lightly. You are within your rights to politely ask a recommender how they'll be drafting the letter, including whether they plan to use AI. Phrase it as concern about the application, not about them: "I've been reading about how admissions readers evaluate AI-assisted writing — would you be open to talking about how we can make the letter sound as much like your voice as possible?"
For more on the evolving etiquette around AI disclosure on the student side, see our note on whether the absence of an AI policy means anything is allowed.
What teachers might consider
Three things we'd ask teachers to consider before reaching for AI.
1. Use AI for outline and phrasing, not first draft. The student-specific moments — the actual stories, the actual character claims — should be in your own words. Use AI to smooth the connective tissue, not to invent the substance.
2. Preserve specifics in their original form. The 60-second story you remember about a student is the highest-value content in the letter. If you outsource that story to an LLM, the LLM will smooth the specifics into generalities. The detail dies. The detail was the point.
3. Remember the lexical fingerprint. "Resilience," "growth," "challenge," "journey," "leadership" — the LLM keyword cluster from Cornell will pull your letter toward exactly the words that signal AI to a trained reader (Cornell, Jan 2026). If you do use AI, edit those words out.
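For that last step, a few lines of Python can flag the cluster words in a draft before you send it. The word list is the Cornell cluster from the point above; the filename and the >>...<< markers are just our illustration.

```python
# Flag the Cornell LLM-keyword cluster in a draft letter so each hit
# can be rewritten by hand. Word list from the reported clusters; the
# filename and >>...<< markers are illustrative choices.
import re

FLAGGED = ["resilience", "growth", "challenge", "journey", "leadership"]
pattern = re.compile(r"\b(?:" + "|".join(FLAGGED) + r")\w*\b", re.IGNORECASE)

with open("letter_draft.txt") as f:  # hypothetical input file
    draft = f.read()

marked = pattern.sub(lambda m: f">>{m.group(0)}<<", draft)
print(marked)  # every flagged word shows as >>word<< for manual editing
```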
We'd add: the syllabus-side professor literature on AI policy suggests teachers have varied, thoughtful positions on student AI use. Holding yourself to the same line you hold students to is one defensible posture.
What schools and admissions offices are doing
Almost nothing, as far as we can tell. The vast majority of institutional AI policies — at the high school level and at the receiving college — focus on student-side AI use. Disclosure rules, attestation language, and honor-code applications all assume the student is the actor.
Recommendation letters live in a policy gap. The Common App and most institutional disclosure forms don't ask recommenders about AI use. We don't know of any college admissions office that has published a recommender-facing AI policy. Some have informally said reader teams are "alert to" AI patterns in letters, but that's anecdote, not policy.
For context on how few colleges have addressed AI in admissions at all — even on the student side where it's much more visible — see the data on most colleges having no AI admissions policy. The recommender-side gap is even wider. There's also an enforcement-side dynamic: many faculty have publicly said they're unwilling to police AI use through detection tools (see the trust-versus-detection post). It's not obvious those same faculty would welcome being asked to disclose their own AI assistance on letters. The asymmetry compounds.
What we'll update next
This post is a living document. Three specific things we're tracking and will update when they happen:
- Whether any college admissions office publishes a recommender-facing AI policy. As of May 2026, we're not aware of one. The first to do so will likely shape the norm for everyone else.
- Whether a study isolates the admissions-outcome effect of AI-assisted rec letters. The Cornell team's lexical-fingerprint methodology could be applied to a recommender corpus; we'll cover it the day it's published.
- Whether the Foundry10 31% number replicates or moves in a follow-up survey. If teacher AI use on rec letters jumped from 31% in 2024 to, say, 50% by 2026, that's a much louder story than the one this post is telling.
If you're a teacher, school counselor, or admissions reader with a perspective on this — especially data on what AI-assisted rec letters look like in practice — we'd genuinely like to hear from you.
See also: What the Research Says About AI in College Admissions · The Words That Make a College Essay Sound AI-Written · How Admissions Readers Evaluate AI-Assisted Essays · Most Colleges Have No AI Policy · Can You Use ChatGPT in College? Syllabus Policies by Discipline · Why Professors Refuse to Police AI Use