Who Uses ChatGPT for College Essays? Not the Poorest Kids
22% of under-$50K applicants used AI on essays vs. 40% of $75-100K (Foundry10). Lower-SES users are penalized 1.85x more per unit of AI use (Cornell).
The story everyone told themselves in late 2022 was that generative AI would democratize the college essay. A free tool. A patient tutor. A 24/7 ghost-coach for the 88% of applicants whose families could not write a $349-an-hour check to a private admissions consultant. ChatGPT, the argument went, would close the gap.
The data says it did the opposite. In a national survey of 523 US teen applicants, only 22% of students from households under $50,000 used AI on their personal application essays, compared with 40% of students from $75,000 to $100,000 households -- students in the upper bracket were nearly twice as likely to use AI as those in the lowest (Foundry10, July 2024). The applicants with the most to gain from a free essay assistant used it the least.
And among the lower-SES students who did use AI, a separate analysis of 81,663 Common App essays found something stranger: their essays showed a mean estimated LLM-use proportion (alpha-hat) of 0.102 versus 0.080 for higher-SES applicants -- 28% more reliance on the model among the students who could least afford the downside. The downside is real: per unit of estimated AI use, lower-SES applicants saw their admission odds drop 86%, versus 60% for higher-SES applicants. The penalty is 1.85x larger for the students with fee waivers (Cornell, Feb 2026).
Updated: May 10, 2026. We revise this post as new research emerges.
This post is the income-and-SES read of the AI-in-admissions evidence. It is the companion to our pillar synthesis of three major AI-admissions studies, and it is distinct from the reader-perception experiment (which is about how AI essays are scored, not who writes them). Here we are looking at one question: what happened to the equalizer narrative.
The income cliff in adoption
Foundry10's 2024 survey gives us the cleanest adoption numbers we have: it polled 425 high-school teachers and 523 teens aged 16-18 who had applied to college that cycle, fielded through Sago's national panel between February and March 2024 (Foundry10, July 2024). The headline numbers are stark.
| Household income | Used AI on personal essay |
|---|---|
| Under $50,000 | 22% |
| $50,000 to $75,000 | ~30% (bracket not broken out; overall sample average) |
| $75,000 to $100,000 | 40% |
That is not a small drift. Going from the lowest bracket to the upper-middle bracket nearly doubles the probability that a student used a generative AI tool on their personal statement. The overall adoption number across all incomes was 30% -- and even that is almost certainly an undercount, because it relies on self-report from teenagers who have just submitted essays under an honesty pledge. (More on the undercount caveat in the pillar post.)
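Worth pausing on the arithmetic here, because "probability" and "odds" come apart and the literature reports both. A quick check in Python, using only the two adoption rates reported above:

```python
# Adoption rates reported by Foundry10 (July 2024)
p_low = 0.22    # households under $50,000
p_high = 0.40   # households $75,000 to $100,000

# Relative probability: how many times more likely is AI use?
relative_risk = p_high / p_low                                 # ~1.82x

# Odds ratio: the same gap expressed on the odds scale
odds_ratio = (p_high / (1 - p_high)) / (p_low / (1 - p_low))   # ~2.36x

print(f"Relative probability: {relative_risk:.2f}x")
print(f"Odds ratio:           {odds_ratio:.2f}x")
```

Same two numbers, two different-sounding headline figures -- which is why this post sticks to the raw adoption rates.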
Why does adoption track income upward? A few candidates:
- Device and bandwidth access. ChatGPT requires a usable computer, a stable connection, and an hour of uninterrupted time.
- Awareness and exposure. The ChatGPT release happened in office Slack channels and dinner-table conversations long before it reached every classroom.
- Permission and modeling. A counselor at a well-resourced high school may walk students through brainstorming with AI; a counselor with a 500:1 caseload often does not.
- Existing essay support. Higher-income students often already had an editor, a parent, a tutor. AI gets added to a stack of help, not used in isolation.
The "AI as equalizer" framing assumed the bottleneck for lower-income students was lack of access to a writing helper. The data suggests the bottleneck was something else -- ambient familiarity, time, and trust in the tool -- and AI did not fix any of those.
Among users, lower-SES students lean on AI more
Here is where it gets uncomfortable. Among the students who did use AI, the Cornell team's second paper (looking at 81,663 essays from a single highly selective US engineering school over 2019-2024) found that lower-SES applicants -- proxied by Common App fee-waiver status -- had a higher mean estimated LLM-use proportion than their higher-SES peers: alpha-hat = 0.102 versus 0.080, a 28% gap, p < 0.001 (Cornell, Feb 2026).
The fraction of applicants in the "high use" tier (alpha-hat above 0.13, roughly equivalent to substantial AI involvement in drafting) was 22.7% among lower-SES applicants versus 18.7% among higher-SES applicants.
This is the inversion of the adoption story. Fewer lower-SES students use AI overall -- but the ones who do use it more heavily.
The most plausible read: a higher-SES applicant who used AI almost certainly had other support too -- a parent who edits, a paid coach who actively teaches restraint ("don't let it ghost-write, just brainstorm"). A lower-SES applicant who used AI may have been using it as their primary essay collaborator. Different relationship to the tool entirely.
The Foundry10 use-case data is consistent: of AI-using students, 50% used it for brainstorming, 47% for outlining, 33% for phrasing, 32% for first drafts, 20% for final drafts, 10% for translation (Foundry10, July 2024). Lighter touches cluster at the top; heavier touches tail off. Coaching teaches the lighter touches. Students with the least access to coaching are most likely to slide into the heavier ones.
Why heavier AI use hurts lower-income applicants more
Now stack that on the outcome side. The Cornell Feb 2026 paper estimates how admission odds change per unit of alpha-hat. They find:
- Higher-SES applicants: OR = 0.40 per unit of alpha-hat (a 60% reduction in admission odds)
- Lower-SES applicants: OR = 0.14 per unit (an 86% reduction)
The penalty for AI use, per unit, is about 1.85x larger for lower-SES applicants. Combined with the fact that lower-SES applicants who use AI use it more heavily, the cumulative effect on admit rates is sharper than the per-unit number suggests.
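To see what that compounding looks like, here is a back-of-the-envelope sketch. It assumes the reported odds ratios act multiplicatively on the log-odds scale (the standard logistic-regression reading) and plugs in the mean alpha-hat values from above -- an illustration of the arithmetic, not a re-analysis of the Cornell data:

```python
import math

# Reported per-unit odds ratios (per full unit of alpha-hat, Cornell Feb 2026)
OR_HIGH_SES = 0.40
OR_LOW_SES = 0.14

# Reported mean estimated LLM-use proportions among AI users
MEAN_ALPHA_HIGH = 0.080
MEAN_ALPHA_LOW = 0.102

def odds_multiplier(or_per_unit: float, alpha_hat: float) -> float:
    """Odds multiplier at a given alpha-hat, assuming the per-unit OR
    compounds multiplicatively on the log-odds scale."""
    return math.exp(alpha_hat * math.log(or_per_unit))

high = odds_multiplier(OR_HIGH_SES, MEAN_ALPHA_HIGH)  # ~0.93 -> ~7% odds cut
low = odds_multiplier(OR_LOW_SES, MEAN_ALPHA_LOW)     # ~0.82 -> ~18% odds cut

print(f"Higher-SES at mean use: odds x{high:.3f} ({1 - high:.1%} reduction)")
print(f"Lower-SES at mean use:  odds x{low:.3f} ({1 - low:.1%} reduction)")
```

At typical observed use levels, the lower-SES odds reduction (~18%) is roughly two and a half times the higher-SES one (~7%): the per-unit gap and the usage gap multiply.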
The aggregate result: the SES admit gap at this school widened by roughly 31% after ChatGPT's release. Pre-ChatGPT (2019-2022), higher-SES applicants were admitted at 23.6% versus 12.9% for lower-SES (a 10.7 percentage-point gap). Post-ChatGPT (2023-2024), it widened to 26.3% versus 12.3% (a 14.0 percentage-point gap). The difference-in-differences interaction term yields beta = -0.170, OR = 0.844, p = 0.025 -- meaning lower-SES applicants saw an additional 15.6% reduction in admission odds in the post-ChatGPT era beyond what the pre-existing trend would have predicted (Cornell, Feb 2026).
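Every figure in that paragraph falls out of two lines of arithmetic on the reported numbers; a quick check:

```python
import math

# Difference-in-differences interaction term (Cornell, Feb 2026)
beta = -0.170
odds_ratio = math.exp(beta)       # ~0.844
extra_reduction = 1 - odds_ratio  # ~15.6% additional odds reduction

# Admit-rate gap, in percentage points
gap_pre = 23.6 - 12.9             # 10.7 pp (2019-2022)
gap_post = 26.3 - 12.3            # 14.0 pp (2023-2024)
widening = (gap_post - gap_pre) / gap_pre   # ~31%

print(f"OR = exp({beta}) = {odds_ratio:.3f}")
print(f"Additional odds reduction for lower-SES: {extra_reduction:.1%}")
print(f"SES admit gap widened by {widening:.0%}")
```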
A few critical caveats before anyone runs with these numbers:
- The DiD is descriptive, not causal. The authors say so explicitly -- their parallel-trends assumption fails in 2022. COVID test-optional and the 2023 SFFA v. Harvard ruling are real confounds. The right framing is "the gap widened in the same window AI use exploded," not "ChatGPT caused the gap to widen."
- One school, one program type. A single highly selective US engineering school. The same Cornell team is responsible for both arXiv papers in this batch, so they are not independent corroboration. Findings may not generalize to liberal arts colleges, less selective schools, or graduate admissions.
- Fee waiver is an imperfect SES proxy. It conflates household income, first-generation status, and high-school resources.
- Only ~7% of the SES penalty is explained by surface stylometry. The widening gap is not simply that lower-SES essays "look more AI-written." Something else is doing the work in reviewer judgment.
Why the "AI levels the field" theory failed
If you held the equalizer view in 2022, what does the actual data force you to update? A few overlapping mechanisms have to be on the table:
Coaching teaches restraint, and coaching is paid. The most important AI-use skill, per the Foundry10 data, is not using AI for the parts that matter. Stop at the brainstorm. Stop at the outline. Hand-write the draft. That instruction sounds obvious to someone who has paid an admissions consultant $200 an hour. It is not obvious to a 17-year-old alone with a chatbot at midnight. Wealth buys the meta-instruction about how to use the tool, not just access to the tool.
Cultural and linguistic mismatch. The Cornell Jan 2026 paper found that LLM essays disproportionately favor abstract prompt-keywords -- "challenge," "growth," "journey," "resilience" -- while human essays favor temporal and personal words: "year," "time," "friend," "would" (Cornell, Jan 2026). We unpack the lexical fingerprint in the words that make essays sound AI-written. A student whose authentic story is rooted in family, place, and the texture of their actual week loses something irreplaceable when AI rewrites it into the abstract-noun register.
Reviewer skepticism falls hardest on thin paper trails. Lower-SES applicants often have less of the ambient evidence that backs up an essay's claims: fewer named research mentors, fewer coded internships, fewer prestige extracurriculars. When a reader has a strong essay and a thin activity list, "this writing seems too polished" is a more available skeptic-thought than when the same essay sits next to a track record that vouches for it.
Identity prompting backfires. When students try to use AI to write "as a first-gen Latina applicant," the Cornell Jan 2026 paper found identity-prompted LLMs over-use demographic terms verbatim -- "parent," "Asian," "first-generation," "immigrant" -- in ways the actual demographic group does not. Identity prompting for the Black subgroup actually reduced alignment with real Black-applicant writing (t = 2.327, p = 0.020). See why identity prompting fails. The students most likely to want this kind of help are the ones the tool serves worst.
The convergence is bleak: AI is most useful as a brainstorm partner and most dangerous as a ghost-writer; the students most likely to use it as a brainstorm partner are the ones who already had brainstorm partners; and the students most likely to lean on it as a ghost-writer are the ones whose voice is hardest for the model to mimic and whose writing carries the least benefit-of-the-doubt from a reader.
What lower-income applicants can actually do about it
This is where most takes on this data go bleak. We are going to be specific instead. Here is the practical, evidence-grounded version.
1. Use AI for brainstorming and outlining only. This is the lightest-touch use case in the Foundry10 data and the one with the smallest stylometric signature in the Cornell analysis. Ask ChatGPT to help list possible essay topics from a journal entry, or outline an essay you have already roughly drafted in your head. Stop there. Write the prose by hand.
2. Do not use AI to write a draft and then "revise it to sound like you." Once an LLM has set the draft, your revisions inherit the model's word choices, sentence shapes, and abstract-noun habits. The lexical fingerprint is sticky. Writing your own draft, then asking AI for line-level edit suggestions you accept or reject one by one, preserves voice much better. We unpack that workflow in our comparison of ChatGPT versus real college essays.
3. Get one trusted human reader if you possibly can. A teacher, a librarian, a college access program counselor, an older sibling. You do not need a $349-an-hour consultant; you need one human who reads your draft and tells you which paragraph sounds the most like you. That single read is worth more than ten AI revision passes.
4. Lean on the specific. The Cornell finding that humans use temporal/personal words ("year," "time," "friend") and AI uses abstract prompt words ("challenge," "growth," "resilience") is a free editing rubric. Search your draft for the abstract words and replace each with a specific moment, date, or person -- a minimal script for that search follows this list.
5. Do not dumbcraft. Lower-SES students sometimes overcorrect by deliberately weakening prose to "look less AI." That is the dumbcrafting trap and it makes essays worse and more likely to be flagged, not less.
6. Think carefully about disclosure. If you used AI for brainstorming or outlining, most schools' policies (and the Common App's terms) do not require disclosure. If you used it more substantially, the question is not "should I tell them?" -- it is "is this still my essay?" See our disclosure decision guide.
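On point 4, the search is trivially scriptable. A minimal sketch: the word list below contains only the four examples quoted in this post (the Cornell papers report a longer set), and `draft.txt` stands in for wherever your essay lives.

```python
import re

# Abstract prompt-words over-represented in LLM essays -- just the four
# examples quoted in this post, not the papers' full list.
ABSTRACT_FLAGS = {"challenge", "growth", "journey", "resilience"}

def flag_abstract_words(draft: str) -> dict[str, int]:
    """Count how often each abstract prompt-word appears in a draft.
    Exact-match only; plurals and inflections would need stemming."""
    words = re.findall(r"[a-z']+", draft.lower())
    return {w: words.count(w) for w in ABSTRACT_FLAGS if w in words}

with open("draft.txt") as f:  # your essay draft, plain text
    for word, n in flag_abstract_words(f.read()).items():
        print(f"'{word}' x{n} -- swap each for a specific moment or person")
```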
None of these close the SES admit gap by themselves. But the gap that emerges from heavy AI use is downstream of decisions a student can actually make. The brainstorm-only workflow, executed well, costs zero dollars and meaningfully reduces the per-unit penalty Cornell measured.
What we'll update next
This is a fast-moving evidence area. The questions we are watching:
- Does the SES gap widen further with the 2025 application cycle, or stabilize as schools and students adapt? The Cornell paper ends with 2024 data. The 2025 cycle is the first where ChatGPT was a default tool rather than a novelty.
- Do the income-cliff adoption numbers hold in a non-engineering, non-elite-selective sample? Foundry10 is national, but Cornell's outcome data is one engineering school. We need outcome data from a liberal arts college and a less-selective university to know whether the 1.85x penalty differential is real or institution-specific.
- What share of the SES penalty is reviewer skepticism about thin paper trails versus genuine voice mismatch? Cornell rules out simple stylometric explanations (~7% accounted for). The remaining mechanism is the open question, with very different policy implications depending on the answer.
When fresh data lands on any of these, this post gets revised.
See also: What the research says about AI in college admissions -- our cross-study synthesis pillar. How admissions readers evaluate AI-assisted essays -- the perception experiment. Why "I'm First-Gen Latina" doesn't make ChatGPT sound like you -- on identity prompting.
Quick AI Check
See if your essay will pass university AI detection in seconds.