Colleges Have Heard of ChatGPT. They Haven't Heard of Claude.
ChatGPT appears in 64 admissions policies. Claude appears in 0. How universities map the AI landscape — and what they miss.
When U.S. universities write down what students can and cannot do with AI in admissions, there is exactly one tool they have learned to say out loud.
It is ChatGPT.
Across 174 schools in our AI admissions policy dataset, the word "ChatGPT" appears in policy text for 64 schools. The word "Claude" appears in policy text for zero. Gemini appears twice. Bard once. Copilot once. Grammarly once. Llama once. Every other major AI tool a student might plausibly open this admissions cycle is a ghost in the policy corpus.
That gap is not a footnote. It is the most revealing pattern in the whole dataset, because it tells you how universities actually think about the AI landscape — and how often they don't.
The headline finding
- ChatGPT is named in 64 of 174 admissions policies (37%)
- Claude, the second-most-used AI assistant in the U.S. consumer market, is named in 0
- 110 of 174 schools (63%) avoid naming any specific tool, hiding behind "generative AI" / "AI tools" language
- 1 school still names "Bard" — a product Google renamed Gemini in February 2024
The full tool-naming tally
Here is every named AI tool that appears anywhere in the trigger_quotes or notes fields of our 174-school dataset, counted with a case-insensitive word-boundary match (a minimal sketch of that counting logic follows the table):
| Tool | Schools naming it | Share of 174 schools |
|---|---|---|
| ChatGPT | 64 | 36.8% |
| Turnitin | 8 | 4.6% |
| Gemini | 2 | 1.1% |
| Bard | 1 | 0.6% |
| Copilot | 1 | 0.6% |
| Grammarly | 1 | 0.6% |
| Llama | 1 | 0.6% |
| Claude | 0 | 0% |
| Generic "AI detector" | 0 | 0% |
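For readers who want to reproduce the tally, here is a minimal sketch of that counting logic. The field names trigger_quotes and notes come from the dataset description above, but the CSV filename and overall layout are illustrative assumptions, not the published schema. Each school counts at most once per tool, which is why the table's numbers are school counts rather than raw occurrence counts.

```python
import csv
import re

# The eight tool names tallied in the table above.
TOOLS = ["ChatGPT", "Claude", "Gemini", "Bard", "Copilot",
         "Grammarly", "Llama", "Turnitin"]

def count_tool_mentions(rows, fields=("trigger_quotes", "notes")):
    """Count how many schools name each tool at least once in any of
    the given free-text fields, using a case-insensitive
    word-boundary match. Each school counts at most once per tool."""
    patterns = {t: re.compile(rf"\b{re.escape(t)}\b", re.IGNORECASE)
                for t in TOOLS}
    counts = {t: 0 for t in TOOLS}
    for row in rows:
        text = " ".join(row.get(field) or "" for field in fields)
        for tool, pattern in patterns.items():
            if pattern.search(text):
                counts[tool] += 1
    return counts

# Hypothetical filename; the real dataset's layout may differ.
with open("ai_admissions_policies.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

counts = count_tool_mentions(rows)
for tool, n in sorted(counts.items(), key=lambda kv: -kv[1]):
    print(f"{tool:10} {n:3}  {n / len(rows):.1%}")
```

The word-boundary anchors matter: without them, a substring match would count "Bard" inside an unrelated word, and with them "ChatGPT" still matches variants like "ChatGPT-4" because the hyphen sits on a boundary.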
Eight tools are named at all, and only seven register a single mention. ChatGPT alone carries 64 of the 78 school-level mentions. The rest live in the long tail of one or two callouts apiece, and most of those callouts come from a single school. Brown is the only institution that names Bard, Gemini, and Llama in the same attestation. Colorado School of Mines is the only other school that names Gemini. Michigan is the only school that names Grammarly. Florida is the only school that mentions Microsoft Copilot, and only in passing about its own enrolled-student tools, not as an admissions rule.
The shape of this list is not "universities are tracking the AI tool landscape." It's "universities have heard of ChatGPT."
What a tool-naming oligopoly says about how universities think
Three things are happening at once when a policy office writes "do not use ChatGPT" instead of "do not use generative AI":
1. ChatGPT is being used as a synecdoche. When an admissions dean writes "ChatGPT" in 2026, they almost always mean all chat-based AI — including Claude, Gemini, Copilot, and tools the dean has never heard of. The word is doing double duty as both a product name and a category label, the same way "Kleenex" stands in for tissue or "Xerox" for photocopying.
2. Most schools haven't done the work to think model-by-model. 63% of schools (110 of 174) skip product names entirely and just write "generative AI," "AI tools," or "artificial intelligence." That's abstraction by default, often abstraction by avoidance. Policy lawyers prefer category language because it is harder to be technically wrong about. Communications teams prefer it because it doesn't pin them to today's market leader. The cost is clarity for the student trying to figure out whether their actual workflow is okay.
3. The schools that do name a tool overwhelmingly pick the one their applicants have heard of. ChatGPT is the brand a 17-year-old applicant has seen on the news, in their teacher's complaints, and on their phone's home screen. Naming it is a communication choice as much as a policy choice. It is also a freshness signal — schools writing in 2025 still mostly write "ChatGPT" because that is still the name students recognize, even though Claude's user base has been growing rapidly and the enterprise market has tilted noticeably toward Anthropic.
Why the Claude blank matters
Anthropic launched Claude in March 2023. By early 2026, third-party trackers put Claude's monthly active user count at roughly 18.9 million on the web app, about 4.5% of global chatbot market share and a substantial slice of the U.S. student market. Claude is not a fringe research curiosity. It appears on most "best AI tools for students" lists, typically right behind ChatGPT and alongside Grammarly, Gemini, Microsoft Copilot, NotebookLM, and Perplexity.
In that landscape, the policy corpus saying "Claude" zero times is not neutral. It creates three specific risks for applicants:
The literalist trap. A student reads a policy that says "do not use ChatGPT." They are using Claude. They reason, accurately and naively, that their tool is not on the list. They submit the essay. The school's intent — almost certainly to ban all chat AI for drafting — was never communicated in language the student could verify against their workflow.
The Office 365 trap. A growing share of high schools and universities provide Microsoft Copilot to their students through institutional Office 365 licenses. The IT department hands it out as a sanctioned tool. The admissions office, working from a separate policy, doesn't list it as a banned one. A student opening Copilot inside their school-issued Word document has no plausible way to know whether that violates the admissions policy at the same university.
The "they didn't say Gemini" trap. A Google Workspace user with a free Gemini student plan is in a similar bind. Two of our 174 schools name Gemini. The other 172 leave it to inference.
Universities are not deliberately exploiting these gaps. They are mostly just not paying attention to model proliferation. But the burden of inference falls entirely on the applicant.
The "Bard" anachronism
One of the 174 schools still names "Bard" in its attestation language. Brown's master's-program AI pledge reads, in part, "including but not limited to ChatGPT, Gemini, Bard, and Llama."
Google renamed Bard to Gemini on February 8, 2024. The product called Bard no longer exists. A policy that bans "Bard" in 2026 is enforceable in spirit but vestigial in letter — it is naming a product that has been off the market for more than two years. That is not a fatal flaw. It is a freshness signal: this language was written in late 2023 or early 2024 and has not been touched since.
For context: most of the dated quotes in our dataset were written in 2025, not 2023 or 2024. Schools that still name "Bard" are at the lagging edge of that wave. They wrote their policy when Bard was the second-most-named AI assistant in the press, and they haven't revisited it since the product was renamed.
It is also the reason policy lawyers prefer abstract category language: product names age. "Generative AI" doesn't.
What "generative AI" actually means in policy text
When a school writes "no generative AI" or "no AI tools" without naming products, the legal intent almost always covers the full set: ChatGPT, Claude, Gemini, Copilot, Llama-based tools, Perplexity, character-AI assistants, the Grammarly Generative AI feature, and any future entrant. Category language is a wider net than product language by design.
That intent, however, is not visible to the applicant.
A 17-year-old reading "do not use generative AI" has to do the categorization work themselves. They have to know that "generative AI" includes Claude. They have to know that Microsoft Copilot is a generative AI tool and not just a smarter version of Word's spellcheck. They have to know that "Apple Intelligence rewriting your sentence" probably counts. They have to know that Grammarly's "Improve It" button now invokes a large language model rather than a rule-based grammar checker.
Some of these are not obvious to the average applicant. They are barely obvious to the average policy author.
The result: the policies that are most defensible from the school's side — broad, abstract, category-based — are the policies that ask the most of the student's interpretive judgment. That is the wrong place to put the work.
What schools should do
There are two coherent ways to write an AI tool name into policy. The current pattern, "ChatGPT and nothing else," is neither.
Option A: Use abstract category language consistently, but define it. "Generative AI tools, including but not limited to large language models, AI writing assistants, and AI image generators" is broader and more durable than "ChatGPT." If a school chooses this route, it should explicitly note that the category is intended to cover all such tools regardless of brand — including Claude, Gemini, Copilot, and any successor product.
Option B: Enumerate the major tools. "ChatGPT, Claude, Gemini, Microsoft Copilot, Perplexity, and similar large language model assistants" is longer but communicates intent clearly to a teenage applicant. Brown's master's pledge is the closest example in the dataset, and even it is missing Claude.
The current modal pattern — naming ChatGPT, naming nothing else, and trusting the student to fill in the rest — is a vocabulary trap. It is comprehensible to the policy author and ambiguous to everyone else.
What students should do right now
Until policies catch up, treat the rule on the page as broader than its words.
- Any rule about "ChatGPT" is, in practice, a rule about all chat-based AI. Don't try to thread the needle by switching to Claude or Gemini just because they aren't named. The intent is the rule.
- If a policy says "no generative AI," assume it covers Apple Intelligence's rewrite, Grammarly's Generative AI feature, Microsoft Copilot in Word, Google's "Help me write" in Docs, and any browser plug-in that finishes your sentences. All of those are generative AI by the working definition.
- When in doubt, check the school's individual policy page. Most schools' written policies say more than the public summary does, and most policies are linked from each school's listing in our AI policies directory. The same school's policy may be one of the stricter ones we ranked — or one of the silent majority that has written nothing down at all.
- If the school is one of the few that actually use AI to read essays, your choice of writing tool matters less than your choice of voice. Authenticity is the signal those systems are calibrated to look for.
- Treat AI tool naming as a freshness signal for the policy itself. A policy that names "Bard" was written in 2023–2024. A policy that names ChatGPT and Gemini together was written in 2024–2025. A policy that uses only abstract category language could have been written anytime. Knowing when a policy was last touched tells you something about whether the school is actively engaged with the problem (a toy sketch of this dating heuristic follows this list).
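As a toy illustration, here is a minimal Python sketch of that dating heuristic. The date windows encode this article's rules of thumb, not verified facts about any individual school, so treat the output as a hint rather than a finding.

```python
import re

def _mentions(text: str, tool: str) -> bool:
    # Case-insensitive word-boundary match, mirroring the tally above.
    return re.search(rf"\b{re.escape(tool)}\b", text, re.IGNORECASE) is not None

def estimate_policy_vintage(policy_text: str) -> str:
    """Apply the article's rule-of-thumb dating heuristic: the tool
    names a policy uses hint at when its language was last touched.
    The date windows are editorial rules of thumb, not ground truth."""
    if _mentions(policy_text, "Bard"):
        return "likely 2023-2024, not revised since the Bard-to-Gemini rename"
    if _mentions(policy_text, "ChatGPT") and _mentions(policy_text, "Gemini"):
        return "likely 2024-2025"
    if _mentions(policy_text, "ChatGPT"):
        return "2023 or later; tool names alone cannot narrow it further"
    return "abstract category language only; undatable from tool names"

# Brown's master's-program pledge (quoted earlier) names Bard:
print(estimate_policy_vintage(
    "including but not limited to ChatGPT, Gemini, Bard, and Llama"
))  # -> likely 2023-2024, not revised since the Bard-to-Gemini rename
```

The sample input is the Brown pledge language quoted in the "Bard" section; because it names Bard, the heuristic dates it to the 2023–2024 wave, which matches the article's reading.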
The bigger picture
The tool-naming oligopoly is the closest thing the dataset has to a fingerprint of intent. Schools that name ChatGPT are doing the minimum work to communicate clearly. Schools that name multiple tools are acknowledging that ChatGPT is not the entire market. Schools that name "Bard" in 2026 stopped updating two years ago. Schools that say only "generative AI" are protecting themselves with abstraction.
None of them name Claude.
That is the cleanest single signal that the policy literature is not yet keeping pace with the tool literature. The first wave of schools to write "ChatGPT, Claude, and Gemini" together in an attestation will probably show up during the 2026–2027 cycle, and the rest will follow. Until they do, applicants are stuck mapping the abstract language of "generative AI" onto a tool landscape the policy writers haven't named. The students using Claude today are the canary for that gap.
Our AI policy directory tracks the rules at 174 U.S. universities. Our methodology page documents the rubric we use to classify each one — including how we decided what counts as a "tool mention." For more on what the policy corpus reveals about how universities think about AI, see our analysis of which colleges use AI to read essays, the AI detection tools colleges actually use, how top-10 colleges check for AI, and our breakdown of the Common App's own AI fraud policy.