
Georgia Tech Wrote the First University AI Admissions Policy

Rick Clark's 2023 blog post is the earliest dated AI essay policy in our 174-school survey. Here's what Georgia Tech got right two years early.

Nirmal Thacker, CS, Georgia Tech · Cerebras Systems AI · May 13, 2026 · 11 min read


On July 27, 2023, Rick Clark — then Director of Undergraduate Admission at Georgia Tech — published a blog post titled "Seniors, Can We ChatGPT?" In it, he told applicants something almost no other admissions office was willing to say out loud at the time: yes, you can use ChatGPT. Just don't paste it into your essay.

That post is the earliest dated AI-specific admissions guidance in our 174-school university AI policy dataset. Not the earliest "we're thinking about it" statement. Not the earliest classroom AI rule retrofitted to applications. The earliest piece of admissions-authored writing that actually told applicants how to use an AI tool by name.

It still reads well in 2026. That is the more interesting fact.

The first-mover claim, with receipts

We tracked dated admissions-policy quotes across every school in our /ai-policies methodology. The earliest dated AI-specific source we could find at any university is Georgia Tech's "Seniors, Can We ChatGPT?" post, dated 2023-07-27. Howard University has a 2018 quote in the dataset, but it's a generic "the essay is yours" line — not AI-related.

The next-earliest dated AI policies came roughly a month later, and then trickled in over the rest of 2023:

| Date | School | Posture |
|---|---|---|
| 2023-07-27 | Georgia Tech | L2 (line-level help OK) |
| 2023-08 | UC Santa Cruz | L2 with attestation |
| 2023-08-24 | NC State | L3 (brainstorm only) |
| 2023-09-22 | Reed College | various |
| 2023-10-13 | UC Riverside | various |
| 2023-10-23 | Dartmouth | L3 |
| 2023-11-01 | UC Irvine | L2 |

ChatGPT launched in November 2022, running on GPT-3.5. Most universities responded to it on classroom syllabi in spring 2023 and then waited another full admissions cycle before saying anything about essays. Georgia Tech's office wrote it down in July 2023, less than nine months after the tool became a household name.

Even within the early wave, Clark's post stands out for two reasons. It treats AI as a writing tool with appropriate uses, not as a categorical threat. And it names the tool — ChatGPT, specifically — at a moment when almost everyone else was using polite abstractions like "generative AI." In our dataset, only 64 of 174 schools (37%) ever name ChatGPT by name. Most still hide behind "AI tools."

What Clark actually wrote in 2023

The "Seniors, Can We ChatGPT?" post is short. The argument has three moves, and the language has held up.

Move one: AI is a useful tool, and admissions officers know it.

"AI tools can be powerful and valuable in the application process when used thoughtfully."

Move two: the line is the line. Don't paste.

"you should not copy and paste directly out of any AI platform or submit work that you did not originally create"

Move three — the part most schools still won't write down — is a positive frame for what AI is for:

"Use it to brainstorm, edit, and refine your ideas."

The post even handles the deeper objection. ChatGPT, Clark writes, "can write an essay or supplemental response for you." But asked whether the result will "have any personal style, unique details, valuable specifics, or soul," he answers: "No." The grammar is "impeccable." The content is "sanitized and relatively boring."

This is the unusual move. Most admissions offices in 2023 were trying to scare students off AI by appealing to integrity. Clark argued from craft: AI prose is fine; you are interesting; the gap between those two facts is the whole point of an essay.

The L2/D0/E0 stance — and why it's rare among tech schools

Under our policy rubric, Georgia Tech classifies as L2/D0/E0: line-level AI editing is permitted; no disclosure is required; no enforcement is stated. Sub-paragraph polishing, idea generation, and resume framing for the activities section are all explicitly allowed. Copy-pasting AI output into the application is not.

That posture is rare among the country's tech-named flagships. In the dataset, 13 universities carry "Tech," "Institute of Technology," "Polytechnic," or "Mines" in their name. Only 5 of the 13 have any explicit AI admissions policy at all:

| School | Permission level | Has explicit AI policy? |
|---|---|---|
| Georgia Tech | L2 / D0 / E0 | Yes — since 2023 |
| Caltech | L2 / D3 / E1 | Yes |
| Carnegie Mellon | L2 / D0 / E0 | Yes |
| Colorado School of Mines | L2 / D0 / E0 | Yes |
| Olin College of Engineering | L2 / D0 / E0 | Yes |
| MIT | L0 / D0 / E0 | No |
| Stanford | L0 / D0 / E0 | No |
| Virginia Tech | L0 / D0 / E0 | No |
| RPI, RIT, NJIT, Stevens, WPI, UT Austin | L0 / D0 / E0 | No |

The schools applicants assume have the loudest opinions about AI — MIT, Stanford, Virginia Tech, RPI, UT Austin — have published nothing admissions-specific as of 2026. They sit in the silent majority along with about 70% of the rest of the dataset.

Inside the small group of tech-flagships that did write something, only Georgia Tech and Caltech wrote anything at length. They picked different postures. Caltech is L2/D3/E1 — same permission as GT, plus an AI-specific attestation and a soft review signal. Georgia Tech is the only L2/D0/E0 tech-named flagship in the country: it permits AI line-editing, declines to make students sign a pledge about it, and declines to announce any enforcement mechanism.

In context, the choice reads as a values statement. Caltech says: use AI, but swear to us how. Georgia Tech says: use AI, and write your own essay. Neither is obviously right. But for an institution that admits roughly 17% of applicants from a national pool that lives on AI tools every day, the GT version asks for less performance and more honest work.
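The L/D/E codes used throughout this piece are just a three-axis classification, which makes posture comparisons mechanical. Here is a minimal sketch of how the tech-flagship rows above could be encoded and filtered — the `Posture` class and field names are illustrative, not the dataset's actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Posture:
    """Hypothetical encoding of the rubric: L = permission level,
    D = disclosure requirement, E = stated enforcement signal."""
    school: str
    L: int
    D: int
    E: int

# Rows from the tech-flagship table above (explicit-policy schools
# plus a few silent ones for contrast).
tech_flagships = [
    Posture("Georgia Tech", 2, 0, 0),
    Posture("Caltech", 2, 3, 1),
    Posture("Carnegie Mellon", 2, 0, 0),
    Posture("Colorado School of Mines", 2, 0, 0),
    Posture("Olin College of Engineering", 2, 0, 0),
    Posture("MIT", 0, 0, 0),
    Posture("Stanford", 0, 0, 0),
    Posture("Virginia Tech", 0, 0, 0),
]

# Schools sharing GT's posture: line-level editing allowed,
# no attestation required, no enforcement announced.
l2_d0_e0 = [p.school for p in tech_flagships if (p.L, p.D, p.E) == (2, 0, 0)]
print(l2_d0_e0)
```

Filtering on the full tuple, rather than on `L` alone, is what separates Georgia Tech's stance from Caltech's: both are L2, but only one drops the attestation and the enforcement gesture.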

Two cycles, same line

The other thing Clark's office got right is continuity. A surprising number of universities published a single AI statement in 2023 or 2024 and never returned to the topic. Some have quietly dropped their pages. Georgia Tech kept restating its stance.

On September 10, 2025, Clark — now Executive Director of Strategic Student Access — published "What if Your College Essay is All About YOU?" The relevant line:

"you may lean on ChatGPT for brainstorming or initial idea generation, but your voice, your thoughts, your style, your convictions will be what is most important and dominant."

That is, almost word for word, the position from July 2023. The Application Review page on admission.gatech.edu carries the same Statement on AI. Same authorial voice. Same posture. Same named tool.

A documented two-cycle continuity trail is unusual in the dataset. Many schools' AI policies live on a blog post that could be quietly unpublished, and three of the schools we tracked have their only AI-policy citation on a blog with no permanent admissions page. Georgia Tech has both: a permanent Statement on AI on the Application Review page and a two-year editorial trail on the admission blog.

What other tech schools didn't do

The contrast is not abstract. Here is what Rick Clark's office did, and what most peers chose not to do:

  1. Wrote it down on a public page. GT's AI guidance sits on the Application Review page that every prospective student lands on — not buried inside an academic integrity policy or a student conduct manual. Compare this to schools whose AI rules live three clicks deep on /student-conduct/honor-code/ai/. Under our rubric source rules, academic integrity pages don't even qualify as admissions policy unless they reference applications by name.

  2. Named the tool. "ChatGPT" appears in both the 2023 and 2025 posts. Across the 174-school dataset, ChatGPT is named by only 37% of schools. Naming a tool is a small thing that shifts a policy from posture to instruction.

  3. Wrote the positive use. Most schools' AI policy is a list of prohibitions. GT's is a list of allowed uses with one prohibition attached. The structure tells the applicant what to do, not just what to avoid.

  4. Declined to require attestation. Among the 14 D3 schools that require an AI-specific pledge or checkbox, several are L4 (banned). Four — Caltech, UC Berkeley, UC Santa Cruz, UCLA — are L2. Georgia Tech is L2 without attestation. That's an honest acknowledgement that an attestation about AI use, in 2026, is mostly theater: students aren't going to certify away the everyday tool they use to draft texts to their parents.

  5. Trusted students enough not to announce enforcement. GT is E0 — no enforcement signal stated. The school neither claims to run AI detectors nor performs an authority gesture about extra writing samples. There is no Turnitin AI detection theater in the language. It says: we trust your final submission to be yours, and we will read it accordingly.

None of those moves is heroic. Each is the kind of choice that requires an admissions office to spend an afternoon writing instead of an afternoon hedging. Most offices, two and a half years later, still haven't.

What GT got wrong (or hasn't fixed yet)

This is not a hagiography. A few honest gaps.

The Statement on AI on the Application Review page is undated — useful for applicants now, but less useful as an evidentiary record of what GT said when. The two anchor blog posts carry dates; the application page itself does not.

The policy is undergraduate-only. GT's graduate admissions pages carry no AI-specific guidance, which is a pattern across most of the dataset — graduate admissions is 21 percentage points more silent on AI than undergraduate admissions — but it's still a gap. Doctoral applicants writing a statement of purpose for the College of Computing get less explicit guidance than first-year applicants writing the Common App essay.

And the post's framing of AI prose as "sanitized and relatively boring" is a 2023 reading. Two and a half years on, the line between "obviously AI-shaped" and "competently human" has narrowed enough that voice-as-detection is a less reliable instinct than Clark suggested. GT's underlying rule still works. The rhetorical premise has aged a little faster than the policy.

None of this changes the first-mover finding. It's worth flagging because the post is being held up as a model, and a model has limits.

Why this matters for applicants

If you are reading this because you are applying to Georgia Tech: the rule is the same one Clark wrote down in 2023. Use AI to think, edit, and refine. Do not paste AI output into your application. The Activities section can be drafted with AI help. The essay should pass through your own cognition before it hits the submission page.

If you are reading this because you are applying somewhere else: the GT framework is a useful default. Most schools that wrote a policy later wrote a stricter one — the L1 permission tier has nearly collapsed, and silent schools that break their silence almost always pick L3 or L4. GT's L2 stance is a reasonable centerline. If you write to that standard everywhere, you will be within most schools' rules.

If you are reading this because you compare schools' AI policies professionally, the Georgia Tech file is worth keeping in your back pocket. The pattern — name the tool, write the positive use, skip the attestation, hold the line across cycles — is what good policy looks like. Not aspirational, just specific. Specific enough that an 18-year-old reading it at 11pm in October knows what to do.

You can read the full breakdown at /ai-policies/georgia-institute-of-technology, see how it compares against the Ivy League's eight different stances or the strictest AI policies in the country, or browse the full 174-school directory.

Rick Clark wrote his post in July 2023 and called the question "Seniors, can we ChatGPT?" The answer he gave was the rare honest one: yes, carefully, here is how. Two and a half years later, the answer is still the most useful one in the dataset. That is what calling it early looks like.


GradPilot maintains the largest open-source university AI admissions policy dataset, covering 174 institutions with a documented classification rubric. The dataset is published under CC BY-NC 4.0 at gradpilot/university-ai-policies. See our methodology for source rules and rubric definitions, and Do Colleges Use AI Detectors? for the enforcement side of the picture.
