Can You Use ChatGPT for Your Medical School Application? AMCAS, AACOMAS, CASPA, and TMDSAS Policies Compared
AMCAS allows AI for brainstorming and editing. CASPA permits only spelling and grammar tools and prohibits substantive AI drafting. TMDSAS requires your voice. AACOMAS says almost nothing.
Short answer: it depends entirely on which application system you are using. AMCAS allows AI for brainstorming and editing. CASPA allows AI only for spelling and grammar correction and prohibits substantive AI drafting. TMDSAS requires your authentic voice. AACOMAS barely addresses it.
If you are applying to MD, DO, and PA programs simultaneously -- which many pre-health students do -- you are navigating at least three different AI policies in a single cycle. Get one wrong and you risk certification fraud on an application you spent months building.
This guide breaks down exactly what each system says, what it means in practice, and where the lines are genuinely unclear.
The Quick Comparison
| | AMCAS (MD) | AACOMAS (DO) | CASPA (PA) | TMDSAS (TX MD/DO) |
|---|---|---|---|---|
| AI for brainstorming | Allowed | Unclear | Not addressed (avoid) | Allowed |
| AI for grammar/spelling | Allowed | Unclear | Explicitly permitted (new for 2026-2027) | Likely allowed |
| AI for rephrasing | Risky | Unclear | Prohibited | Prohibited |
| AI for drafting | Prohibited | Unclear | Prohibited | Prohibited |
| Certification language | Explicit | Generic | Strictest on substance | Explicit |
| AI detection used | Not centrally | Unknown | Reserved right; not sole basis for investigation | Unknown |
Now let's look at each one in detail.
AMCAS: AI Is Fine for Editing, Not for Writing
The AAMC updated the AMCAS certification statement starting with the 2024-2025 cycle and carried it forward into 2026. The language is the most specific of any centralized application system:
"Although I may utilize mentors, peers, advisors, and/or AI tools for brainstorming, proofreading, or editing, my final submission is a true reflection of my own work and represents my experiences."
This is surprisingly permissive. AMCAS explicitly places AI in the same category as mentors and advisors -- tools you can use for support, not substitutes for your own thinking.
What AMCAS allows
- Brainstorming topics. Asking ChatGPT "What are strong angles for a personal statement about rural medicine?" is fine.
- Proofreading. Running your draft through an AI tool to catch typos, grammar errors, or unclear phrasing is fine.
- Editing suggestions. Getting feedback like "this paragraph is too long" or "this sentence is passive" is fine.
What AMCAS prohibits
- Drafting. Having AI write any portion of your personal statement or activity descriptions.
- Outlining. Having AI generate the structure and content of your essay.
- Copy-pasting. Taking AI-generated text and submitting it as your own, even with minor edits.
The critical detail: AMCAS does not use AI detection
AMCAS does not centrally scan submissions with AI detection tools. The AAMC has been clear about this. But here is what matters: individual medical schools absolutely can and do run their own detection. The certification is a sworn statement. If a school decides to investigate, the attestation you signed becomes the basis for a fraud claim.
For a deeper look at what these certification statements mean legally, see our breakdown of what you are actually signing when you attest to AI non-use.
CASPA: The Strictest Policy in Health Professions Admissions
If AMCAS is the permissive end of the spectrum, CASPA is the strictest of the four systems on substantive AI use. The 2026-2027 CASPA Policies and Procedures document (published October 2025) softened the prior cycle's absolute language, but still draws the firmest line of any health professions application service on content generation:
"While consulting personal and professional resources, including artificial intelligence (AI) tools, for non-substantive changes such as correction of spelling and grammar is acceptable, my final submission accurately represents my own writing, work and experiences."
The key phrase is non-substantive. AI-assisted spelling and grammar correction is now explicitly permitted. What is still prohibited is substantive AI use — having ChatGPT rephrase a paragraph, restructure your argument, or generate new wording. The line sits between "fix my comma" (fine) and "make this sentence clearer" (not fine, because that changes the substance of your writing).
The certification you sign
The 2026-2027 CASPA certification reads:
"I certify that all written content in my CASPA application is my own work. This includes, but is not limited to, personal statements, essays, and descriptions of work, educational activities, and events. While consulting personal and professional resources, including artificial intelligence (AI) tools, for non-substantive changes such as correction of spelling and grammar is acceptable, my final submission accurately represents my own writing, work and experiences."
Two things to notice. First, unlike the 2025-2026 version, this language explicitly carves out grammar and spelling correction as acceptable — whether the correction comes from a human editor, a traditional proofreading tool, or an AI tool. Second, the standard for what is prohibited is now expressed as a positive requirement ("my final submission accurately represents my own writing") rather than the old absolute ban ("written or modified, in whole or part"). The practical bar is still high: your voice, your experiences, your wording. But grammar-level assistance is no longer a certification violation. For the clause-by-clause read, see our CASPA AI Certification Decoded article.
CASPA reserves the right to use AI detection — but PAEA will not investigate on detection alone
The 2026-2027 P&P also adds an important limit on how AI detection can be used: "Given the significant risk presented by current AI detection software as of the date this policy is being issued, PAEA will not initiate a CASPA investigation where the sole basis for the investigation request is that an AI detection tool tagged a personal statement or evaluation submitted in CASPA." In other words, a false positive from ZeroGPT or Turnitin is not, by itself, enough to open a case at the CASPA central level.
Individual PA programs may still run detectors and weight them more heavily — MEDEX Northwest is the only program in our 20-program survey that explicitly reserves this right. PAEA-funded research now exists on how AI detectors actually perform on PA-style application essays, including a measured false-positive rate for human writing — see our breakdown of the PAEA AI detection research and what it means for false positives.
What this means for PA applicants
If you are applying to PA programs through CASPA, the cleanest approach is to write the substance of your essays yourself and use AI only for spelling and grammar correction, which the 2026-2027 policy explicitly permits. Traditional spell-check, Grammarly's basic underlines, and similar non-substantive corrections are fine. What crosses the line is anything that rewrites, rephrases, or restructures your text — that is substantive modification and is still prohibited. For the new 2026-27 CASPA AI and Technology essay specifically — which asks you to write about AI without using AI to draft it — see our methodology-grade walkthrough of How Readers Evaluate the CASPA AI and Technology Essay, which covers the six dimensions experienced PA admissions readers actually use.
TMDSAS: Your Voice, Your Intent
TMDSAS -- the Texas Medical and Dental Schools Application Service -- added new language for the 2025-2026 cycle that takes a middle-ground approach:
"Final responses submitted must reflect your own original thoughts, voice, and intent -- even if you use AI tools for brainstorming or editing assistance."
This is more nuanced than either AMCAS or CASPA. TMDSAS allows AI for brainstorming and editing (like AMCAS) but adds the requirement that your voice and intent must come through. The implication: if your essay reads like ChatGPT wrote it, you have a problem -- even if you only used AI for editing.
The "voice" standard is subjective
What does it mean for your writing to reflect your "voice"? TMDSAS does not define it. But the intent is clear: if an admissions reader cannot distinguish your personal statement from something any applicant could have generated with a prompt, you have not met the standard.
This is also the only system that explicitly prohibits AI use during interviews: "The use of AI tools or any other external resources during interviews is strictly prohibited." Given the rise of virtual interviews in Texas medical schools, this is a pointed addition.
AACOMAS: The Missing Policy
AACOMAS, the centralized application for osteopathic (DO) medical schools, has no clear standalone AI policy as of the 2025-2026 cycle. This is a genuine gap.
The AACOMAS application includes general certification language about the authenticity of your materials, but it does not specifically mention AI, generative AI, ChatGPT, or any related technology. There is no equivalent to the AMCAS "brainstorming, proofreading, or editing" framework. There is no CASPA-style prohibition.
Why this matters
If you are applying to DO schools, you are certifying that your work is authentic -- but you are doing so without clear guidance on where AI fits. Some applicants interpret this as permissive: if it is not explicitly prohibited, it is allowed. Others take a conservative reading: the spirit of the certification is that your work is your own, period.
Our recommendation: treat AACOMAS like AMCAS. Use AI for brainstorming and proofreading. Do not use it to draft or write. Until AACOM publishes explicit guidance, this is the safest approach that does not leave you exposed if policies change mid-cycle.
The bigger issue
The absence of a clear policy is itself a problem. Applicants deserve to know what is expected. We track AI policies for 72+ medical schools precisely because centralized systems leave so many questions unanswered, and individual programs often fill the gaps with their own rules.
The Grammarly Problem
Here is a question that trips up more applicants than any other: Is Grammarly allowed?
Traditional Grammarly -- spell-check, grammar correction, punctuation fixes -- is universally safe across all four systems. No application service prohibits basic proofreading tools.
But Grammarly is no longer just a proofreading tool. The current version includes generative AI features that can rewrite sentences, adjust tone, and restructure paragraphs. When you click "improve it" or "make it more concise," Grammarly is not fixing your grammar. It is generating new text.
Under AMCAS, using Grammarly's basic features is fine. Using its rewrite features is in the gray zone.
Under CASPA, Grammarly's basic spelling and grammar underlines are now explicitly permitted under the 2026-2027 policy. Its rewrite features (tone shifts, "make it more concise," sentence restructuring) are not — those are substantive changes and cross the line.
The safe rule across all systems: Use Grammarly for red and blue underlines (spelling and grammar). Do not use green or purple suggestions that rewrite your sentences. And definitely do not use GrammarlyGO to generate or rewrite content.
The Irony: AAMC Uses AI to Read Your Application
While AMCAS certifications ask you to disclose AI use, the AAMC itself has partnered with Thalamus to deploy AI in admissions. The Thalamus Cortex platform uses AI, machine learning, and natural language processing to process application data -- including transcripts and letters of recommendation -- for residency programs through ERAS.
As of 2025, more than 8,000 programs have access to Cortex for tech-assisted holistic application review. The AAMC states that AI does not replace human reviewers and does not automatically filter, sort, or reject applicants. But the technology is reading and summarizing your materials.
The double standard is hard to miss. You are asked to certify that your writing is authentically human. The institution reviewing it uses AI to process it. We have written extensively about this pattern across undergraduate admissions -- it is now reaching health professions too.
AI Detection: What Medical Schools Actually Do
The false positive problem
AI detection tools remain unreliable. Independent testing found ZeroGPT has a 20.51% false positive rate -- meaning roughly 1 in 5 human-written samples were incorrectly flagged as AI-generated. Turnitin performs better at around 1.28%, but that is still a nonzero risk applied to your medical school future.
For ESL and international applicants, the numbers are far worse. Stanford researchers found that AI detectors flagged 61% of TOEFL essays written by non-native English speakers as AI-generated. The reason: ESL writers naturally produce text with simpler sentence structures and more common vocabulary -- exactly the patterns that detectors associate with AI.
If you are an international student writing your personal statement in English, you face a disproportionate risk of being falsely flagged. We cover this bias in depth in our investigation of AI detection tools and international students.
The em dash panic
If you spend time on Student Doctor Network, you have probably seen the anxiety. One viral thread asked: "Will using em dash tank my application?" The concern: ChatGPT famously overuses em dashes, and applicants worry that using one in their personal statement will trigger suspicion.
The reality: admissions readers reviewing hundreds of applications do not have time to scrutinize your punctuation choices. As one admissions professional responded on SDN, "this is most certainly not going to get someone shut out of medical school." Use whatever punctuation serves your writing. Do not let AI anxiety reshape your natural style.
What schools actually check
AMCAS does not centrally detect. CASPA reserves the right to use AI detection, but PAEA will not open an investigation based on a detector flag alone. Most individual medical schools do not publicly disclose whether they use AI detection on application essays.
Some schools do run detection. Others rely on the honor system backed by the certification you sign. The practical risk is not a detection tool flagging your essay -- it is writing that reads generically, lacks specificity, or sounds like it could have been written by anyone. That is what admissions committees actually notice.
A Decision Framework for Every System
Before you open ChatGPT during your application cycle, run through this:
Step 1: Which system are you using?
- CASPA: Use AI only for spelling and grammar correction (explicitly permitted under the 2026-2027 policy). No brainstorming, rephrasing, or substantive drafting. Stop here.
- AMCAS, TMDSAS, or AACOMAS: Continue.
Step 2: What are you using AI for?
- Brainstorming ideas or topics: Allowed under AMCAS and TMDSAS.
- Grammar and spelling checks: Allowed everywhere.
- Rephrasing or rewriting sentences: Not allowed. This crosses the line under TMDSAS ("voice and intent") and counts as a prohibited substantive change under CASPA.
- Drafting any content: Not allowed under any system.
Tools built specifically for admissions essays, like GradPilot, operate within these guidelines by providing feedback on your writing rather than generating content for you.
Step 3: Can you defend every sentence?
- If an interviewer asked you to explain the thinking behind any paragraph in your personal statement, could you? If the answer is no, that paragraph should not be in your application.
Step 4: Check your specific schools.
- Individual medical schools may have their own AI policies that are stricter than the centralized system. We track these at /ai-policies/medical-schools, covering 72+ schools with verified policy data.
What Happens If You Violate the Policy
The consequences vary by system but all are severe:
AMCAS: Making a false certification could constitute application fraud. The AAMC can flag your application, notify schools, and potentially bar you from future AMCAS submissions.
CASPA: PAEA can disqualify your application and notify PA programs. Given that CASPA's certification language is the most explicit, the evidentiary standard for a violation is arguably lower.
TMDSAS: Texas medical schools can reject your application or rescind an acceptance. The "voice and intent" standard means even content you technically edited yourself can be challenged if it does not sound like you.
AACOMAS: The general certification still applies. Submitting inauthentic work is a violation of the application terms, even without AI-specific language.
The Bottom Line
The four major health professions application systems have taken four different approaches to AI:
- AMCAS drew a clear, permissive line: AI for brainstorming and editing, not for writing.
- CASPA drew the strictest line of any health professions application service on substantive AI use, but the 2026-2027 policy explicitly permits AI for non-substantive changes like spelling and grammar correction.
- TMDSAS added a subjective standard: your voice and intent must be present.
- AACOMAS has not drawn a line at all, leaving applicants to guess.
If you are applying across multiple systems -- MD, DO, and PA -- the safest strategy is to follow the strictest policy that applies to you. For most multi-system applicants, that means CASPA's standard: write the substance yourself, keep AI out of drafting, rephrasing, and brainstorming, and limit AI use to spelling and grammar correction, which CASPA's 2026-2027 policy now explicitly permits.
Your personal statement is 5,300 characters. It is the single most important piece of writing in your medical school application. The risk of having it questioned -- whether by a detection tool, an admissions reader, or a certification review -- is not worth the marginal convenience of AI assistance.
Write it yourself. Make it yours. And if you want to stay current on what each school actually allows, GradPilot tracks AI policies for 72+ medical schools and 160+ universities, updated as policies change throughout the cycle.
Review Your Personal Statement
See how your AMCAS or secondary essay scores before you submit.