Can You Use ChatGPT for Your Medical School Application? AMCAS, AACOMAS, CASPA, and TMDSAS Policies Compared
AMCAS allows AI for brainstorming and editing. CASPA bans it entirely. TMDSAS requires your voice. AACOMAS says almost nothing. Here's what each policy actually means.
Short answer: it depends entirely on which application system you are using. AMCAS allows AI for brainstorming and editing. CASPA bans it outright. TMDSAS requires your authentic voice. AACOMAS barely addresses it.
If you are applying to MD, DO, and PA programs simultaneously -- which many pre-health students do -- you are navigating at least three different AI policies in a single cycle. Get one wrong and you risk a certification-fraud finding on an application you spent months building.
This guide breaks down exactly what each system says, what it means in practice, and where the lines are genuinely unclear.
The Quick Comparison
| | AMCAS (MD) | AACOMAS (DO) | CASPA (PA) | TMDSAS (TX MD/DO) |
|---|---|---|---|---|
| AI for brainstorming | Allowed | Unclear | Prohibited | Allowed |
| AI for grammar/spelling | Allowed | Unclear | Prohibited | Likely allowed |
| AI for rephrasing | Risky | Unclear | Prohibited | Prohibited |
| AI for drafting | Prohibited | Unclear | Prohibited | Prohibited |
| Certification language | Explicit | Generic | Strictest | Explicit |
| AI detection used | Not centrally | Unknown | Reserved right | Unknown |
Now let's look at each one in detail.
AMCAS: AI Is Fine for Editing, Not for Writing
The AAMC updated the AMCAS certification statement starting with the 2024-2025 cycle and carried it forward into 2026. The language is the most specific of any centralized application system:
"Although I may utilize mentors, peers, advisors, and/or AI tools for brainstorming, proofreading, or editing, my final submission is a true reflection of my own work and represents my experiences."
This is surprisingly permissive. AMCAS explicitly places AI in the same category as mentors and advisors -- tools you can use for support, not substitutes for your own thinking.
What AMCAS allows
- Brainstorming topics. Asking ChatGPT "What are strong angles for a personal statement about rural medicine?" is fine.
- Proofreading. Running your draft through an AI tool to catch typos, grammar errors, or unclear phrasing is fine.
- Editing suggestions. Getting feedback like "this paragraph is too long" or "this sentence is passive" is fine.
What AMCAS prohibits
- Drafting. Having AI write any portion of your personal statement or activity descriptions.
- Outlining. Having AI generate the structure and content of your essay.
- Copy-pasting. Taking AI-generated text and submitting it as your own, even with minor edits.
The critical detail: AMCAS does not use AI detection
AMCAS does not centrally scan submissions with AI detection tools. The AAMC has been clear about this. But here is what matters: individual medical schools absolutely can and do run their own detection. The certification is a sworn statement. If a school decides to investigate, the attestation you signed becomes the basis for a fraud claim.
For a deeper look at what these certification statements mean legally, see our breakdown of what you are actually signing when you attest to AI non-use.
CASPA: The Strictest Policy in Health Professions Admissions
If AMCAS is the permissive end of the spectrum, CASPA is the opposite extreme. The 2025-2026 CASPA Policies and Procedures document is unambiguous:
"Applicants are strictly prohibited from using Generative AI to create, write and/or modify any content, in whole or part, submitted in CASPA."
Read that again. The word modify is doing enormous work. Under this language, even asking ChatGPT to rephrase a single sentence in your personal statement is a violation. The policy does not distinguish between "write my essay" and "make this sentence clearer." Both are prohibited.
The certification you sign
CASPA requires applicants to certify:
"All written passages within their CASPA application, including but not limited to personal statements, essays, and descriptions of work and education activities and events, are their own work, and have not been written or modified, in whole or part, by any other person or any generative artificial intelligence platform, technology, system or process, including but not limited to Chat GPT."
Notice "any other person" is included alongside AI. CASPA's policy is arguably stricter than AMCAS for human help too. Where AMCAS explicitly allows mentors and peers for editing, CASPA's certification language could be read to prohibit another person modifying your content at all.
CASPA reserves the right to use AI detection
The policy states that "PAEA and PA Programs reserve the right to use platforms, technology, systems and processes that detect content submitted in CASPA and/or provided to PA programs that was created, written and/or modified in whole or part using AI."
Whether they actively scan is another question. But the legal groundwork is laid.
What this means for PA applicants
If you are applying to PA programs through CASPA, the safest approach is to write entirely without AI assistance -- not even Grammarly's AI-powered rewrite features (more on those below). Traditional spell-check is almost certainly fine. Anything that rewrites, rephrases, or restructures your text crosses the line CASPA has drawn.
TMDSAS: Your Voice, Your Intent
TMDSAS -- the Texas Medical and Dental Schools Application Service -- added new language for the 2025-2026 cycle that takes a middle-ground approach:
"Final responses submitted must reflect your own original thoughts, voice, and intent -- even if you use AI tools for brainstorming or editing assistance."
This is more nuanced than either AMCAS or CASPA. TMDSAS allows AI for brainstorming and editing (like AMCAS) but adds the requirement that your voice and intent must come through. The implication: if your essay reads like ChatGPT wrote it, you have a problem -- even if you only used AI for editing.
The "voice" standard is subjective
What does it mean for your writing to reflect your "voice"? TMDSAS does not define it. But the intent is clear: if an admissions reader cannot distinguish your personal statement from something any applicant could have generated with a prompt, you have not met the standard.
This is also the only system that explicitly prohibits AI use during interviews: "The use of AI tools or any other external resources during interviews is strictly prohibited." Given the rise of virtual interviews in Texas medical schools, this is a pointed addition.
AACOMAS: The Missing Policy
AACOMAS, the centralized application for osteopathic (DO) medical schools, has no clear standalone AI policy as of the 2025-2026 cycle. This is a genuine gap.
The AACOMAS application includes general certification language about the authenticity of your materials, but it does not specifically mention AI, generative AI, ChatGPT, or any related technology. There is no equivalent to the AMCAS "brainstorming, proofreading, or editing" framework. There is no CASPA-style prohibition.
Why this matters
If you are applying to DO schools, you are certifying that your work is authentic -- but you are doing so without clear guidance on where AI fits. Some applicants interpret this as permissive: if it is not explicitly prohibited, it is allowed. Others take a conservative reading: the spirit of the certification is that your work is your own, period.
Our recommendation: treat AACOMAS like AMCAS. Use AI for brainstorming and proofreading. Do not use it to draft or write. Until AACOM publishes explicit guidance, this is the safest approach that does not leave you exposed if policies change mid-cycle.
The bigger issue
The absence of a clear policy is itself a problem. Applicants deserve to know what is expected. We track AI policies for 72+ medical schools precisely because centralized systems leave so many questions unanswered, and individual programs often fill the gaps with their own rules.
The Grammarly Problem
Here is a question that trips up more applicants than any other: Is Grammarly allowed?
Traditional Grammarly -- spell-check, grammar correction, punctuation fixes -- is universally safe across all four systems. No application service prohibits basic proofreading tools.
But Grammarly is no longer just a proofreading tool. The current version includes generative AI features that can rewrite sentences, adjust tone, and restructure paragraphs. When you click "improve it" or "make it more concise," Grammarly is not fixing your grammar. It is generating new text.
Under AMCAS, using Grammarly's basic features is fine. Using its rewrite features is in the gray zone.
Under CASPA, even Grammarly's rewrite features could violate the "modify" prohibition. If the tool changes your wording -- not just your commas -- you have crossed the line.
The safe rule across all systems: Use Grammarly for red and blue underlines (spelling and grammar). Do not use green or purple suggestions that rewrite your sentences. And definitely do not use GrammarlyGO to generate or rewrite content.
The Irony: AAMC Uses AI to Read Your Application
While AMCAS certifications ask you to disclose AI use, the AAMC itself has partnered with Thalamus to deploy AI in admissions. The Thalamus Cortex platform uses AI, machine learning, and natural language processing to process application data -- including transcripts and letters of recommendation -- for residency programs through ERAS.
As of 2025, more than 8,000 programs have access to Cortex for tech-assisted holistic application review. The AAMC states that AI does not replace human reviewers and does not automatically filter, sort, or reject applicants. But the technology is reading and summarizing your materials.
The double standard is hard to miss. You are asked to certify that your writing is authentically human. The institution reviewing it uses AI to process it. We have written extensively about this pattern across undergraduate admissions -- it is now reaching health professions too.
AI Detection: What Medical Schools Actually Do
The false positive problem
AI detection tools remain unreliable. Independent testing found ZeroGPT has a 20.51% false positive rate -- meaning roughly 1 in 5 human-written samples were incorrectly flagged as AI-generated. Turnitin performs better at around 1.28%, but that is still a nonzero risk applied to your medical school future.
For ESL and international applicants, the numbers are far worse. Stanford researchers found that AI detectors flagged 61% of TOEFL essays written by non-native English speakers as AI-generated. The reason: ESL writers naturally produce text with simpler sentence structures and more common vocabulary -- exactly the patterns that detectors associate with AI.
If you are an international student writing your personal statement in English, you face a disproportionate risk of being falsely flagged. We cover this bias in depth in our investigation of AI detection tools and international students.
The em dash panic
If you spend time on Student Doctor Network, you have probably seen the anxiety. One viral thread asked: "Will using em dash tank my application?" The concern: ChatGPT famously overuses em dashes, and applicants worry that using one in their personal statement will trigger suspicion.
The reality: admissions readers reviewing hundreds of applications do not have time to scrutinize your punctuation choices. As one admissions professional responded on SDN, "this is most certainly not going to get someone shut out of medical school." Use whatever punctuation serves your writing. Do not let AI anxiety reshape your natural style.
What schools actually check
AMCAS does not centrally detect. CASPA reserves the right but has not confirmed active scanning. Most individual medical schools do not publicly disclose whether they use AI detection on application essays.
Some schools do run detection. Others rely on the honor system backed by the certification you sign. The practical risk is not a detection tool flagging your essay -- it is writing that reads generically, lacks specificity, or sounds like it could have been written by anyone. That is what admissions committees actually notice.
A Decision Framework for Every System
Before you open ChatGPT during your application cycle, run through this:
Step 1: Which system are you using?
- CASPA: Do not use AI for anything beyond spell-check. Stop here.
- AMCAS, TMDSAS, or AACOMAS: Continue.
Step 2: What are you using AI for?
- Brainstorming ideas or topics: Allowed under AMCAS and TMDSAS.
- Grammar and spelling checks: Allowed everywhere.
- Rephrasing or rewriting sentences: Avoid it. This crosses the line under TMDSAS ("voice and intent"), is explicitly prohibited under CASPA, and sits in a gray zone under AMCAS.
- Drafting any content: Not allowed under any system.
Tools built specifically for admissions essays, like GradPilot, operate within these guidelines by providing feedback on your writing rather than generating content for you.
Step 3: Can you defend every sentence?
- If an interviewer asked you to explain the thinking behind any paragraph in your personal statement, could you? If the answer is no, that paragraph should not be in your application.
Step 4: Check your specific schools.
- Individual medical schools may have their own AI policies that are stricter than the centralized system. We track these at /ai-policies/medical-schools, covering 72+ schools with verified policy data.
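For multi-system applicants, the framework above reduces to a lookup plus a "follow the strictest rule" comparison. Here is a minimal sketch in Python that encodes this article's reading of each policy (the verdict strings mirror the comparison table; this is an illustration of the logic, not official guidance from any application service):

```python
# Policy summaries as read in this article's comparison table.
# "unclear" is treated as more restrictive than "risky", since an
# undefined rule leaves you exposed if policies change mid-cycle.
POLICIES = {
    "AMCAS":   {"brainstorming": "allowed", "grammar": "allowed",
                "rephrasing": "risky", "drafting": "prohibited"},
    "AACOMAS": {"brainstorming": "unclear", "grammar": "unclear",
                "rephrasing": "unclear", "drafting": "unclear"},
    "CASPA":   {"brainstorming": "prohibited", "grammar": "prohibited",
                "rephrasing": "prohibited", "drafting": "prohibited"},
    "TMDSAS":  {"brainstorming": "allowed", "grammar": "likely allowed",
                "rephrasing": "prohibited", "drafting": "prohibited"},
}

def strictest_rule(systems: list[str], use: str) -> str:
    """Return the most restrictive verdict across every system you use."""
    severity = ["allowed", "likely allowed", "risky", "unclear", "prohibited"]
    verdicts = [POLICIES[s][use] for s in systems]
    return max(verdicts, key=severity.index)

# An MD + PA applicant inherits CASPA's prohibition on brainstorming:
print(strictest_rule(["AMCAS", "CASPA"], "brainstorming"))  # prohibited
print(strictest_rule(["AMCAS", "TMDSAS"], "grammar"))       # likely allowed
```

The design choice worth noting: the severity ordering makes "follow the strictest policy that applies to you" automatic, which is exactly the strategy recommended below for applicants spanning MD, DO, and PA systems.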
What Happens If You Violate the Policy
The consequences vary by system but all are severe:
AMCAS: Making a false certification could constitute application fraud. The AAMC can flag your application, notify schools, and potentially bar you from future AMCAS submissions.
CASPA: PAEA can disqualify your application and notify PA programs. Given that CASPA's certification language is the most explicit, the evidentiary standard for a violation is arguably lower.
TMDSAS: Texas medical schools can reject your application or rescind an acceptance. The "voice and intent" standard means even content you technically edited yourself could be challenged if it does not sound like you.
AACOMAS: The general certification still applies. Submitting inauthentic work is a violation of the application terms, even without AI-specific language.
The Bottom Line
The four major health professions application systems have taken four different approaches to AI:
- AMCAS drew a clear, permissive line: AI for brainstorming and editing, not for writing.
- CASPA drew the strictest possible line: no AI involvement in content creation or modification.
- TMDSAS added a subjective standard: your voice and intent must be present.
- AACOMAS has not drawn a line at all, leaving applicants to guess.
If you are applying across multiple systems -- MD, DO, and PA -- the safest strategy is to follow the strictest policy that applies to you. For most multi-system applicants, that means CASPA's standard: write it yourself, proofread it yourself, and use AI only for research and idea generation that stays far from your actual application text.
Your personal statement is 5,300 characters. It is the single most important piece of writing in your medical school application. The risk of having it questioned -- whether by a detection tool, an admissions reader, or a certification review -- is not worth the marginal convenience of AI assistance.
Write it yourself. Make it yours. And if you want to stay current on what each school actually allows, GradPilot tracks AI policies for 72+ medical schools and 160+ universities, updated as policies change throughout the cycle.
Worried About AI Detection?
170+ universities now use AI detection. Check your essays before submission.