PA Program AI Policies — 10 Programs With Real Rules (2026)
Only ~10 PA programs publish their own AI policy. The rest defer to CASPA. Here's each rule, where it differs from CASPA, and what it means.
Short answer: Of the roughly 300 ARC-PA accredited PA programs in the United States, only about ten publish anything AI-specific that surfaces in a PA admissions search beyond the CASPA central certification, and only six of those are real program-level rules. The rest defer entirely to CASPA. This article names each program we found with a program-specific rule, quotes the policy verbatim, explains where it differs from CASPA, and flags the three universities whose published AI policies look like they should apply to PA admissions but actually do not.
This is an honest aggregator, not a padded table. We considered writing a 100-row comparison that would have had 90 "defers to CASPA" rows, and we decided not to — the signal is in the programs that actually publish something, plus the three that publish policies people confuse for PA rules.
The CASPA central rule everyone else defers to
Before we get to the individual programs, the single most important document for any PA applicant in 2026 is the CASPA Applicant Certification, which you must sign before submission. It is the strictest published AI prohibition in any centralized US health-professions application service.
The certification language prohibits the use of generative AI to "create, write and/or modify any content, in whole or part" submitted in the CASPA application — personal statements, essays, the new AI and Technology essay, and the 600-character work and activity descriptions. It is materially stricter than AMCAS (which permits brainstorming and editing), AACOMAS (largely silent), and TMDSAS (which permits limited AI for brainstorming). We have a clause-by-clause decode of the CASPA AI certification with the verbatim text and a comparison table of all four application services. Bookmark it, read it once, and move on.
If you're new to the entire CASPA-vs-AMCAS distinction, our four-system comparison article on AMCAS, AACOMAS, CASPA, and TMDSAS is the right starting point. PA applicants are operating under the strictest rule in the US. Your friends applying through AMCAS to MD school can use ChatGPT for brainstorming and editing. You cannot.
For each of the programs below, the CASPA central rule is the floor. A program-specific policy adds clauses on top of that floor; it does not replace it.
Programs with published program-specific AI policies (6)
1. University of Washington MEDEX Northwest
Stance: BANNED_FOR_DRAFTING with a narrow "non-substantive editing" carve-out. Stricter than CASPA on detection. Slightly more lenient than CASPA on editing. The only program we found that explicitly reserves the right to use AI detection tools.
Verbatim policy (from the MEDEX Northwest applicants page):
"Applicants are responsible for following all institutional policies on the use of artificial intelligence (AI) tools, and all written content submitted through CASPA must be the applicant's own original work. Limited use of AI or other tools for non substantive editing — such as spelling or grammar correction — is permitted, but the final submission must accurately reflect the applicant's own writing, experiences, and voice. MEDEX may use tools that detect AI generated or AI modified content, and may use AI supported systems during admissions review."
Source: https://familymedicine.uw.edu/medex/applicants/, accessed 2026-04-13.
What this means for you as an applicant: MEDEX publishes the single most interesting program-level rule in PA admissions. Three things matter about it. First, MEDEX is more explicit than CASPA about what is allowed — spell check and grammar correction are explicitly fine, which removes some of the ambiguity CASPA leaves for applicants who use basic writing tools. Second, MEDEX reserves the right to run AI detection tools on your essay even though PAEA centrally has cautioned that current detection tools are unreliable. If MEDEX is on your list, plan as if detection will run. Third, MEDEX also tells you upfront that the program itself may use AI on its review side — an unusual level of transparency.
If MEDEX is your target, start with our dedicated deep dive on the MEDEX Northwest AI policy, which walks through the four distinct things this paragraph does and the PAEA detection contradiction at length.
2. University of the Pacific (Stockton, CA)
Stance: BANNED with automatic application denial. The most categorical enforcement posture we found at any PA program.
Verbatim policy (from the University of the Pacific Physician Assistant Program Admission Policies page):
"The program prohibits the use of Generative AI to complete application materials. Applicants who use generative AI, or those who use generative AI under the guise of assistive AI, will have their applications denied as not meeting application requirements."
Source: https://www.pacific.edu/healthsciences/programs/master-of-physician-assistant-studies/physician-assistant-program-admission-policies, accessed 2026-04-13.
What this means for you as an applicant: Pacific's policy is the strictest of any PA program we found on two dimensions. First, the consequence is automatic denial — "applications denied as not meeting application requirements" — rather than flagging for further review or leaving the consequence open. Second, Pacific explicitly anticipates the "I only used AI to help structure my thoughts" defense and forecloses it with the phrase "generative AI under the guise of assistive AI." If you use an AI tool at any point in your Pacific application and Pacific catches it, there is no discretionary penalty zone.
One important clarification: this is the University of the Pacific (Stockton, California), which should not be confused with Pacific University (Hillsboro, Oregon). The two are separate institutions and Pacific University's PA program does not publish a program-specific AI policy as of April 2026.
3. University of Missouri–Kansas City (UMKC)
Stance: BANNED with removal from consideration. Extends the prohibition to interview responses, which is rare.
Verbatim policy (from the UMKC PA apply page):
"Essays in applications and responses to interview questions must be provided without the assistance of any artificial intelligence software. Applicants who are found to be in violation of this policy will be removed from consideration."
Source: https://med.umkc.edu/pa/apply/ and the equivalent language at https://med.umkc.edu/academics/degree-and-certificate-programs/pa/apply.html, accessed 2026-04-13.
What this means for you as an applicant: UMKC is the only program on this list, and one of very few in our research, that explicitly extends its AI rule to interview responses rather than just written submissions. Most programs and CASPA itself focus on the written application. UMKC is signaling that if you reach the interview stage, your live answers are also supposed to be your own. This has an obvious practical implication: if you use AI to rehearse interview responses by memorizing rephrased versions of your stories, and then deliver those rehearsed versions live, a literal reading of the UMKC rule catches that. The more defensible approach is to practice with a human mentor or with flash cards of your own experience bullets.
UMKC's rule also does not explicitly carve out spell check or grammar tools, unlike MEDEX. The language is "without the assistance of any artificial intelligence software," which on a strict reading prohibits more than MEDEX's policy (no editing carve-out) and reaches further than CASPA's central rule (interview responses as well as written content).
4. Medical University of South Carolina (MUSC)
Stance: BANNED. Concise and unambiguous. Applies to both residential and hybrid PA programs.
Verbatim policy (from the MUSC College of Health Professions PA admissions pages):
"AI-generated personal statements will not be accepted, whether in part or whole."
Source: https://chp.musc.edu/academics/physician-assistant/admissions for the residential program and https://chp.musc.edu/academics/physician-assistant/hybrid-pa/admissions for the hybrid program, accessed 2026-04-13.
What this means for you as an applicant: MUSC is notable for being concise. The CASPA certification is a three-sentence paragraph with multiple clauses. MUSC's rule is twelve words. The "in part or whole" construction mirrors CASPA — you cannot defend yourself by saying only a single sentence came from an AI. MUSC publishes this language on both its residential and hybrid program pages, which is unusual: most programs that publish AI rules do so in exactly one place, which makes the rule easy to miss for applicants who only read the main program page.
MUSC's rule is silent on interview responses and silent on editing tools. The scope is "personal statements" specifically, not the whole application.
5. Union Adventist University
Stance: BANNED. Closest parallel to CASPA's central certification language plus a pre-committed disqualification clause.
Verbatim policy (from the Union Adventist PA apply page):
"The Union Adventist University PA Program expects applicants to submit applications that are their own work, including but not limited to, personal statements, essays, and descriptions of work and education activities and events. Any work not written by the applicant or modified, in whole or part, by any other person or any generative artificial intelligence platform, technology, system or process, including but not limited to ChatGPT, will be disqualified and removed from the applicant pool."
Source: https://uau.edu/pa/apply, accessed 2026-04-13.
What this means for you as an applicant: Union Adventist has done something subtle and worth noticing. The first sentence is almost a direct echo of the CASPA certification — same scope ("personal statements, essays, and descriptions of work and education activities and events"), same "including but not limited to, ChatGPT" phrasing. What Union Adventist adds is a pre-committed enforcement clause: "will be disqualified and removed from the applicant pool." CASPA central reserves a right to use detection tools without committing to a specific consequence. Union Adventist has told you what the consequence will be.
The upshot for an applicant is that Union Adventist has converted CASPA's open-ended enforcement posture into a bright-line rule. If you apply to Union Adventist and the program later believes your work was AI-assisted, the pre-committed consequence removes program discretion.
6. Dalhousie University (Canada) — the single outlier
Stance: PERMISSIVE. The only PA program we found that explicitly permits generative AI for limited support tasks. Important caveat: Dalhousie is Canadian and does not use CASPA.
Verbatim policy (from the Dalhousie University PA Studies admissions page):
"Applicants may use generative AI tools to support the preparation of their application, such as organizing ideas, refining grammar, or improving clarity. However, all application materials must reflect the applicant's own experiences, values, and voice, and over-reliance on AI or submitting materials that have not been critically reviewed and personalized by the applicant, may compromise the integrity of the application. Applicants are responsible for the accuracy and originality of everything they submit."
Source: https://medicine.dal.ca/departments/PAStudies/admissions.html, accessed 2026-04-13.
What this means for you as an applicant: Dalhousie is the exception that proves the rule. Every other program we found that published an AI policy is stricter than or aligned with CASPA. Dalhousie is materially more permissive — it explicitly allows AI for three defined tasks (organizing ideas, refining grammar, improving clarity) as long as the submission still reflects the applicant's "own experiences, values, and voice."
The critical caveat: Dalhousie is a Canadian PA program and does not use CASPA. Dalhousie has its own admissions process, and Dalhousie's policy is the rule that actually binds Dalhousie applicants. If you are a US applicant applying through CASPA to US programs, the CASPA certification still binds you regardless of what Dalhousie permits for its own applicants. Dalhousie is in this list because PA education is cross-border and many US applicants research Canadian programs, but the rules do not generalize.
Dalhousie also demonstrates that a permissive program-level AI policy is possible for a PA program when the program operates outside the CASPA ecosystem. No US-based PA program we found has published a rule more permissive than CASPA. That is not a coincidence: CASPA's certification is contractual, and a US program that tried to publish a more permissive rule would be contradicting the rule every one of its applicants already signed.
The one program with informal advisory language (1)
7. University of Iowa pre-health advising (not the PA program admissions office)
Stance: INFORMAL_ADVISORY. The Iowa PA program's own admissions page is silent. The University of Iowa's Academic Advising Center publishes pre-health personal-statement guidance that counsels pre-health students against using AI.
Advisory text (reconstructed from indexed snippets of the Iowa Academic Advising Center pre-health page):
"Don't use generative AI (Chat GPT). Admissions committees will be able to tell."
Source: https://advisingcenter.uiowa.edu/pre-health/tips-writing-personal-statement, accessed 2026-04-13.
What this means for you as an applicant: This is half a policy, not a full one. The University of Iowa's pre-health advising office is a counseling service that helps Iowa's own students prepare for health-professions applications. Its "don't use ChatGPT" guidance is advice to students, not a rule published by the Iowa PA program's admissions office. The Iowa PA program's actual admissions page does not surface an AI policy. The rule that binds Iowa PA applicants is still CASPA central.
Iowa is on this list because it is the one example we found of a university publishing any AI guidance at all to its pre-PA students — most universities are silent at both layers. The claim "admissions committees will be able to tell" is also worth flagging: it asserts detection as a deterrent, which is out of step with PAEA's central caution that current AI detection tools are unreliable for exactly that purpose.
The contrast: 3 universities whose published AI policy does NOT govern PA admissions
These three are on the list because applicants find them first in search results and routinely mistake them for PA rules. They are not PA rules. PA applicants to all three institutions are bound by CASPA, not by the university-level policies quoted below.
8. Yale University
Yale College has one of the most explicit anti-AI admissions statements in higher education — it names "submitting the substantive content or output of an artificial intelligence platform, technology, or algorithm" as application fraud and warns that it may result in admission revocation or expulsion. But this policy formally applies only to Yale College's undergraduate admissions. The Yale Physician Associate Program's admissions policies page does not publish a program-specific AI policy. Yale PA applicants are bound by CASPA. (The Yale Physician Associate Program Online is also winding down — the final class graduates in 2026 and no new students are being admitted.)
9. Stony Brook University
Stony Brook's first-year undergraduate admissions office publishes a short, explicit ban: "AI generated or assisted writing samples are not acceptable for consideration." This rule applies only to first-year undergraduate admissions. The Stony Brook ELPA Physician Assistant Program admissions page does not publish an AI policy. A search for "Stony Brook PA AI policy" surfaces the undergraduate ban as a top result, which is misleading because it does not govern PA admissions at Stony Brook. Stony Brook PA applicants are bound by CASPA.
10. University of Wisconsin–Madison
UW–Madison is the inverse case. Its undergraduate admissions office publishes an unusually permissive rule: "We will not disqualify an applicant found to have used or suspected of using AI in their admissions essays." This rule applies only to UW–Madison's undergraduate admissions. The UW–Madison PA program application page does not publish an AI policy. UW–Madison PA applicants are bound by CASPA, which is materially stricter than the undergraduate rule. The contrast is worth noting because it shows how variable university-wide policies are — and how irrelevant those policies are to what actually binds PA applicants.
The combined pattern across Yale, Stony Brook, and UW–Madison is the important one. University-wide AI policies span the full range from "strictest in higher ed" (Yale) to "explicitly permissive" (UW–Madison). None of them govern PA admissions. The only layer that governs PA admissions is CASPA central plus any program-specific rules layered on top. The three universities above are listed here to get them out of the way — so the next applicant who Googles "Stony Brook PA AI policy" has a source that tells them the truth.
The contradiction with PAEA central guidance
There is one layer above individual programs that matters for AI detection: PAEA's own published guidance to member programs. PAEA has been remarkably honest about the unreliability of current AI detection tools.
PAEA's published guidance (What Your Program Should Know About AI and Admissions) states explicitly that PAEA will not investigate an applicant where the sole basis for suspicion is AI detection software output, citing false-positive and false-negative concerns with current detection tools. PAEA recommends in-person essay-writing during interviews as a more reliable verification mechanism — have the applicant write a short essay in front of a reviewer and compare voice and structure to the submitted personal statement.
MEDEX Northwest is out of step with this central guidance. MEDEX explicitly reserves the right to use AI detection tools on submitted essays. That disagreement is the single most interesting policy contradiction in PA admissions for 2026. PAEA centrally cautions against detection; its most policy-active member program explicitly commits to using it.
For more on the false positive problem in AI detection generally — including how non-native English speakers and students with unusual writing styles are disproportionately affected — see our piece on flagxiety, the term we coined for the anxiety students feel about being flagged by detectors even when they wrote every word themselves. The false positive research is directly relevant to whether MEDEX's detection-forward stance is empirically justified.
Why so many programs are silent
The natural reaction to the six-programs-out-of-~300 statistic is "how is this possible in 2026?" The answer is that silence is a rational response to CASPA's central rule, not an oversight.
CASPA has done the policy work. Every applicant to every US PA program signs a certification statement that is already the strictest anti-AI rule in centralized US health-professions admissions. A program that publishes its own identical rule is being redundant. A program that publishes a stricter rule is adding language on top of an already-strict floor and assuming some enforcement burden. A program that publishes a more permissive rule is contradicting the certification its applicants have already signed — which is contractually awkward and legally unhelpful.
The result is that most programs reason backward from CASPA. CASPA is the rule. The program does not need to publish anything. The program's silence is not a vote of indifference; it is a deliberate policy choice to let CASPA do the work.
The six programs that broke the pattern each had a specific reason. MEDEX wanted to add a non-substantive editing carve-out and to announce its detection stance. Pacific wanted to pre-commit to automatic denial. UMKC wanted to extend the rule to interview responses. MUSC wanted a concise, standalone statement. Union Adventist wanted to pre-commit to a specific enforcement consequence. Dalhousie wanted a permissive rule that CASPA could not provide because Dalhousie does not use CASPA.
Every one of those reasons is a signal about what the program cares about, and a reason a prospective applicant might tailor their approach to that specific program. But "the program cares enough to publish" is itself the rare signal.
What changes if your target program is on this list
Practical implications if one of the six programs with an explicit policy is on your application list:
If you're applying to MEDEX Northwest: Treat the policy as binding on top of CASPA. Use spell check and basic grammar correction only. Do not use any AI tool that produces sentence-level rewrites. Plan as if AI detection will run on your submission — it is the one program in our research that has explicitly said it will. Preserve drafting evidence (Google Docs revision history, dated drafts) as a defense against false positives.
If you're applying to the University of the Pacific: Assume automatic denial is a live risk. The "assistive AI" language means you cannot defend yourself by claiming the AI only helped you organize thoughts. Write without AI at all for the Pacific application, supplemental included.
If you're applying to UMKC: Extend your "no AI" discipline to interview prep. Do not memorize AI-generated interview answers. Practice with human mentors or flash cards of your own experience bullets. UMKC is watching the interview room, not just the written application.
If you're applying to MUSC: The rule is about personal statements specifically. The rest of your application is still governed by CASPA, which means the same prohibition covers your work and activity descriptions and the new AI and Technology essay — but MUSC has put its specific weight on the personal statement.
If you're applying to Union Adventist: Union Adventist has pre-committed to disqualification as the consequence. This removes any discretion a reviewer might otherwise exercise in a borderline case. Be clean.
If you're applying to Dalhousie (as a Canadian applicant): You actually have more flexibility than CASPA applicants. Dalhousie explicitly allows AI for organizing ideas, refining grammar, and improving clarity. The authenticity test is "own experiences, values, and voice" — so keep the narrative and substance entirely yours and use AI only for the three permitted tasks.
If your program is not on any of the lists above: You are bound by CASPA. Period. Do not use AI to create, write, or modify any content in your application. There is no program-level carve-out. The safest practice is to write every draft from scratch in a plain text editor with AI-powered autocomplete disabled.
One universal note: CASPA central and the most detailed program policies above (MEDEX, Union Adventist, and Dalhousie in particular) frame their rules around an "authenticity" test: the submission has to reflect the applicant's own writing, experiences, and voice. That is the substantive standard reviewers are trained to apply, regardless of whether a specific detection tool catches anything. If your essay doesn't sound like you, the rule has been broken, detection tool or not.
Also: the 2026-2027 CASPA application cycle introduces a new AI and Technology essay — the so-called Situational Decision-Making Question that replaces the COVID-19 Impact Essay. You have to write thoughtfully about AI as a clinical tool while CASPA simultaneously prohibits using AI to write the essay about AI. That essay is doubly fraught for MEDEX applicants because MEDEX is the program most likely to run detection on it. Our guide to the new essay covers the verbatim prompt and seven worked angles.
Methodology
Programs surveyed: This research builds on the earlier 20-program PA school AI policy survey (April 2026) and extends the search to additional programs through targeted keyword queries.
What we looked for: A published, admissions-page AI policy that goes beyond simply mirroring the CASPA central certification. A program that says "applicants must follow CASPA's rules" without any additional language counts as silent for our purposes — CASPA has already done that policy work. A program that published its own rule with any unique element (an editing carve-out, a specific enforcement clause, an extension to interview responses, a permissive element) counts as having a real rule.
What we excluded: University-wide AI policies that apply to enrolled students, not applicants. University-wide policies that apply to undergraduate admissions but not graduate or professional program admissions. Curricular AI policies in PA program student handbooks (those govern enrolled PA students, not applicants). Research-office AI policies. General academic integrity codes that do not name AI specifically.
Cutoff date: 2026-04-13. Policies may have changed since. If you see a published program-specific AI policy that is not on our list, send it to our medical content team via the GradPilot contact form and we will add it to the next revision.
Verification caveat: The quotes above were confirmed via indexed snippets from the programs' own admissions pages. Programs occasionally tweak policy wording without version-dating the page, so if precision matters for your application, fetch the current version of each page directly before treating any specific word as definitive.
Extending the count: We believe there are more PA programs with published AI policies than the six named here — this is a research floor, not a ceiling. The six we name are the ones we were able to surface through keyword searches. There may be five to fifteen additional programs whose AI policies use non-standard wording and didn't show up in our search pattern. A future revision of this article will extend the list.
Related Reading
Medical school essays hub: Medical School Essays — The Complete Guide to AMCAS, AACOMAS, CASPA & TMDSAS — every medical school essay guide on the site, organized by application system, topic, and applicant profile.
The CASPA + AI policy cluster:
- PA School AI Policies 2026 — Why 18 of 20 Programs Are Silent — the 20-program survey this aggregator builds on
- MEDEX Northwest AI Policy — The One PA Program with Rules — the deep dive on the most-discussed program
- CASPA AI Certification Decoded — What It Actually Bans — clause-by-clause read of the central rule
- CASPA AI and Technology Essay 2026-2027 — Prompt + 7 Angles — the new AI essay this cycle
- Can You Use ChatGPT for Your Medical School Application? AMCAS, AACOMAS, CASPA, TMDSAS Compared — the four-system comparison
- What Is Flagxiety? The AI Detection Anxiety Reshaping How Students Write — false positives and the detection landscape
- Medical School AI Policies 2026: AMCAS Rules & 60+ Schools — our AI policy hub for medical admissions
The CASPA writing cluster:
- Sample CASPA Essay Analysis: How 40+ Applicants Got Into Top PA Programs
- The CASPA Life Experiences Essay: What the New Prompt Actually Asks