How Readers Evaluate the CASPA AI and Technology Essay
How experienced PA admissions readers evaluate the new CASPA AI and Technology essay — what strong responses do that mediocre ones miss.
The new CASPA AI and Technology essay is the easiest essay in the 2026-27 PA application cycle to write badly and one of the hardest to write well. The trap is hidden in plain sight: the prompt invites you to talk about emerging healthcare technology, and most applicants oblige. They write about how AI is changing medicine, how telemedicine has expanded access, how wearables are democratizing data. Every sentence is technically true. Every essay reads identically to every other essay.
Experienced PA admissions readers are looking for something different. They are looking for clinical thinking, grounded in lived exposure, that holds a tradeoff and engages the equity clause that most applicants miss entirely. The applicants who get this essay right are not technology enthusiasts. They are observers who happened to be in a clinic when an AI tool actually mattered, and who can tell that story in 2,500 characters without reaching for the cliches.
This article is a methodology-grade explanation of what readers notice and what readers miss. It draws on PAEA's published 2026-27 cycle materials, the AAMC's AI in Medical Education principles, the AMA's Augmented Intelligence framework, recently published research from JAAPA and the Journal of Physician Assistant Education, the Wolters Kluwer survey of how practicing PAs actually use AI today, and the most credible advisor commentary in the PA admissions space. It does not reveal GradPilot's scoring mechanics. It tells you what an experienced reader is looking for, and why.
What this essay is and what it's testing
The Situational Decision-Making Question — PAEA's official internal label for what most applicants will know as the AI and Technology essay — was added to CASPA for the 2026-27 application cycle. It replaces the COVID-19 Impact Essay that ran in earlier cycles. It is optional, lives in the optional essays section of the CASPA application, and runs approximately 2,500 characters based on the structural parallel with the existing Life Experiences essay and the analysis published by HowMedWorks (the 2026-27 CASPA Applicant Guide PDF, expected around cycle open on April 27, 2026, will be the formal confirmation).
Here is the verbatim prompt as published by PAEA on its Admission Suite of Products Updates page:
"Emerging technologies like artificial intelligence, telemedicine, and wearable health devices are changing how clinicians deliver care. How should future PAs learn to use these tools thoughtfully while maintaining strong, human-centered relationships with patients, even in settings where access to technology may be limited?"
PAEA has not published a formal rationale for adding this essay, but the design philosophy is triangulable from adjacent moves. Donna Murray, the PAEA Senior Director of Admissions during the 2024-25 transition, has stated publicly that PAEA has a responsibility to admit applicants "who are not just aware of health inequities, but who have a strong desire to make a real impact" — and that CASPA should "consider including questions that address the candidate's ability for self-compassion and empathy." The equity clause in the new prompt — "even in settings where access to technology may be limited" — is downstream of this stated philosophy, not a throwaway.
The essay also sits structurally downstream of the AAMC's Principles for the Responsible Use of AI in and for Medical Education, published in January 2025 and updated in July 2025. Principle 1 is "Maintain Human-Centered Focus." Principle 3 is "Provide Equal Access to AI." The CASPA prompt's "human-centered relationships" and "limited access" language is structurally parallel to AAMC Principles 1 and 3. PAEA does not cite the AAMC document, but the alignment is too close to be coincidental.
The third piece of context is real professional uptake. A Wolters Kluwer survey of 203 practicing PAs published in December 2025 found that 56% of PAs already use AI in clinical practice, 61% use AI for documentation, and 87% of PAs feel they need more AI training. Forty percent of practicing PAs name "the rise of artificial intelligence" as a major factor reshaping the profession over the past three years. This is the data that makes the essay non-optional in spirit, even though it is technically optional in form: a majority of practicing PAs already use AI, most feel under-prepared, and PAEA is moving the conversation upstream to admissions.
The essay is testing whether you can articulate a mature, equity-aware, clinically grounded understanding of AI, telemedicine, and wearables as tools that enhance rather than replace the human PA-patient relationship. It is not testing whether you can talk about AI. Almost everyone can. It is testing whether you can think clinically about it.
The questions an experienced reader is asking
When an experienced PA admissions reader opens this essay, they are running through six questions in their head. Each is a real evaluation lens — together, these are the dimensions that experienced readers, admissions consultants, and the AAPA House of Delegates have all converged on as distinguishing a strong response from a mediocre one. The questions, in the order most readers process them:
- Is there a real clinical encounter behind this, or is the applicant speculating?
- Has the applicant noticed the equity clause, or did they only answer the "AI in healthcare" half of the prompt?
- Does the applicant hold the tradeoff between AI utility and AI risk, or do they default to enthusiasm or pessimism?
- Where is the patient in this essay, and what did the technology do to their experience?
- Does this read as PA-specific reasoning, or could it have been submitted to any health profession?
- What does the applicant plan to do to develop this judgment as a future PA?
The next six sections walk through each question in detail — what it is really asking, why it matters, what strong responses do, what mediocre responses miss, and what weak responses look like. Throughout, "strong" and "weak" are descriptive labels for applicant behavior. They are not scores.
Question 1: Is there a real clinical encounter behind this?
What this is really asking
The essay rewards lived clinical exposure to emerging technology. It penalizes abstract speculation. The reader is asking whether the applicant actually saw an AI tool, a telemedicine platform, or a wearable device in clinical use, with their own role in the encounter, or whether they wrote an essay about technology in healthcare without ever being in a healthcare setting where the technology was active.
Why it matters
PA admissions has always rewarded clinical grounding. The CASPA Personal Statement is judged on clinical exposure quality. The Life Experiences essay is judged on whether the experience feels lived rather than performed. The Situational Decision-Making Question is no different — the new prompt does not exempt applicants from the lived-experience standard just because the topic is technology.
The Wolters Kluwer survey numbers make the standard achievable. With 56% of practicing PAs using AI in clinical practice today, an applicant who has scribed in primary care, worked as a medical assistant in an outpatient clinic, served as an ED tech, or shadowed a PA in 2024-2025 has a better-than-even chance of having seen real clinical AI in use. The most common form is ambient AI scribes — products like Abridge, Nuance DAX Copilot, DeepScribe, Suki, Freed, Heidi, and Nabla — which embed inside the EHR and generate documentation from the in-room conversation. Medical scribes and pre-PAs in shadowing roles in 2024-2025 had front-row seats to this rollout.
Other realistic exposure includes EHR-integrated clinical decision support (the Epic Sepsis Model is widely deployed and widely discussed — see the Wong et al. 2021 JAMA Internal Medicine external validation study that found the model missed 67% of sepsis cases), retinopathy screening AI like IDx-DR (the first FDA-cleared autonomous diagnostic AI), algorithmic triage in emergency departments, telemedicine platforms ubiquitous in post-COVID outpatient care, and remote patient monitoring through continuous glucose monitors (Dexcom, Freestyle Libre), wearable ECGs (Apple Watch, KardiaMobile, Zio patch), and home BP cuffs.
The reader is not expecting the applicant to have used all of these. The reader is expecting the applicant to have used or watched at least one closely enough to write about it with the texture of someone who was there.
What strong responses do
- Name a specific tool. Not "AI documentation software" but Abridge, or Nuance DAX, or the specific sepsis prediction model that fired on a specific patient. Specificity signals exposure.
- Locate the encounter in a specific role and setting. "While scribing in a family medicine clinic that used Abridge inside Epic Haiku" is a different sentence than "in a clinical setting that used AI tools." The first one tells the reader who the applicant was, where they were, and what they had access to. The second one tells the reader nothing.
- Anchor the encounter in a small detail that could only come from being there. A glance, a tone, a screen behavior, a patient's reaction, a workflow moment. Something the applicant noticed that an outsider would not have predicted.
- Tie the encounter to the position the essay takes. The clinical anchor is structural — it is what the rest of the essay rests on, not a backdrop.
What mediocre responses miss
- Tools named in categories rather than by product. "An AI EHR alert" instead of "the Epic Sepsis Model." "An ambient AI scribe" instead of "Abridge in Epic Haiku." Categorical naming is a tell that the applicant has read about the tool but not seen it in use.
- Multiple shallow encounters in place of one anchored deeply. A 2,500-character essay does not have the budget for three different clinical anchors. The mediocre move is to mention three tools in three sentences. The strong move is to develop one in three paragraphs.
- Encounters described from a perspective the applicant could not have had. A CNA describing model architectures. A medical assistant discussing reinforcement learning from human feedback. The role and the observation have to match.
What weak responses look like
- Consumer technology examples substituted for clinical technology. ChatGPT for studying, smartwatch step counts, fitness app gamification. These conflate "AI in general" with "clinical AI" and signal that the applicant has not been in a clinical setting where the prompt-relevant technology was present.
- Tools named that the applicant could not plausibly have encountered. Listing IDx-DR, Aidoc, Viz.ai, and PathAI in a single sentence as examples the applicant has "studied" — when the applicant has no radiology or ophthalmology exposure — is performance, not substance.
- Pure abstraction. An essay that floats above any actual clinical setting and discusses "how AI is changing healthcare" without ever showing a healthcare moment.
Question 2: Has the applicant noticed the equity clause?
What this is really asking
The prompt's last ten words are "even in settings where access to technology may be limited." This is the part most applicants skip. The reader is asking whether the applicant read the entire prompt and engaged with the equity layer, or whether they treated the essay as a generic "AI in healthcare" question and ignored the part that distinguishes it from every other AI essay in higher education.
Why it matters
This is the dimension where consultant consensus is sharpest. Phoebe Kubo at HowMedWorks, who authored an AAPA House of Delegates resolution on AI as Chief Student Delegate in 2024-2026, identifies "Equity and adaptability: What about settings with limited technology access?" as one of four mandatory layers the essay must address. Her exact framing in her February 2026 article on the new CASPA essay is: "This isn't throwaway language. Admissions committees want to know that you're also thinking about health equity in this conversation." PrePA Clinic, in their March 2026 breakdown, echoes this — equity is named as a failure mode if ignored.
The institutional anchor is AAMC Principle 3 — Provide Equal Access to AI — in the AAMC Principles for the Responsible Use of AI in and for Medical Education. The PAEA design philosophy anchor is Donna Murray's stated focus on health equity. And the structural anchor is the prompt itself: those ten words are not garnish. They were placed there deliberately by the design team that wrote the essay.
What strong responses do
- Name a specific population or setting where access is limited and the technology in question would not deploy cleanly. Rural clinics with intermittent broadband. Federally Qualified Health Centers serving uninsured patients. Indian Health Service sites in Native American communities. Migrant and seasonal farmworker clinics. Correctional health facilities where personal devices are banned. Elderly populations without smartphones or comfort with patient portals. Language-discordant patients for whom EHR portals exist only in English. The naming has to be specific enough that the reader trusts the applicant has seen, worked with, or thought concretely about that population.
- Hold the equity clause throughout the essay, not just in the closing. The strong move is to weave equity into the discussion of the technology — to evaluate which tools fit which settings, and to hold the limited-access frame as a constant rather than mentioning it once and pivoting back to a high-tech setting.
- Frame as partnership, not rescue. "Working with patients on how to integrate a wearable they cannot afford to replace if it breaks" is a different framing than "bringing AI to the underserved." The first one centers the patient's agency. The second one centers the applicant's white-knight self-image.
What mediocre responses miss
- The equity clause appears in a single closing sentence and disappears. This is the most common mediocre pattern — the applicant writes most of the essay about AI in a high-tech setting, then adds "of course, not everyone has access to these tools" in the last paragraph, and treats the clause as discharged. The reader notices.
- Equity reduced to a generic "not everyone has technology" claim with no population, no setting, no constraint named. The grammatical structure is there but the substance is missing.
- Equity treated as an obstacle to AI adoption rather than a design consideration that shapes which AI tools fit which settings. The mediocre framing is "AI is great, but unfortunately some patients can't access it." The strong framing is "AI is great for some workflows and the wrong tool for others, and the right PA judgment is knowing which is which for the patient in front of you."
What weak responses look like
- The equity clause is ignored entirely. No reference to limited access anywhere in the essay. This essay is now indistinguishable from a generic "AI in medicine" essay that could have been submitted to any health profession application.
- Performative diversity language with no clinical or workflow specificity. Buzzword equity that does not survive the question "what specific population, what specific tool, what specific constraint?"
- Savior framing. The applicant casts themselves as the future PA who will bring technology to the deserving poor. This is the framing PA admissions readers most reliably distrust — it tells them more about the applicant's self-image than about their judgment.
Question 3: Does the applicant hold the tradeoff?
What this is really asking
The CASPA prompt explicitly asks how PAs should "use these tools thoughtfully." The load-bearing word is thoughtfully. The reader is asking whether the applicant can show what thoughtful use of emerging clinical technology actually looks like — which means holding both the real benefits and the real risks in the same sentence, not collapsing into either techno-enthusiasm ("AI will revolutionize healthcare") or techno-pessimism ("AI threatens the human touch").
Why it matters
This is the dimension where consultant consensus is strongest and where the failure mode is most visible. HowMedWorks names "being either overly pessimistic or naively optimistic about technology" as a top failure. PrePA Clinic gives the same warning. The cliche to avoid is the sentence "While AI is a powerful tool, the human connection between provider and patient is irreplaceable" — which is true, generic, and could be pasted unchanged into any application essay across any health profession. Every consultant warns against it.
The institutional framing for "thoughtful use" is the American Medical Association's Augmented Intelligence in Medicine framework, which deliberately rebrands "artificial intelligence" as "augmented intelligence" to underline that AI in clinical practice is an assistive tool that enhances rather than replaces clinical judgment. The AMA's 2023 updated AI principles cover oversight, transparency, disclosure and documentation, and liability — and they are the framework most aligned with the CASPA prompt's own vocabulary.
The clinical-mechanism vocabulary is also well established. The strongest essays use one or two of the following concepts with operational clinical meaning, not as buzzwords: automation bias (clinicians deferring to AI suggestions even when their own judgment would flag otherwise), deskilling (losing physical exam and clinical reasoning skills from chronic AI dependence), explainability ("black box" models versus clinician-interpretable output), human-in-the-loop (the AAMC Principle 1 framing), informed consent (whether patients know AI is being used in their care), algorithmic bias (well-documented in triage algorithms that under-triage women, patients of color, and patients whose presentation does not match training data). The Eric Topol Deep Medicine framework — "deep phenotyping, deep learning, deep empathy and connection" — is the most rhetorically elegant for this specific essay because Topol's "deep empathy and connection" is essentially the prompt's "human-centered relationships" restated, but it only works if the applicant has actually read Topol.
The most important point to internalize: thoughtful skepticism is not a weakness in this essay. An applicant who writes a clinically grounded case against over-reliance on the Epic Sepsis Model — citing the Wong et al. 2021 JAMA Internal Medicine external validation finding that the model missed 67% of sepsis cases and produced an 88% false alarm rate — is doing exactly the same intellectual work as an applicant who writes an enthusiastic case for ambient AI scribes freeing PA attention for patient eye contact. Both are clinical reasoning. Both are responsive to the prompt. Neither is a "right answer."
What strong responses do
- Name both a specific clinical risk and a specific clinical benefit, both tied to the technology in the clinical anchor. The risk and benefit are not abstract. They are workflow- and patient-level.
- Hold the tradeoff in the same paragraph, not in separate paragraphs. The sophisticated move is to show both at once — to write a sentence that acknowledges the value and the cost of the same tool in the same breath.
- Use one or two ethics framework concepts with operational meaning. "Automation bias" landed in a clinical scenario where the applicant saw a clinician defer to an AI suggestion. "Explainability" applied to a specific decision support tool whose output the supervising physician questioned. The reader does not want to see five framework names. They want to see one or two used precisely.
- Cite a specific failure mode of a named tool. The Epic Sepsis Model's published external validation problem. The IDx-DR referral threshold tradeoff. Algorithmic triage's documented bias against certain patient populations. Ambient scribe consent gaps. Specificity makes the reasoning defensible against a reader who knows the tools.
What mediocre responses miss
- Both benefits and risks are mentioned, but only one is developed. The applicant nods at the "of course there are risks" line and then writes 80% of the essay about the upside, or vice versa.
- Ethics framework concepts deployed as buzzwords with no clinical referent. "Transparency," "explainability," and "bias" appear in the essay without being tied to a specific clinical mechanism the applicant has actually thought about.
- The "AI is a tool, not a replacement" cliche asserted without grounding. The cliche is true. It is also worthless without a specific scenario backing it up.
What weak responses look like
- Pure techno-enthusiasm. "AI will revolutionize healthcare" boosterism that acknowledges no risks and reads more like a tech press release than clinical reasoning.
- Pure techno-pessimism. "AI threatens the human touch" fearmongering that acknowledges no benefits and treats clinical AI as inherently corrosive.
- Pretend technical expertise. Naming model architectures, training datasets, parameter counts, transformer details. HowMedWorks is explicit on this: "Admissions committees aren't expecting you to explain how machine learning algorithms work." The performance reads as defensive overcompensation.
- Generic AI ethics commentary that could appear in any health profession application. The essay needs to be a CASPA essay. Generic ethics is the failure mode that erases everything PA-specific from the response.
Question 4: Where is the patient in this essay?
What this is really asking
The prompt explicitly asks how PAs should use emerging tools "while maintaining strong, human-centered relationships with patients." The reader is asking whether the applicant integrated the patient experience into the essay as a structural element, or whether the patient is missing entirely — a technology essay with no humans in it.
Why it matters
This is the dimension where the cliche failure is most pervasive and most visible. The sentence "While AI is powerful, the human connection between provider and patient is irreplaceable" is the most common essay closer in this category, and every consultant flags it as a failure mode. It is the cliche the essay is testing for. An essay that closes on this sentence has not held the human-centered claim — it has asserted the cliche.
The institutional framing is AAMC Principle 1: Maintain Human-Centered Focus. The "maintain" verb is doing real work — human-centered care is not threatened by technology by default; it is only threatened when clinicians let it be. The strong essays show how a specific technology either enabled or threatened a specific human-centered moment, and what the PA's role is in protecting that moment.
The available subdimensions are well-documented in the bioethics literature: patient autonomy (the patient's ability to direct their own care), informed consent (in this context: do patients know AI is being used in their visit, and would they object if they did?), dignity, communication, attention, presence, trust. An essay does not have to address all of them. It should make at least one operational rather than abstract. The WHO Ethics and Governance of Artificial Intelligence for Health (2021) framework explicitly lists "protect human autonomy" and "promote human well-being and safety" as the first two of six ethical principles, and the Coalition for Health AI (CHAI) "Blueprint for Trustworthy AI in Healthcare" places fairness and patient experience at its center.
What strong responses do
- Show a specific patient interaction, with the precision of someone who was actually present. Not "patients want to feel heard" but "the elderly patient I scribed for had never made eye contact with the attending until the visit Abridge handled the typing." The patient is in the room. The reader can see them.
- Name the technology's effect on the interaction explicitly. The technology either enabled the human moment, threatened it, or both. The strong essay names which and grounds it in observable behavior — a glance, a question the patient asked, a comfort or discomfort the applicant noticed.
- Hold the tradeoff between technology utility and patient experience in the same paragraph. The same Abridge that gave the attending eye contact may have transcribed the visit without the patient understanding what "AI scribe" meant when the attending mentioned it in passing. Both things are true at once.
- Address at least one human-centered subdimension operationally. Consent, autonomy, attention, presence, dignity, trust. The applicant picks one or two and shows what it looks like in clinical practice, not in the abstract.
What mediocre responses miss
- Patients in third-person abstract. "Patients want to feel heard" or "patients value the human connection" describes patients as a category, not as people. The reader notices.
- The cliche sentence. "While AI is powerful, the human connection is irreplaceable" appears in the essay, undefended by any specific moment.
- Technology and human-centered care framed as opposites. The mediocre framing treats the two as zero-sum. The strong framing shows them as co-existing in tension that the PA's judgment resolves.
- Patients show up only in the closing sentence. The human-centered claim is asserted in the last paragraph after a body of essay that never showed a single patient.
What weak responses look like
- No patient anywhere in the essay. The applicant discusses technology without ever showing a patient interaction. This is more common than it should be.
- Savior framing about restoring humanity to medicine. The applicant casts themselves as the future PA who will bring back the human touch that has been lost to technology. The framing tells the reader more about the applicant's self-image than about their clinical observation.
- Patient consent or disclosure issues raised in the abstract and not connected to a specific tool or moment. The buzzword presence of "consent" or "autonomy" without any clinical mechanism behind them.
Question 5: Does this read as PA-specific or as a generic future-clinician essay?
What this is really asking
The reader is running a rewrite test in their head: would this essay read identically if "PA" were replaced with "physician," "nurse practitioner," "physical therapist," or "future health professional"? If the answer is yes, the essay has failed the PA-specificity test — it could have been submitted to any health profession application.
Why it matters
The CASPA Situational Decision-Making Question is a PA essay, not a health profession essay. The strongest responses make use of what is specific about the PA role — the collaborative practice model, the supervising-physician relationship, the generalist breadth of the PA scope, the lateral mobility across specialties, the team-based care framework — when reasoning about technology adoption.
This matters because experienced PA admissions readers can immediately identify essays that could have been pasted unchanged into a medical school application. They distrust those essays. The reader's question is not "does the applicant want to be a PA?" — every applicant says they do. The reader's question is: does the applicant understand what is specific about the PA role, and does the technology reasoning in this essay depend on that specificity, or could the same paragraphs be written by anyone who has never thought about whether they want to be a PA, MD, or NP?
The PA-specific structural features that matter for technology reasoning are concrete. Collaborative practice shapes who is responsible for AI-assisted decisions — a PA's accountability framework is different from a physician's because the supervising-physician relationship distributes liability and decision-making in specific ways. Lateral mobility across specialties shapes the PA need to use unfamiliar AI tools quickly when transitioning from internal medicine to dermatology to emergency medicine. Generalist breadth shapes which AI features are within reach of the PA scope of practice — autonomous diagnostic AI (FDA-cleared tools like IDx-DR for diabetic retinopathy) sits differently in a generalist primary care PA's workflow than in a specialist physician's workflow. PA training program structure shapes how AI competencies should be incorporated — the Gomes, Eiseman, and Joseph article in the Journal of Physician Assistant Education (June 2025) on integrating AI into PA education is the most current published treatment of how PA programs are responding.
The professional-society anchor is the AAPA. The AAPA House of Delegates passed its first resolution on AI in 2024-2026, authored by Phoebe Kubo as Chief Student Delegate. The JAAPA (Journal of the American Academy of Physician Associates) May 2025 article on AI in cardiovascular practice and the broader JAAPA AI coverage that has accelerated in 2024-2025 are the profession's published voice on the topic. An applicant who has read or referenced any of these is doing different work than one who has not.
What strong responses do
- Engage with at least one PA-specific structural feature operationally. The collaborative practice model, the supervising physician relationship, the generalist scope, the lateral mobility, the team-based care framework. The applicant shows how that feature shapes the technology question, not just that they know the feature exists.
- Identify at least one technology question that is genuinely different for PAs than for other clinicians. Accountability for AI-assisted decisions under the supervising-physician model. The challenge of using unfamiliar AI tools when changing specialties. The interaction between generalist breadth and specialist AI tools. The training-pipeline question of when AI literacy should be introduced.
- Reference the PA profession's own AI conversation. AAPA HOD resolutions, PAEA guidance, JAAPA articles, JPAE training-integration research, specific PA program AI policies. The reference does not have to be deep, but it has to be substantive — a name-only mention with no engagement reads as performance.
- Make the closing dependent on PA practice intent. The forward vision in the closing paragraph follows from the PA role specifically, not from a generic future-clinician aspiration.
What mediocre responses miss
- PA appears only as a label in the closing. The body of the essay reads as a generic future-health-professional response, and the word "PA" only enters in the last paragraph.
- One PA-specific concept is referenced but without operational meaning. "I want to be a PA because of the collaborative practice model" is a statement, not engagement. The strong version shows what collaborative practice looks like in the technology question being discussed.
- The essay would still read 90% the same if rewritten for medical school or NP school. This is the rewrite test. A mediocre essay survives the test by accident — by happening to mention PA in places — but the reasoning would not change.
What weak responses look like
- Profession-agnostic essay that names PA only once. The body is interchangeable with any other health-profession application.
- Generic "future clinician" or "future provider" language used consistently. The applicant has not committed to PA-specific framing.
- No engagement with PA-specific structural features anywhere. Collaborative practice, supervising physician, generalist scope, lateral mobility, team-based care — none of these appear.
Question 6: What is the applicant's plan for continued development?
What this is really asking
The CASPA prompt's verb is "How should future PAs learn to use these tools thoughtfully." The word "learn" is doing work. The reader is asking what the applicant's plan is for continuing to develop technology judgment as a future PA student and practitioner — not as a closing platitude, but as a substantive forward vision that follows from the rest of the essay.
Why it matters
This dimension separates the applicants who treat the essay as a finished position from the applicants who treat it as a starting point. The technology landscape will change during the applicant's PA training and early career — what counts as "current" clinical AI in 2026 will not be current in 2030. The strongest applicants demonstrate awareness that their current view is provisional, that PA training is when the foundation gets built, and that ongoing professional engagement is how a PA stays current in a moving field.
The institutional framing is the AAMC's Principle 4 in the AI in Medical Education principles: "Ensure Education and Training." The PA-specific framing is Gomes, Eiseman, and Joseph's JPAE June 2025 article on integrating AI into PA education. The professional-engagement framing is the AAPA House of Delegates work and the AAPA's own AI policy positions. The strongest applicants name at least one of these resources operationally — not as a name-drop, but as a specific commitment to read, attend, or engage.
Realistic forward-engagement options for an entering PA student include: continuing engagement with AAPA AI policy positions, reading the American Medical Association's Augmented Intelligence resources and the AAMC AI in Medical Education principles, following JAAPA for clinical-AI articles, tracking the Coalition for Health AI (CHAI) Blueprint and the Joint Commission RUAIH framework joint guidance from September 2025, reading Eric Topol's Deep Medicine, seeking out PA programs with explicit AI competency tracks, and seeking rotations in clinical settings where the technology is actually deployed.
What strong responses do
- Name at least one specific learning approach with operational meaning. Not "I will continue learning" but "I plan to follow JAAPA's clinical AI coverage and engage with the AAPA HOD AI policy work as a student delegate" or "I plan to seek rotations in both high-tech academic settings and low-resource community clinics so I can see how the same technology lands in different patient populations."
- Name at least one specific clinical domain or population where the applicant wants to apply this thinking. Primary care in rural FQHCs, ED triage in urban safety-net hospitals, endocrinology with CGM-using diabetic populations, family medicine with elderly patients navigating telemedicine. The specificity makes the trajectory plausible.
- Connect the forward vision to the position the essay has taken. The applicant who argued for thoughtful skepticism of clinical decision support tools should not pivot in the closing to enthusiastic technology adoption. The applicant who argued for ambient AI scribes freeing PA attention should not pivot to skepticism. The closing should land where the body argues.
- Demonstrate awareness that the position is provisional. "My current view is X, and I expect PA training to test and refine it" is more sophisticated than "I have figured this out and will execute on the plan."
What mediocre responses miss
- Generic "I will continue to learn about AI" closer with no specifics. No named resource, no specific clinical domain, no operational learning approach.
- Forward vision that contradicts the essay's earlier reasoning. The body argued one position, the closing pivots to a different one. The reader notices the inconsistency.
- Aspirational without grounding in current exposure. The applicant says they will engage with the AAPA HOD work or read JAAPA without showing any current curiosity about either.
What weak responses look like
- No forward engagement at all. The essay ends with a clinical observation and never addresses how the applicant's thinking will continue to develop.
- The closing pivots to a generic "Why PA" platitude. The applicant runs out of essay and falls back on inheritance from their personal statement instead of landing the technology argument.
- Career intent so vague it could apply to any future PA. "I want to provide compassionate care to underserved populations" is a statement that survives any context. It is not a forward vision.
What this all adds up to
The CASPA AI and Technology essay is not testing whether you can write about technology. It is testing whether you can think clinically about emerging tools, hold a tradeoff that resists the cliches, engage the equity clause that most applicants miss, integrate the patient experience into the technology question, reason as a future PA — not as a future generic health professional — and articulate a plausible plan for continued development. The strongest responses do all six of these things in 2,500 characters and read as the work of someone who actually saw a clinical AI tool in use and thought hard about what it meant.
The mediocre responses are the ones that address the surface — they discuss AI in healthcare, they assert the cliche about the human touch, they mention equity in the closing — but never go below it. The weak responses are the ones that read as if the applicant could have submitted the same essay to any health profession application without modification.
The reader is not looking for a tech expert. The reader is looking for a future PA who has been paying attention.
References
This article draws on the following primary, adjacent, and advisor sources. Sources are categorized by trust tier: T1 official primary, T2 officially adjacent (peer-reviewed or institutional), T3 established advisors with track records.
T1 — Official primary
- PAEA Admission Suite of Products Updates (2026-27 CASPA Cycle) — paeaonline.org — verbatim Situational Decision-Making Question prompt, optional status confirmation, replacement of COVID-19 Impact Essay
- PAEA Policies & Procedures for the PAEA Admissions Suite of Products 2026-27 (PDF, October 2025) — the binding cycle policy document; also formally ratifies PAEA's stance that AI detection tool output alone is insufficient grounds for a CASPA investigation
- PAEA "What Your Program Should Know About AI and Admissions" (Patrick McArdle, September 2023) — paeaonline.org — PAEA's first published statement on generative AI in admissions; quoted Donna Murray on health equity as a CASPA design principle
- AAMC Principles for the Responsible Use of AI in and for Medical Education (V1.0 January 2025; V2.0 July 2025) — aamc.org — seven principles authored by an AAMC review committee; Principle 1 (Human-Centered Focus) and Principle 3 (Equal Access to AI) are the institutional anchors for the CASPA prompt's framing
- American Medical Association — Augmented Intelligence in Medicine — ama-assn.org — the "augmented not artificial" framing that maps directly to the CASPA prompt's "use these tools thoughtfully" language
- AMA AI Principles (2023 update) — ama-assn.org — the most operationally useful ethics framework for a 2,500-character essay
- WHO Ethics and Governance of Artificial Intelligence for Health (2021) — who.int — six global health ethical principles; equity-oriented framing
- FDA — Permits Marketing of First Autonomous AI Diagnostic (IDx-DR) — fda.gov
- Coalition for Health AI (CHAI) Blueprint for Trustworthy AI in Healthcare — coalitionforhealthai.org — seven principles for AI governance in health systems; updated September 2025 with The Joint Commission RUAIH joint guidance
T2 — Officially adjacent
- Wong, A. et al. "External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients." JAMA Internal Medicine, 2021 — jamanetwork.com — the canonical published failure of the Epic Sepsis Model: 67% of sepsis cases missed, 88% false alarm rate. The reference admissions readers most often know by name when an applicant cites clinical decision support
- Gomes, Eiseman, Joseph. "The Integration of Artificial Intelligence in Physician Assistant Education." Journal of Physician Assistant Education, June 2025 — the most current peer-reviewed treatment of how PA training programs are responding to AI
- JAAPA "Artificial intelligence in cardiovascular practice" (May 2025) — the Journal of the American Academy of Physician Associates has accelerated AI clinical coverage in 2024-2025
- Wolters Kluwer "AI in PA Practice" survey (203 practicing PAs, September 2025; published December 11, 2025) — 56% of practicing PAs use AI in clinical practice, 87% feel they need more AI training, 40% name AI as a major factor reshaping the profession over the past three years
- Topol, Eric. Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. Basic Books, 2019 — basicbooks.com — the "deep phenotyping, deep learning, deep empathy and connection" framework whose third element maps directly to the CASPA prompt's "human-centered relationships"
T3 — Established advisors
- Phoebe Kubo (HowMedWorks). "How to Answer the New Technology in Healthcare Essay." February 7, 2026 — howmedworks.com/caspa-technology-essay/ — the highest-credibility advisor article on this specific essay. Author is the AAPA House of Delegates Chief Student Delegate 2024-2026 and authored an AAPA HOD resolution on AI. Identifies the four mandatory layers: education/training, ethics, balancing tech and touch, and equity/adaptability. Verbatim prompt and 2,500-character limit confirmation
- PrePA Clinic. "The New CASPA AI Essay (2026–2027 Cycle): What Pre-PAs Need to Know + How to Write It." March 2026 — prepaclinic.com — paraphrased prompt, five suggested topics, equity emphasis
- The PA Platform (Savanna Perry) — thepaplatform.com — established PA admissions consultant whose Life Experiences essay guidance is the closest sibling content for the new essay
- The Physician Assistant Life (Paul the PA) — general CASPA AI policy commentary, contextual
For the policy side of how CASPA's 2026-27 AI certification works alongside this essay, see our CASPA AI Certification Decoded clause-by-clause read. For the four-system comparison of AMCAS, AACOMAS, CASPA, and TMDSAS AI policies, see Can You Use ChatGPT for Your Medical School Application?. For the broader CASPA writing guide on this essay, see The New CASPA AI and Technology Essay (2026-2027): Verbatim Prompt, the Hidden Test, and 7 Angles That Work.
Get your CASPA AI and Technology essay reviewed against these dimensions
Writing this essay well is hard because the surface trap is real — most applicants write technically correct essays that read identically to every other application in the pile. The dimensions above are the questions experienced PA admissions readers are actually asking. The strongest responses ground in a specific clinical encounter, engage the equity clause as more than a closing aside, hold the AI tradeoff without collapsing into enthusiasm or pessimism, integrate the patient experience, reason as a future PA specifically, and articulate a plausible plan for continued development.
GradPilot reviews CASPA AI and Technology essays against exactly these dimensions. Upload your draft and get specific, actionable feedback on whether your clinical anchor is landing, whether your equity engagement reads as substantive, whether your tradeoff is held, whether the patient is present in the essay, whether your PA-role specificity passes the rewrite test, and whether your forward vision follows from the rest of the essay. Two free essay reviews per day, no credit card required. Your essay deserves a reader who has read everything in the field. That is what we built GradPilot to be.