
Artificial Intelligence and Residency Applications: Is This the End of the Personal Statement?

By: Adam B. Weiner, MD, David Geffen School of Medicine, University of California, Los Angeles; Jonathan Bergman, MD, MPH, David Geffen School of Medicine, University of California, Los Angeles | Posted on: 05 Jan 2024

The urology application process has evolved rapidly over the past few years. Formerly, United States Medical Licensing Examination Step 1 scores were used by some programs to screen applicants for interviews and were considered among the most important factors for assessing candidates. Amid concerns about the burden of exam preparation and a desire for a more holistic application process, Step 1 was made a pass/fail exam in January 2022. During the COVID pandemic, interviews were changed from in person to 100% virtual, primarily to reduce virus transmission. However, the virtual requirement was maintained to reduce the financial burden on applicants and ensure equity among the applicant pool. More recently, preference signaling has received positive feedback from both programs and applicants, with the aim of providing an equitable system for students to demonstrate programmatic interest in place of other methods that might drive socioeconomic disparities (eg, away rotations and mentor or home department connections).

These changes were reactions intended to improve the application process, and most were meant to encourage a holistic approach to ranking. With less focus on exam scores and the potential loss of interpersonal connections with virtual interviews, greater attention will be paid to other application components that can highlight the individual traits of each student, including letters of recommendation and the personal statement. The personal statement, in particular, is poised to grow in importance. In the past, this part of the application varied from incredibly intimate and telling of an applicant’s fit to “cookie cutter” with repetitive platitudes. With the changes noted above, the personal statement stands out as an applicant’s opportunity to showcase their journey to medicine and urology in a format that can be honed into their own voice.

However, just as the application process changed to accommodate the COVID pandemic and promote equity, so too will the writing and interpretation of personal statements change with the arrival of artificial intelligence (AI) language models such as ChatGPT. ChatGPT has already been shown to produce credible personal statements for residency applications that can be indistinguishable from those written by humans (Figure).1,2 With this new tool comes the need to forecast potential pros and cons so that our specialty can adapt appropriately.

Figure. Artificial intelligence (AI)–generated image of AI writing a personal statement.

Pro 1: Access to tools such as ChatGPT is equitable. Thus, resources spent on writing support for personal statements and access to mentors or other contacts who can assist with the writing process will play less of a role in creating disparities in the application process.

Con 1: Personal statements leveraging similar AI language models may start to seem homogeneous—potentially diminishing the worth of personal statements to programs trying to discern applicant fit. This could cause a subsequent need for programs to rely more on objective measures of applicant quality such as medical school reputation, further disadvantaging certain applicants. If students are going to use AI to write their entire personal statements, we might as well just call them statements.

Pro 2: Medical students applying to subspecialties are usually extremely busy at the time of application drafting, finishing clerkships, away rotations, and electives. AI language models can help medical students, and many others, save time initiating the writing process. This time can then be leveraged for balancing the many expectations placed on medical students transitioning out of their final year of school. Importantly, AI used in this capacity should generate ideas or narratives that might be useful, with the applicant drafting only what is actually personally relevant to them (see Con 1).

Con 2: Everyone is prone to mistakes using AI language models. Recently, a peer-reviewed publication was retracted after clear, accidental evidence of copying and pasting from ChatGPT was noted.3 This mistake was not caught during the peer-review process or copyediting. With so many applicants every year, these mistakes are likely to manifest in awkward ways in personal statements that could greatly jeopardize an applicant’s odds of matching—much worse than an honest typographical error.

Pro 3: International medical graduates, or others who might benefit from English language support, could leverage AI language models to improve writing clarity. This can help level the playing field for worthy applicants whose first learned language was not English, for instance. As noted in Con 1, however, these tools should be used only to the extent that they do not misrepresent the applicant in written words. Otherwise, if all students use the most common grammar tools, such as Grammarly, we might encounter issues with homogenizing individual written voices.

Con 3: Essay materials generated by AI might seem appealing for applicants to include, even if the words don’t truly represent the applicant. This could lead to inaccurate judgment of fit by programs, which could harm chances of interview offers at good-fit programs or increase the likelihood of offers from poor-fit programs. Again, personal statements should be kept personal.

When it comes to personal statements, many are familiar with an 8:1:1 ratio: 8 out of 10 personal statements are “OK” and don’t really hurt or help an applicant’s appeal; 1 out of 10 tends to decrease an applicant’s ratings; and 1 out of 10 helps the applicant’s ranking. While this ratio might vary somewhat, we suggest that, going forward, both applicants and programs view the personal statement as a tool to help communicate individual passions. Applicants should take pride in these written works; this alone can prevent many of the issues noted above.

There is little doubt AI will change the landscape of urology residency personal statements. Although AI-driven technologies are improving objectivity, efficiency, and data-driven decision-making, it is unlikely that they will entirely replace the personal statement or the human component of the selection process. Guidance regarding appropriate and ethical use of AI should be taught to medical students. Program directors, applicants, and the medical education community should carefully manage these changes as the landscape evolves to ensure a fair and comprehensive evaluation of aspiring physicians. Ultimately, we should continue to strive for the personal statement to remain personal.

  1. Johnstone RE, Neely G, Sizemore DC. Artificial intelligence software can generate residency application personal statements that program directors find acceptable and difficult to distinguish from applicant compositions. J Clin Anesth. 2023;89:111185.
  2. Patel V, Deleonibus A, Wells MW, Bernard SL, Schwarz GS. The plastic surgery residency application in the era of ChatGPT: a personal statement generated by artificial intelligence to statements from actual applicants. Ann Plast Surg. 2023;91(3):324-325.
  3. Conroy G. Scientific sleuths spot dishonest ChatGPT use in papers. Nature. 2023. doi:10.1038/d41586-023-02477-w.
