ARTIFICIAL INTELLIGENCE

Personalized Dietary Recommendations for Stone Prevention Using ChatGPT: A Step-by-Step Guide
By: Satomi Kiriakedis, BS, Oregon Health & Science University, Portland; Ian Metzler, MD, MTM, Oregon Health & Science University, Portland | Posted on: 19 Jan 2024
Urology stands to benefit from the integration of artificial intelligence (AI), particularly for time-consuming tasks like interpreting 24-hour urine collections in the management of nephrolithiasis. Although urine collections are recommended by guidelines, the complexity of laboratory interpretation, in addition to the time investment required for dietary counseling, has limited their use and compromised patient care.1,2 AI tools such as multimodal large language models (MLLMs) offer urologists resources to interpret these tests more efficiently and effectively. This technology can simplify the process and assist in providing personalized dietary counseling, thereby improving the quality of care. MLLMs also make it possible to create personalized recommendations that account for a patient’s native language, education level, and current food preferences. This article serves as a guide to using a specific MLLM, ChatGPT-4, to convert individual lab results into customized dietary recommendations for kidney stone patients.
Step-by-Step Guide
To maximize the effectiveness of MLLMs, crafting the right prompt is essential. Prompt engineering—designing targeted prompts that steer the AI’s capabilities—can significantly influence the quality of the responses. This deliberate process requires an understanding of the model’s capabilities and limitations, and represents an area of ongoing research.3 Below are steps for crafting a prompt, with an accompanying example in the Figure and a brief code sketch at the end of the guide.
- Preparation:
- Create an account or log in at https://openai.com/.
- Understand ChatGPT’s capabilities and limitations in relation to your clinical needs.
- Compile pertinent data, like laboratory findings and patient history, avoiding extraneous information that may mislead the model.
- Crafting the Prompt:
- Use precise language in your prompt and state your expectations clearly. For instance, “Provide dietary recommendations for reducing stone recurrence in a patient with hyperoxaluria.”
- Contextualize the response by specifying what role you would like it to take, in this case “Act as a urologist and dietician….”
- The tone of your prompt should reflect the desired tone of the AI’s response. You can also clarify the audience of the response, including preferred language and education level of your patient.
- Include patient dietary preferences, such as their most consumed cuisines, for more personalized dietary suggestions.
- Convert lab values into clinical descriptors, like referring to “340 mg calcium excreted per day” as “hypercalciuria.”
- Engaging With ChatGPT:
- Initiate a new chat to avoid influence from past sessions (Figure).
- Send the prompt to ChatGPT.
- Evaluate the response for clinical relevance, completeness, accuracy, and appropriateness to your patient’s situation.
- Responses can sometimes seem correct at first glance but lack important elements and therefore need editing.
- Iterative Refinement:
If the response is inadequate, consider the following strategies (Figure):
a. Edit the original prompt and resubmit. This works best when you need to provide more context or modify the goal.
b. Regenerate the response for minor adjustments, such as rephrasing.
c. Use follow-up messages to emphasize certain points, correct errors, or request specific adjustments.
- Clinical Interpretation and Implementation:
- Assess the advice within the clinical context of your patient.
- Check the AI’s suggestions against current guidelines and evidence-based practices.
- Integrate the AI-generated input with your expertise, patient-specific factors, and established practices.
- Inspect the prompts for any biases and adjust as appropriate.
- Remember that AI-generated advice should supplement, not supplant, professional clinical judgment.
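For those who prefer to script this workflow rather than work in the web interface, the sketch below assembles a prompt along the lines described above, sends it through the OpenAI Python SDK, and then refines the answer with a follow-up message. The model name ("gpt-4"), prompt wording, and lab descriptors are illustrative assumptions rather than recommendations; confirm current API details against OpenAI’s documentation and review every output before any clinical use.

```python
# A minimal sketch of the prompting workflow above, run against the OpenAI API
# instead of the web interface. Model name, prompt wording, and lab descriptors
# are illustrative assumptions; always verify the output clinically.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# Contextualize the role, audience, and tone ("Crafting the Prompt").
system_msg = (
    "Act as a urologist and dietician. Write for a patient at an 8th-grade "
    "reading level who prefers Mexican cuisine. Keep the tone supportive."
)

# Lab values already converted to clinical descriptors; no identifiers included.
user_msg = (
    "A patient with recurrent calcium oxalate stones has hypercalciuria and mild "
    "hyperoxaluria on a 24-hour urine collection. Provide dietary recommendations "
    "for reducing stone recurrence."
)

messages = [
    {"role": "system", "content": system_msg},
    {"role": "user", "content": user_msg},
]

# "Engaging With ChatGPT": send the prompt and review the reply.
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)

# "Iterative Refinement": keep the history and send a follow-up adjustment.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "Emphasize fluid intake goals and limit the list to 5 recommendations."})
refined = client.chat.completions.create(model="gpt-4", messages=messages)
print(refined.choices[0].message.content)
```

The same system/user/assistant message structure underlies the ChatGPT web interface, so the prompting principles in the guide transfer directly; only the interface differs.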
Limitations
While ChatGPT is a sophisticated tool with great potential, it has limitations that are crucial to acknowledge, especially in medicine. Despite the tendency to anthropomorphize such AI systems, it’s essential to remember that MLLMs are computer programs whose knowledge and capabilities are essentially limited to 2 things: the data they were trained on, and the immediate context provided in the chat. They don’t “understand,” but merely predict the next sequence of words based on learned patterns.4 They also lack a sense of self or identity; they do not differentiate between who is speaking and can confuse user inputs with their own responses.5
The limitations of MLLMs are particularly pronounced when faced with topics that were underrepresented in their training data. In such instances, the AI may generate responses that are incorrect or nonsensical—a phenomenon known as “hallucination.”6 It lacks awareness to recognize gaps in its knowledge, which can be misleading. This characteristic can be particularly challenging in specialized fields like urology, where the model may have limited exposure and therefore less accuracy.
Another consideration is that MLLMs like ChatGPT are designed for language processing and not numerical analysis. They lack the capability to interpret or manipulate numbers with the precision required in medicine, such as understanding the intricacies of 24-hour urine collections.7
Additionally, users must be mindful of the AI’s token limitations. Each model can process only a certain number of tokens—each representing up to several characters—and exceeding this limit (eg, 4096 tokens, or roughly 2000 words, for GPT-4) causes the AI to lose access to the excess information.8 This can lead to a lack of comprehension of the full context, which is particularly problematic when dealing with longer documents like medical records.
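As a rough preflight check, token counts can be estimated locally with OpenAI’s tiktoken library before a long prompt is submitted. This is a minimal sketch; the 4096-token figure is simply the limit cited above, and the actual context window for your model should be confirmed in OpenAI’s documentation.

```python
# Minimal sketch: estimate how many tokens a prompt will consume.
# Assumes the tiktoken package is installed (pip install tiktoken);
# the 4096-token limit below is the figure cited in the text.
import tiktoken

def count_tokens(text: str, model: str = "gpt-4") -> int:
    """Return the number of tokens the given model's encoder produces for the text."""
    encoding = tiktoken.encoding_for_model(model)
    return len(encoding.encode(text))

prompt = "Act as a urologist and dietician. ..."  # the assembled prompt text
used = count_tokens(prompt)
print(f"{used} of 4096 tokens used")
```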
A final critical consideration when using MLLMs is data privacy. Since interactions with the AI can be stored and used for future training, it is essential to deidentify patient information to maintain confidentiality.9
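One simple precaution is to strip obvious identifiers from any text before it reaches the prompt. The sketch below uses a few hypothetical regular-expression patterns purely for illustration; pattern matching alone is not a compliant deidentification method and should not replace your institution’s privacy processes.

```python
# Illustrative only: redact a few common identifier patterns before sending text
# to an external service. This does NOT constitute HIPAA-grade deidentification.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),              # Social Security numbers
    (re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"), "[DATE]"),       # calendar dates
    (re.compile(r"\bMRN[:#]?\s*\d+\b", re.IGNORECASE), "[MRN]"),  # medical record numbers
]

def redact(text: str) -> str:
    """Replace matched identifier patterns with generic placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(redact("MRN: 1234567, sample collected 03/14/2023: urine calcium 340 mg/day"))
```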
Conclusion
ChatGPT-4 and other MLLMs offer promising avenues to alleviate the burden on physicians, providing analytical assistance for lab interpretations and improving personalized and culturally appropriate patient management recommendations. As AI increasingly intersects with health care, the importance of rigorous oversight remains vital. It is essential to navigate the integration of AI into clinical practice with a well-informed and judicious approach, ensuring that the benefits of technologies like ChatGPT-4 enhance, rather than replace, the human aspect of professional medical decision-making. As the AI landscape evolves through updates from OpenAI and other MLLM services like Google’s Bard and Meta’s LLaMA, the relative advantages of each platform may shift, bringing new benefits or capabilities. It is essential for urologists to stay current with these changes, evolving their application strategies to harness the technologic advancements to enhance patient care.
- Pearle MS, Goldfarb DS, Assimos DG, et al. Medical management of kidney stones: AUA guideline. J Urol. 2014;192(2):316-324.
- Milose JC, Kaufman SR, Hollenbeck BK, Wolf JS, Hollingsworth JM. Prevalence of 24-hour urine collection in high risk stone formers. J Urol. 2014;191(2):376-380.
- Giray L. Prompt engineering with ChatGPT: a guide for academic writers. Ann Biomed Eng. 2023;51(12):2629-2633.
- OpenAI. GPT-4 technical report. 2023. arXiv:2303.08774.
- Gerganov G. llama.cpp. 2022. GitHub. https://github.com/ggerganov/llama.cpp/blob/master/README.md
- Ji Z, Lee N, Frieske R, et al. Survey of hallucination in natural language generation. ACM Comput Surv. 2023;55(12):1-38.
- Chen L, Zaharia M, Zou J. How is ChatGPT’s behavior changing over time? 2023. arXiv:2307.09009.
- Raf. What are tokens and how to count them? OpenAI. https://help.openai.com/en/articles/4936856-what-are-tokens-and-how-to-count-them
- Natalie. What is ChatGPT? OpenAI. https://help.openai.com/en/articles/6783457-what-is-chatgpt