
ARTIFICIAL INTELLIGENCE

Ethical Issues Regarding the Use of Artificial Intelligence

By: Inés Rivero Belenchón, PhD, Virgen del Rocío University Hospital, Seville, Spain | Posted on: 20 Feb 2024

The use of artificial intelligence (AI) in medicine has exponentially increased in recent years, with a substantial increase in urology. During this time, some ethical issues have arisen regarding the use of this technology. These issues include patient safety, cybersecurity, transparency and interpretability of the data, inclusivity and equity, fostering responsibility, and the preservation of providers’ decision-making autonomy.1

To best understand the ethical issues regarding AI, we first have to know how it works. AI is capable of executing activities traditionally carried out by humans, using algorithms and computational models to glean insights from extensive data sets and subsequently making predictions or decisions based on this acquired knowledge.2 Large language model (LLM) tools, which include well-known platforms such as ChatGPT, rely on machine learning, a subdiscipline of AI. In urology, AI has been used to streamline patient workflow, improve diagnostic accuracy, enhance computer analysis of radiological and pathological images, and facilitate precision medicine through the examination of extensive “big data.”1 These applications have made it necessary to establish ethical principles to regulate its use.

With this in mind, in 2021 the WHO released a document entitled “Ethics and Governance of Artificial Intelligence for Health: WHO Guidance,”3 which was built upon the 4 core ethical principles governing health care (autonomy, beneficence, nonmaleficence, and justice). Moreover, this organization recently called for caution when using AI and warned of several risks related to its implementation4:

  • The data utilized for training AI might contain biases, resulting in the creation of misleading or inaccurate information that could potentially jeopardize health, fairness, and inclusivity.
  • LLMs generate responses that may seem authoritative and credible to end-users but are entirely incorrect in the context of health-related information.
  • LLMs may have been trained on data without providers’ prior consent for such purposes, and they may not adequately safeguard health-sensitive data.
  • LLMs can be misused to generate and disseminate highly convincing disinformation, whether in text, audio, or video form, making it challenging for the public to distinguish it from reliable health-related content.

In this context, the WHO identified 6 core principles that should be respected when implementing this technology: (1) protect autonomy; (2) promote human well-being, human safety, and the public interest; (3) ensure transparency, explainability, and intelligibility; (4) foster responsibility and accountability; (5) ensure inclusiveness and equity; and (6) promote AI that is responsive and sustainable.4

In this regard, the development of AI frameworks and their application to patients should guarantee these core principles, ensuring patient autonomy, well-being, and safety.1 This means that although AI can be used to aid diagnosis or surgical treatment, physicians remain ultimately responsible for interpreting diagnostic results and for ensuring the safe delivery of surgical care to the patient.5,6 At the same time, urologists must navigate a range of ethical issues, such as patient privacy, bias in algorithms, accountability for AI-generated diagnoses, regulation to ensure transparency and reproducibility of studies, and the implementation of LLMs.1 Regarding this last issue, Dr Cacciamani7 highlighted the importance of standard reporting guidelines for the scholarly community to prevent a modern “Tower of Babel” effect in which different parties create a variety of bespoke guidance and regulations. Consequently, a group of specialists from different fields has been convened to establish a consensus on disclosure and guidance for reporting LLM use in academic research and scientific writing.7 In short, urologists should be aware of these ethical implications and should work to mitigate potential risks while leveraging the benefits of AI in both their academic practice and clinical use.1

To sum up, this technology was born with great promise, but it also carries great risks, which demand even greater responsibility. Thanks to technology, humanity has made huge strides in communication, mobility, and research, among other areas; but because of technology we, as a society, now face serious problems such as the existence of atomic bombs or major confidentiality breaches arising from the use of social media.6 This makes it clear that ensuring a transparent and ethical way of developing a technology is not enough. It is also necessary to address its intention and objective and to regulate its use. This new challenge for the correct implementation of AI lies at the boundary of human-machine interaction, in a reciprocal questioning where new projections and exchanges arise, as the machine becomes humanized and humans become mechanized.8 In the end, to choose “good” and avoid “evil,” we need ethics.

  1. Cacciamani GE, Chen A, Gill IS, Hung AJ. Artificial intelligence and urology: ethical considerations for urologists and patients. Nat Rev Urol. 2024;21(1):50-59.
  2. Sidey-Gibbons JAM, Sidey-Gibbons CJ. Machine learning in medicine: a practical introduction. BMC Med Res Methodol. 2019;19(1):64.
  3. World Health Organization. Ethics and Governance of Artificial Intelligence for Health: WHO Guidance. World Health Organization; 2021.
  4. World Health Organization. WHO calls for safe and ethical AI for health. May 16, 2023. Accessed November 12, 2023. https://www.who.int/news/item/16-05-2023-who-calls-for-safe-and-ethical-ai-for-health#
  5. Hamet P, Tremblay J. Artificial intelligence in medicine. Metabolism. 2017;69:S36-S40.
  6. Hung AJ, Liu Y, Anandkumar A. Deep learning to automate technical skills assessment in robotic surgery. JAMA Surg. 2021;156(11):1059-1060.
  7. Cacciamani GE, Collins GS, Gill IS. ChatGPT: standard reporting guidelines for responsible use. Nature. 2023;618(7964):238.
  8. Benanti P. The urgency of an algorethics. Discov Artif Intell. 2023;3(1):11.
