JU INSIGHT Automated Identification of Key Steps in Robotic-Assisted Radical Prostatectomy Using Artificial Intelligence
By: Abhinav Khanna, MD, Mayo Clinic, Rochester, Minnesota; Alenka Antolin, MD, Theator, Inc, Palo Alto, California; Omri Bar, Theator, Inc, Palo Alto, California; Danielle Ben-Ayoun, MD, Theator, Inc, Palo Alto, California; Maya Zohar, Theator, Inc, Palo Alto, California; Stephen A. Boorjian, MD, Mayo Clinic, Rochester, Minnesota; Igor Frank, MD, Mayo Clinic, Rochester, Minnesota; Paras Shah, MD, Mayo Clinic, Rochester, Minnesota; Vidit Sharma, MD, Mayo Clinic, Rochester, Minnesota; R. Houston Thompson, MD, Mayo Clinic, Rochester, Minnesota; Tamir Wolf, MD, PhD, Theator, Inc, Palo Alto, California; Dotan Asselmann, Theator, Inc, Palo Alto, California; Matthew Tollefson, MD, Mayo Clinic, Rochester, Minnesota | Posted on: 19 Apr 2024
Khanna A, Antolin A, Bar O, et al. Automated identification of key steps in robotic-assisted radical prostatectomy using artificial intelligence. J Urol. 2024;211(4):575-584.
Study Need and Importance
Vast amounts of surgical video footage are being generated every day during minimally invasive surgeries across the globe. However, raw surgical video is unstructured, unlabeled, and mostly unusable as a data source. Thus, the immense potential of surgical video remains largely untapped. We aimed to create a novel artificial intelligence (AI) computer-vision algorithm for automatically identifying and annotating the various surgical steps of robotic-assisted radical prostatectomy (RARP).
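To make the aim concrete, the output of such an algorithm can be pictured as a time-stamped annotation track laid over the video: every sampled moment receives a surgical step label, and consecutive identical labels collapse into labeled segments. The sketch below is purely illustrative and is not the published model, which operates on raw video frames; the helper name and step labels are hypothetical placeholders.

```python
from itertools import groupby

def collapse_frame_labels(frame_labels, fps=1.0):
    """Collapse a per-frame sequence of predicted step labels into
    (start_sec, end_sec, step) annotation segments.

    frame_labels: list of step names, one per sampled frame.
    fps: frames sampled per second of video.
    """
    segments, idx = [], 0
    for step, run in groupby(frame_labels):
        n = len(list(run))
        segments.append((idx / fps, (idx + n) / fps, step))
        idx += n
    return segments

# Toy example: a few sampled frames from a hypothetical RARP video.
labels = (["bladder neck dissection"] * 3
          + ["vesicourethral anastomosis"] * 4
          + ["final inspection"] * 2)
for start, end, step in collapse_frame_labels(labels, fps=1.0):
    print(f"{start:6.1f}s - {end:6.1f}s  {step}")
```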
What We Found
We used a cohort of 474 full-length RARP videos, each of which was manually annotated to identify the key surgical steps being performed. This labeled dataset was then used to train a novel AI computer-vision algorithm to automatically predict the step of surgery being performed at any given point during the surgical video. When compared against human video annotations as the gold standard, our AI algorithm achieved 92.8% overall accuracy across full-length videos. AI performance was highest for the vesicourethral anastomosis step (97.3%) and lowest for the final inspection step (76.8%).
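As a rough sketch of this kind of evaluation (assuming a frame-level comparison and per-step accuracy defined as the fraction of gold-standard frames of that step labeled correctly; the paper's exact metric definitions may differ), agreement with human annotations could be computed as follows. All labels and numbers below are hypothetical.

```python
from collections import defaultdict

def step_accuracies(predicted, gold):
    """Overall and per-step frame-level agreement between model
    predictions and human (gold-standard) step annotations."""
    assert len(predicted) == len(gold)
    correct = 0
    per_step = defaultdict(lambda: [0, 0])  # step -> [correct, total]
    for p, g in zip(predicted, gold):
        per_step[g][1] += 1
        if p == g:
            correct += 1
            per_step[g][0] += 1
    overall = correct / len(gold)
    return overall, {s: c / t for s, (c, t) in per_step.items()}

# Toy example with hypothetical frame labels.
gold = ["vesicourethral anastomosis"] * 5 + ["final inspection"] * 5
pred = (["vesicourethral anastomosis"] * 5
        + ["final inspection"] * 3
        + ["vesicourethral anastomosis"] * 2)
overall, per_step = step_accuracies(pred, gold)
print(f"overall accuracy: {overall:.1%}")
for step, acc in per_step.items():
    print(f"  {step}: {acc:.1%}")
```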
Limitations
Although our dataset includes RARP video from several surgeons with varying techniques, external validation at other centers is required. Additionally, as technical refinements to RARP continue, our models will require periodic retraining to keep pace with evolving surgical techniques.
Interpretation for Patient Care
We developed a fully automated AI tool for annotation of RARP surgical video. Automated surgical video analysis has immediate practical applications in surgeon video review, surgical training and education, quality and safety benchmarking, medical billing and documentation, and operating room logistics.