AUA2022: Best Posters
Deep Learning-based 3D Annotation of Tumor Foci Within Prostate Multiparametric MRI Images
By: Ashwin Sachdeva, MBBS, PhD; Nitin Singhal, MS; Avaneesh Meena, MTech; Saikiran Bonthu, MTech; Yatin Jain, FRCR; Pedro Oliveira, FRCPath; Vijay Ramani, FRCS | Posted on: 01 Nov 2022
Introduction
Prostate cancer (PCa) is the most commonly diagnosed cancer in men and one of the leading causes of cancer death.1 T2-weighted, diffusion-weighted, and dynamic contrast-enhanced imaging constitute the standard clinical protocol for prostate MRI. A radiologist interprets the examination using the PI-RADS (Prostate Imaging-Reporting and Data System) v2 system, which estimates the likelihood of clinically significant malignancy. Despite MRI’s promise for PCa detection, performance remains limited by substantial variation between radiologists,2 limited positive predictive value (27%-58%),3 low inter-reader agreement (κ = 0.46-0.78), low sensitivity (<75%), and wide variation in reported specificity (23%-87%).4 Given these limitations of prostate MRI, a conclusive diagnosis still requires a biopsy, which may result in detection of low-risk PCa, with attendant consequences for the patient and the health service. Conversely, up to 10% of clinically significant PCa may be MRI invisible.
Radical prostatectomy specimens provide an opportunity to build models that pair aligned whole-mount images of the resected prostate with in vivo MRI, and may thus improve the overall specificity of diagnostic prostate MRI. By registering digital histology images with their matching MRI slices, tumor extent can be mapped directly from histopathological images onto MRI. This enables accurate labeling of cancer, including annotation of tumor foci that may be “invisible” on conventional multiparametric MRI (mpMRI) interpretation.
In this work, we propose a novel approach to identifying clinically significant PCa on mpMRI, based on retrospective comparison of in vivo mpMRI images with spatially concordant digitally scanned images of hematoxylin & eosin (H&E) stained prostatectomy tissue sections. We further compare labels obtained from histopathology-MRI fusion with labels marked by radiologists in a deep learning model development setting.
Method
We utilized whole-mount histopathological sections from paraffin-embedded radical prostatectomy specimens together with the corresponding diagnostic pre-biopsy mpMRI. Apex and base tissue blocks were cut perpendicular to the axial plane, with the central portion of the gland sliced in 4 mm thick sections. H&E whole-mount slides were digitized at 40 times magnification. The algorithm pipeline comprised the following steps: (1) localization of tumor foci; (2) reconstruction of H&E slides in 3D; (3) alignment of the reconstructed histology to mpMRI images; (4) labeling of aggressive PCa on mpMRI images using the reconstructed 3D histology; and (5) training of a deep learning-based model to segment aggressive PCa foci using mpMRI images and the transferred labels. The Figure illustrates the algorithm pipeline. Using this method, the extent of cancer may be mapped directly onto mpMRI, allowing correct segmentation of the voxels corresponding to tumor foci as well as identification of mpMRI-invisible lesions using radiomic features. Deep learning semantic segmentation models (U-Net with an EfficientNet-B4 encoder)5 were trained under 3 distinct labeling strategies: (1) histopathology-MRI fusion; (2) a radiologist with less than 5 years of experience; and (3) a radiologist with more than 10 years of experience. Models were trained on data from 45 patients who underwent radical prostatectomy with presurgical MRI, and data from 30 patients were used to evaluate the models. Performance of the end-to-end algorithm pipeline is presented in this work.
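Step (4), transferring labels from registered histology onto the MRI grid, is at heart a resampling problem. The sketch below is purely illustrative and is not code from the study: it assumes the histology-to-MRI registration has already produced a simple affine transform, and the function name and nearest-neighbor sampling choice are our own simplifications (a real pipeline would typically use deformable registration).

```python
import numpy as np

def transfer_labels(histo_mask, affine, mri_shape):
    """Map a binary histology tumor mask onto an MRI slice grid.

    `affine` is a 2x3 matrix taking homogeneous MRI (row, col, 1)
    coordinates to histology (row, col) coordinates; nearest-neighbor
    sampling keeps the transferred labels binary. Illustrative only.
    """
    rows, cols = np.indices(mri_shape)                       # MRI voxel grid
    coords = np.stack([rows.ravel(), cols.ravel(), np.ones(rows.size)])
    r, c = np.rint(affine @ coords).astype(int)              # MRI voxel -> histology pixel
    inside = (r >= 0) & (r < histo_mask.shape[0]) & \
             (c >= 0) & (c < histo_mask.shape[1])            # drop out-of-bounds samples
    out = np.zeros(rows.size, dtype=histo_mask.dtype)
    out[inside] = histo_mask[r[inside], c[inside]]
    return out.reshape(mri_shape)
```

For example, a diagonal affine with scale 2 models histology digitized at twice the MRI in-plane resolution: each MRI voxel looks up the nearest histology pixel at double its coordinates.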
Results
To assess 3D reconstruction performance, the volume estimated from the reconstruction was compared with that of the original prostate specimen, showing a correlation of 85%-88%. Models trained with radiologists’ labels performed worse than those trained with histopathology-MRI fusion labels: relative to histopathology labels, F1-scores were 21%-26% for the less experienced radiologist and 40%-45% for the more experienced radiologist. Agreement between predicted labels on histopathology and pathologist labels was strong (F1-score >94%). The tumor detection rate (AUC) for MRI models trained with histopathology labels was 0.93, vs 0.78-0.83 for models trained with radiologists’ labels.
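For readers unfamiliar with the reported metrics, the voxel-wise F1-score (equivalent to the Dice coefficient for binary masks) and AUC follow standard definitions, sketched below in NumPy. This is generic illustration, not code from the study; the AUC uses the Mann-Whitney rank-sum formulation and ignores tied scores for brevity.

```python
import numpy as np

def f1_score(pred, truth):
    """Voxel-wise F1 (Dice) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)      # true positive voxels
    fp = np.sum(pred & ~truth)     # false positive voxels
    fn = np.sum(~pred & truth)     # false negative voxels
    return 2 * tp / (2 * tp + fp + fn)

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank-sum
    formulation; ties in `scores` are ignored for simplicity."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = np.asarray(labels) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

An AUC of 0.93 means a randomly chosen tumor voxel receives a higher model score than a randomly chosen benign voxel 93% of the time.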
Conclusions
For prostate MRI interpretation, models trained using histopathology labels demonstrated substantially better cancer detection performance. Future work will focus on identifying radiological patterns of prognostic importance using concurrent multimodal data such as histology, genetics, and clinical factors.
- Sung H, Ferlay J, Siegel RL, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71:209-249.
- Sonn GA, Fan RE, Ghanouni P, et al. Prostate magnetic resonance imaging interpretation varies substantially across radiologists. Eur Urol Focus. 2019;5:592-599.
- Westphalen AC, McCulloch CE, Anaokar JM, et al. Variability of the positive predictive value of PIRADS for prostate MRI across 26 centers: experience of the society of abdominal radiology prostate cancer disease-focused panel. Radiology. 2020;296(1):76-84.
- Ahmed HU, El-Shater Bosaily A, Brown LC, et al. Diagnostic accuracy of multi-parametric MRI and TRUS biopsy in prostate cancer (PROMIS): a paired validating confirmatory study. Lancet. 2017;389(10071):815-822.
- Ronneberger O, Fischer P, Brox T. U-Net: convolutional networks for biomedical image segmentation. In: Navab N, Hornegger J, Wells W, Frangi A, eds. Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015. Lecture Notes in Computer Science. Vol 9351. Springer; 2015:234-241.