Tan Hung Pham, Aloysius Ang, Victor T C Koh, Ching-Yu Cheng, Michael J A Girard; Deep Learning Algorithms to Isolate and Quantify the Structures of the Anterior Segment in Optical Coherence Tomography Images. Invest. Ophthalmol. Vis. Sci. 2019;60(9):1482. doi: https://doi.org/.
To develop deep learning algorithms that can: 1) reliably detect the scleral spur location (SSL) in anterior segment optical coherence tomography (ASOCT) scans, 2) segment anterior tissues and important regions (corneoscleral shell, iris, and anterior chamber), and 3) extract 2D structural parameters.
128 horizontal B-scans were acquired through the center of the anterior segment for both eyes of 261 subjects using ASOCT (CASIA, Tomey). All images were corrected for optical distortion. 649 B-scans were used for training and 157 for testing in the SSL detection model, while 120 B-scans were used for training and 50 for testing in the ASOCT segmentation model. To detect the SSL, we developed a custom deep learning algorithm to identify point landmarks: a heat map approach was first used to segment the region around the SSL, and its exact location was then obtained using image moments. We used inter- and intra-observer tests to validate the model's performance. An additional custom deep learning architecture (full-resolution residual U-Net) was then used to segment tissues in the ASOCT images. Segmentation performance was assessed using the Dice coefficient (vs. manual segmentations). From the segmented tissues and the SSL, we developed post-processing algorithms to automatically extract the following parameters: Trabecular Iris Space Area (TISA), Angle Opening Distance (AOD), iris measurements, Anterior Chamber (AC) Area, Pupil Diameter, Anterior Chamber Width (ACW), Anterior Chamber Depth (ACD), Lens Vault (LV), and Cornea Thickness (Figure 1). The extracted parameters were compared with manual measurements for 30 images using the mean absolute percentage error (MAPE).
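The moment-based step above can be sketched as follows: the centroid of a predicted heat map is computed from the raw image moments M00, M10, and M01. This is a minimal illustration only; `heatmap_to_landmark` is a hypothetical helper, not the authors' code.

```python
import numpy as np

def heatmap_to_landmark(heatmap):
    """Reduce a predicted 2D heat map to a single (x, y) landmark
    using raw image moments: centroid = (M10 / M00, M01 / M00)."""
    h = np.asarray(heatmap, dtype=float)
    m00 = h.sum()  # zeroth moment: total mass
    if m00 == 0:
        raise ValueError("empty heat map")
    ys, xs = np.mgrid[0:h.shape[0], 0:h.shape[1]]
    cx = (xs * h).sum() / m00  # M10 / M00
    cy = (ys * h).sum() / m00  # M01 / M00
    return cx, cy

# Toy example: a small symmetric blob centred at (x=3, y=2)
hm = np.zeros((5, 6))
hm[2, 3] = 4.0
hm[1, 3] = hm[3, 3] = hm[2, 2] = hm[2, 4] = 1.0
print(heatmap_to_landmark(hm))  # -> (3.0, 2.0)
```

In practice the same computation is available as `cv2.moments` in OpenCV; subpixel centroids like this are what make a heat-map approach more precise than taking the argmax pixel.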
Our approach to SSL detection showed superior results (error with respect to manual marking: 114.4±77.0 µm) over a traditional regression approach (error: 175.4±103.8 µm). Based on the inter-observer test, there was no significant difference (p>0.05) between machine-human and human-human performance. For ASOCT segmentation, Dice scores were 92.5±1.4%, 97.7±0.5%, and 95.1±1.0% for the iris, anterior chamber, and corneoscleral shell, respectively. The average MAPE across all parameters was 15.7±9.2%.
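The two evaluation metrics reported here are standard; a minimal sketch of each is given below (hypothetical helper names, not the authors' implementation).

```python
import numpy as np

def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

def mape(auto, manual):
    """Mean absolute percentage error of automatic vs. manual parameter values."""
    auto = np.asarray(auto, dtype=float)
    manual = np.asarray(manual, dtype=float)
    return 100.0 * np.mean(np.abs(auto - manual) / np.abs(manual))

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # -> 0.5
print(mape([110, 95], [100, 100]))       # -> 7.5
```

Dice rewards pixel-level overlap with the manual segmentation, while MAPE compares the derived clinical parameters (TISA, AOD, ACD, etc.) directly against manual measurements.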
We offer a solution to automatically assess the structures of the anterior segment using custom deep learning algorithms applied to ASOCT scans. This could help in the management of angle closure glaucoma.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.