Yuxuan Cheng, Zhongdi Chu, Mengxi Shen, Rita Laiginhas, Jeremy Liu, Yingying Shi, Jianqing Li, Hao Zhou, Qinqin Zhang, Giovanni Gregori, Philip J Rosenfeld, Ruikang K Wang; Automatic segmentation of geographic atrophy in OCT scans using deep learning. Invest. Ophthalmol. Vis. Sci. 2022;63(7):1040 – F0287.
To automatically identify, segment, and quantify geographic atrophy (GA) in optical coherence tomography (OCT) datasets using optical attenuation coefficients (OACs) and deep learning algorithms.
Normal eyes and eyes with GA secondary to age-related macular degeneration (AMD) were imaged with swept-source OCT using a 6 × 6 mm scan pattern. Depth-resolved OACs were calculated from the OCT scans. For each scan, three images were generated and combined into a pseudo-color composite (Figure 1): (1) the OAC-identified RPE elevation from Bruch's membrane (BM), (2) the summed OAC projection between the inner limiting membrane and BM, and (3) the sub-retinal pigment epithelium (subRPE) slab projection extending from 64 to 400 µm below BM. An attention-improved U-Net model was trained to segment the composite images, using a focal loss to better classify evolving GA lesions. For model evaluation, all GA lesions were manually labeled on the subRPE slabs by senior graders. User-friendly software incorporating the model was developed to test the algorithm in clinical settings. Model performance was evaluated using Dice similarity coefficients (DSCs). GA lesion areas were calculated and compared with manual segmentations using Pearson's correlation and Bland-Altman analyses. Both the model output and the manual outlines excluded GA lesions with a greatest linear dimension of less than 250 µm.
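The composite-input and focal-loss steps above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the function names, the per-channel min-max normalization, and the binary focal-loss form (Lin et al.'s standard formulation with a gamma exponent) are all assumptions.

```python
import numpy as np

def make_composite(rpe_elevation, oac_sum, subrpe_slab):
    """Stack three en-face maps from one OCT scan into a pseudo-color
    (three-channel) image, analogous to the composite described above.

    rpe_elevation -- OAC-identified RPE elevation from Bruch's membrane
    oac_sum       -- summed OAC projection between ILM and BM
    subrpe_slab   -- subRPE slab projection (64-400 um below BM)
    """
    def normalize(img):
        lo, hi = img.min(), img.max()
        return (img - lo) / (hi - lo + 1e-8)  # scale each channel to [0, 1]

    return np.stack(
        [normalize(m) for m in (rpe_elevation, oac_sum, subrpe_slab)],
        axis=-1,  # shape (H, W, 3)
    )

def focal_loss(p, y, gamma=2.0, eps=1e-7):
    """Binary focal loss: down-weights well-classified pixels so training
    concentrates on hard examples such as evolving GA boundaries."""
    p = np.clip(p, eps, 1 - eps)
    pt = np.where(y == 1, p, 1 - p)          # probability of the true class
    return float(np.mean(-((1 - pt) ** gamma) * np.log(pt)))
```

In a training pipeline the composite would be fed to the segmentation network and the focal loss applied to its per-pixel output probabilities.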
A set of 153 GA eyes, 30 drusen-only eyes, and 60 normal eyes was used to develop the model; this dataset was split 80:20 between training and validation. A separate set of 30 AMD eyes with GA lesions was reserved for testing. After 300 epochs of training, the model reached Dice coefficients of 0.958, 0.941, and 0.930 on the training, validation, and testing sets, respectively. On the testing set, the mean area difference between manual segmentation and the model was 0.106 mm², and the Pearson correlation was 0.957. Figure 2 shows a working example of the software on a case with GA lesions.
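The Dice coefficient and lesion-area comparisons reported above follow standard definitions; a small sketch of how they might be computed from binary segmentation masks (the helper names and the pixel-to-mm² conversion for a 6 × 6 mm scan are assumptions, not the authors' code):

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return float((2.0 * inter + eps) / (pred.sum() + truth.sum() + eps))

def lesion_area_mm2(mask, scan_mm=6.0):
    """Convert a binary GA mask from a square scan of side `scan_mm`
    into a lesion area in mm^2, assuming uniform pixel spacing."""
    h, w = mask.shape
    pixel_area = (scan_mm / h) * (scan_mm / w)
    return float(mask.astype(bool).sum() * pixel_area)
```

Areas computed this way for the model and the manual outlines can then be compared with Pearson correlation and Bland-Altman analysis, as described in the methods.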
The proposed model using composite color images derived from OCT scans effectively and accurately identified, segmented, and quantified GA lesions.
This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.