Yukun Guo, Tristan Hormel, Liqin Gao, Bingjie Wang, Thomas S Hwang, Yali Jia; Automated nonperfusion area segmentation on wide-field optical coherence tomography angiography using deep learning. Invest. Ophthalmol. Vis. Sci. 2020;61(7):1618.
Nonperfusion areas (NPA) detected on wide-field optical coherence tomography angiography (OCTA), constructed by montaging nasal, macular, and temporal scans, can provide additional diagnostic information in diabetic retinopathy (DR) compared with macular scans alone. However, the extended field of view makes NPA segmentation more challenging. In this study, we propose a deep learning-based algorithm to segment NPA on montaged wide-field OCTA.
We acquired 6×6-mm OCTA scans of the central macula, the adjacent temporal region, and the area centered on the optic disc with a 70-kHz commercial AngioVue OCT system (Optovue, Inc) on one eye each from 163 participants: 26 healthy controls, 30 with diabetes without retinopathy, 66 with nonproliferative DR, and 41 with proliferative DR. A deep convolutional neural network [Fig. 1D] was built to detect NPA [blue in Fig. 1E] and signal reduction artifacts [yellow in Fig. 1E] in the three regions. The input to the network comprised the inner retinal thickness map [Fig. 1A], the reflectance mean projection [Fig. 1B], and the en face angiogram of the superficial vascular complex [Fig. 1C]. Trained experts manually graded NPA and signal reduction artifacts to establish the ground truth. Five-fold cross-validation over the entire dataset was used to evaluate the algorithm's ability to detect true NPA and recognize signal reduction artifacts.
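The input assembly and evaluation protocol described above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the 304×304 grid size, function names, and random fold assignment are all assumptions for demonstration.

```python
import numpy as np

def stack_inputs(thickness, reflectance, angiogram):
    """Stack the three en face maps (inner retinal thickness, reflectance
    mean projection, superficial vascular complex angiogram) into one
    HxWx3 network input, as the abstract describes."""
    return np.stack([thickness, reflectance, angiogram], axis=-1)

def five_fold_splits(n_samples, n_folds=5, seed=0):
    """Return (train_idx, val_idx) pairs so every sample is validated once."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, n_folds)
    return [(np.concatenate(folds[:i] + folds[i + 1:]), folds[i])
            for i in range(n_folds)]

# Illustrative 6x6-mm scan grid (size is an assumption, not from the abstract).
h, w = 304, 304
x = stack_inputs(np.zeros((h, w)), np.zeros((h, w)), np.zeros((h, w)))
splits = five_fold_splits(163)  # 163 eyes, five folds
```

Each of the five folds serves once as the held-out validation set, so the reported metrics cover every eye in the dataset.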
On healthy eyes, the algorithm showed high specificity for NPA segmentation in the nasal (mean ± std, 1.00±0.00), macular (1.00±0.00), and temporal (0.93±0.06) regions. On diabetic eyes, the algorithm detected true NPA with high accuracy, with F-scores of 0.77±0.07 in the nasal, 0.83±0.08 in the macular, and 0.74±0.09 in the temporal regions. Because signal reduction artifacts and low scan quality occur frequently in the temporal region, NPA detection accuracy was lowest there [Fig. 2]. The algorithm's performance (F-score) did not correlate with DR severity in the nasal (Pearson R=0.07, p=0.78) or macular (R=0.17, p=0.35) regions, but did in the temporal region (R=-0.49, p=0.004).
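The metrics reported above can be illustrated with a short sketch: a per-pixel F-score between a predicted and a manually graded NPA mask, and the Pearson correlation used to test whether F-scores track DR severity. The toy masks below are invented for demonstration only.

```python
import numpy as np

def f_score(pred, truth):
    """Per-pixel F1 (Dice) between two boolean NPA masks."""
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 1.0

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D arrays."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum()))

# Toy 2x3 masks: 2 true positives, 1 false positive, 1 false negative.
pred = np.array([[1, 1, 0], [0, 1, 0]], dtype=bool)
truth = np.array([[1, 0, 0], [0, 1, 1]], dtype=bool)
f = f_score(pred, truth)  # 2*2 / (2*2 + 1 + 1) = 0.666...
```

A negative Pearson R between per-eye F-scores and DR severity grade, as found in the temporal region, indicates that segmentation accuracy declines as disease severity increases.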
The proposed deep learning-based algorithm can segment NPA and distinguish it from signal reduction artifacts on extended-field OCTA in healthy and diabetic eyes.
This is a 2020 ARVO Annual Meeting abstract.