Jianfei Liu, Christine Shen, Yoo-Jean Han, Nancy Aguilera, Joanne Li, Tao Liu, Johnny Tam; Deep Learning Based Quantitative Adaptive Optics Image Analysis. Invest. Ophthalmol. Vis. Sci. 2020;61(7):1649.
Deep learning-based methods have recently been applied to analyze adaptive optics (AO) retinal images, but thus far, the applications have been primarily focused on cone detection. The goal of this study is to explore the use of deep learning methods for different types of AO image analyses, such as cone photoreceptor cell detection and segmentation as well as retinal pigment epithelial (RPE) cell detection.
Starting with U-Net (a commonly used deep learning method for medical image segmentation), we modified the network to simultaneously incorporate cell centroid, region, and contour cues, with the goal of developing a unified framework applicable to both cone and RPE images. Cones were imaged using a custom-built AO instrument with nonconfocal split detection capabilities (IOVS 55(7):4244-4251, 2014), and RPE cells were imaged using the same instrument with an indocyanine green detection channel (IOVS 57(10):4376-4384, 2016). For cones, 428 AO images from 20 subjects were used for training. For RPE, the same U-Net framework was used, except that training was performed with 43 AO images from 15 subjects. In both cases, a separate set of 10 manually labeled images from 10 subjects was reserved for testing.
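The centroid, region, and contour cues described above can be derived directly from a labeled cell mask. The sketch below is illustrative only: the abstract does not specify how the training targets were constructed, so the function name, the one-hot centroid encoding, and the one-pixel contour width are all assumptions.

```python
import numpy as np

def make_training_targets(mask):
    """Derive three per-pixel training targets from a binary cell mask:
    a centroid map, a region map, and a contour map (hypothetical sketch
    of the cues named in the abstract, not the authors' implementation)."""
    mask = np.asarray(mask).astype(bool)
    region = mask.astype(np.uint8)

    # Contour: mask pixels that touch background (4-neighborhood erosion).
    padded = np.pad(mask, 1, constant_values=False)
    eroded = (padded[1:-1, 1:-1]
              & padded[:-2, 1:-1] & padded[2:, 1:-1]
              & padded[1:-1, :-2] & padded[1:-1, 2:])
    contour = (mask & ~eroded).astype(np.uint8)

    # Centroid: a single hot pixel at the mask's center of mass.
    centroid = np.zeros_like(region)
    if mask.any():
        ys, xs = np.nonzero(mask)
        centroid[int(round(ys.mean())), int(round(xs.mean()))] = 1
    return centroid, region, contour
```

In a multi-task setup, each target would supervise one output head of the modified U-Net, so that a single forward pass yields consistent detection (centroid) and segmentation (region, contour) results.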
Overall, a single, unified deep learning framework (modified U-Net) was successful in analyzing both cones and RPE cells imaged using AO. For cone segmentation, the error in cone diameter measurements relative to manual annotations was 0.89±0.93µm. This was significantly better than the accuracy of a previously validated cone segmentation algorithm that does not use deep learning (Circularly-Constrained Active Contour Model, CCACM, IOVS 59(11):4639-4652, 2018), which resulted in an error of 1.30±0.85µm (p<0.001; see figure for an example). Moreover, deep learning resulted in fewer false negatives. For cone detection, when compared to manual annotations, the precision, recall, and F1 score were 96.6±4.1%, 94.9±4.7%, and 95.7±4.1%, respectively. For RPE detection, the corresponding results were 93.0±4.1%, 95.4±3.9%, and 94.1±2.0% (mean±SD).
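Detection metrics of this kind are typically computed by matching each predicted cell center to at most one manually annotated center. A minimal sketch, assuming greedy nearest-first matching and a pixel distance threshold (neither is specified in the abstract):

```python
import numpy as np

def detection_scores(pred, gt, max_dist=3.0):
    """Precision, recall, and F1 for point detections. Each predicted
    center is matched to at most one ground-truth center within
    max_dist pixels, closest pairs first (illustrative convention only)."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    if len(pred) == 0 or len(gt) == 0:
        return 0.0, 0.0, 0.0

    # Pairwise distances between predictions and ground-truth points.
    d = np.linalg.norm(pred[:, None, :] - gt[None, :, :], axis=2)
    matched_pred, matched_gt = set(), set()
    # Greedily accept the closest remaining (prediction, truth) pair.
    for idx in np.argsort(d, axis=None):
        i, j = np.unravel_index(idx, d.shape)
        if d[i, j] > max_dist:
            break
        if i not in matched_pred and j not in matched_gt:
            matched_pred.add(i)
            matched_gt.add(j)

    tp = len(matched_pred)  # true positives = matched predictions
    precision = tp / len(pred)
    recall = tp / len(gt)
    f1 = 2 * precision * recall / (precision + recall) if tp else 0.0
    return precision, recall, f1
```

Unmatched predictions count as false positives (lowering precision) and unmatched annotations as false negatives (lowering recall), which is how the abstract's observation of "fewer false negatives" would surface in these numbers.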
By incorporating additional image cues, deep learning can result in improved detection and segmentation of cells imaged using AO. A key advantage is that consistent detection and segmentation results can be obtained simultaneously. The proposed unified framework could lead to an objective, robust method for the quantitative analysis of AO retinal images.
This is a 2020 ARVO Annual Meeting abstract.