David Cunefare, Alison L Huckenpahler, Emily J Patterson, Alfredo Dubra, Joseph Carroll, Sina Farsiu; Deep learning multimodal detection and classification of cone and rod photoreceptors in adaptive optics scanning light ophthalmoscope images. Invest. Ophthalmol. Vis. Sci. 2019;60(9):4592. doi: https://doi.org/.
Purpose: To develop an automated method for simultaneous detection and classification of cones and rods in adaptive optics scanning light ophthalmoscope (AOSLO) images.
Methods: We designed a novel convolutional neural network (CNN) architecture combining the complementary information from simultaneously captured non-confocal split detector and confocal AOSLO images. Expert manual graders annotated images for training the CNN, and data augmentation was used to effectively increase the size of our training dataset. The trained network was used to simultaneously identify cone and rod locations. Our dataset consisted of 35 confocal and split detector image pairs across 7 normal subjects, totaling 6,791 cones and 27,937 rods. Images were acquired at 3 to 7 degrees of eccentricity from the fovea. We used leave-one-subject-out cross-validation to assess the performance of our method. One-to-one matches between the automatic results and the manual markings were found for cones and rods separately and were used to calculate sensitivity and false discovery rate (FDR). The relative error of rod and cone density measurements was also calculated.
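The sensitivity and FDR calculation described above can be sketched as follows. This is an illustrative implementation only: the greedy nearest-first one-to-one matching and the 3-pixel distance threshold are assumptions for the sketch, not necessarily the exact matching criterion used in the study.

```python
import math

def match_points(auto_pts, manual_pts, max_dist):
    """One-to-one matching between automatic detections and manual markings.

    Greedy nearest-first strategy (an assumption for this sketch): collect all
    candidate pairs within max_dist pixels, then accept them in order of
    increasing distance, skipping any point already matched.
    """
    candidates = []
    for i, a in enumerate(auto_pts):
        for j, m in enumerate(manual_pts):
            d = math.dist(a, m)
            if d <= max_dist:
                candidates.append((d, i, j))
    candidates.sort()
    used_auto, used_manual, matches = set(), set(), []
    for d, i, j in candidates:
        if i not in used_auto and j not in used_manual:
            used_auto.add(i)
            used_manual.add(j)
            matches.append((i, j))
    return matches

def sensitivity_fdr(auto_pts, manual_pts, max_dist=3.0):
    """Sensitivity = TP / (TP + FN); FDR = FP / (TP + FP).

    A true positive (TP) is a one-to-one match; unmatched manual markings are
    false negatives (FN); unmatched automatic detections are false positives (FP).
    """
    tp = len(match_points(auto_pts, manual_pts, max_dist))
    fn = len(manual_pts) - tp
    fp = len(auto_pts) - tp
    sens = tp / (tp + fn) if manual_pts else 0.0
    fdr = fp / (tp + fp) if auto_pts else 0.0
    return sens, fdr
```

The same routine would be run twice per image pair, once on the cone coordinates and once on the rod coordinates, since the two classes are evaluated separately.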
Results: Fig. 1 illustrates the rod and cone detection and classification from our deep learning multimodal method. In comparison to the gold standard of manual grading, our method had an average sensitivity of 0.968 ± 0.047 and FDR of 0.042 ± 0.040 for cones, and an average sensitivity of 0.885 ± 0.074 and FDR of 0.130 ± 0.080 for rods across the dataset. The average relative error for density measurements was 5.1 ± 5.2% for cones and 11.0 ± 12.0% for rods. The average computation time was 0.21 seconds per image.
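The relative density error reported above reduces to a simple ratio of counts over a fixed image area. A minimal sketch, assuming density is computed as count per unit area over the same region for both gradings (the area term cancels, but it is kept explicit here for clarity):

```python
def relative_density_error(auto_count, manual_count, area_mm2):
    """Relative error (%) of the automatic density against the manual density.

    Density = photoreceptor count / sampled area. Because both densities are
    measured over the same area, the error depends only on the two counts.
    """
    auto_density = auto_count / area_mm2
    manual_density = manual_count / area_mm2
    return 100.0 * abs(auto_density - manual_density) / manual_density
```

For example, 95 automatically detected cells against 100 manual markings over the same region gives a 5% relative density error, regardless of the area used.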
Conclusions: Our fully automated CNN-based method for simultaneously detecting cones and rods in multimodal AOSLO images was congruent with expert manual markings across a range of retinal eccentricities.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.
Fig. 1: (a) Confocal AOSLO image at 7 degrees from the fovea in a normal subject. (b) Co-registered non-confocal split detector AOSLO image from the same location. (c, d) Automatically detected cones from our CNN method are shown with purple asterisks, and rods with yellow asterisks. In comparison to expert manual marking, cones were detected with a sensitivity of 1.000 and FDR of 0.008, and rods were detected with a sensitivity of 0.987 and FDR of 0.072 for this image pair.