David Cunefare, Emily J Patterson, Sarah Blau, Christopher S Langlo, Alfredo Dubra, Joseph Carroll, Sina Farsiu; Convolutional neural network based detection of cones in multimodal adaptive optics scanning light ophthalmoscope images of achromatopsia. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1737.
To develop an automated method for detecting cones in “clinical grade” adaptive optics scanning light ophthalmoscope (AOSLO) images of subjects with achromatopsia (ACHM).
To detect cone locations, we used a convolutional neural network (CNN)-based method that combines information from simultaneously captured non-confocal split detector and confocal AOSLO images. Results were compared to our open-source single-mode CNN method (Cunefare et al., Sci. Rep. 7:6620, 2017) trained with only split detector images. Each network was trained on labeled patches extracted from the input images. We then used the trained networks to generate probability maps for test images, from which cone locations were automatically identified. Our data set consisted of 144 confocal and split detector image pairs from 11 ACHM subjects containing 6692 cones. Leave-one-subject-out cross-validation was used. One-to-one matches between the automatic results and manual markings (made on the split detector images, with the confocal image used to resolve ambiguities) were found in order to calculate sensitivity and false discovery rate (FDR).
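The evaluation step described above can be sketched as follows. This is a simplified greedy nearest-neighbor matcher, not the authors' implementation; the `max_dist` pixel tolerance and all function names are illustrative assumptions. It pairs each automatically detected cone with at most one manually marked cone, then computes sensitivity = TP/(TP+FN) and FDR = FP/(TP+FP):

```python
import math

def match_and_score(auto_pts, manual_pts, max_dist=2.0):
    """Greedy one-to-one matching of automatic vs. manual cone
    locations (simplified sketch; max_dist is an assumed tolerance)."""
    # Collect all candidate pairs within the distance tolerance.
    pairs = []
    for i, a in enumerate(auto_pts):
        for j, m in enumerate(manual_pts):
            d = math.dist(a, m)
            if d <= max_dist:
                pairs.append((d, i, j))
    pairs.sort()  # match closest pairs first

    used_auto, used_manual = set(), set()
    tp = 0  # true positives: one-to-one matched cones
    for d, i, j in pairs:
        if i not in used_auto and j not in used_manual:
            used_auto.add(i)
            used_manual.add(j)
            tp += 1

    fp = len(auto_pts) - tp     # detected by algorithm only
    fn = len(manual_pts) - tp   # manually marked but missed
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    fdr = fp / (tp + fp) if (tp + fp) else 0.0
    return sensitivity, fdr

# Toy example: two of three detections match manual marks.
auto = [(1.0, 1.0), (5.0, 5.0), (9.0, 9.0)]
manual = [(1.2, 1.1), (5.1, 4.9), (20.0, 20.0)]
sens, fdr = match_and_score(auto, manual)
```

In this toy example, two detections match within tolerance (TP = 2), one detection is unmatched (FP = 1), and one manual mark is missed (FN = 1), giving sensitivity 2/3 and FDR 1/3. A per-subject average of these quantities over held-out images corresponds to the leave-one-subject-out figures reported below.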
Fig. 1 shows a qualitative example of the cone positions from the single-mode and dual-mode CNN methods versus manual marking. The single-mode CNN method had an average sensitivity of 0.876 and FDR of 0.118; the dual-mode CNN method had an average sensitivity of 0.891 and FDR of 0.094.
Our CNN-based method for cone detection showed good agreement with manual marking, even in challenging low-quality ACHM images. Performance using the dual-mode CNN, which combines information from the confocal and split detector channels, was superior to that of the single-mode CNN.
This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
Fig. 1: a) Split detector AOSLO image from an ACHM subject. b) Simultaneously captured confocal AOSLO image. Automatically detected cones compared to manual marking for the c) single-mode CNN method and d) dual-mode CNN method. Purple (automatic) and yellow (manual) asterisks denote a correctly identified cone, cyan denotes a cone missed by the algorithm (false negative), and red denotes a location marked by the algorithm but not manually (false positive). Orange arrows point to cones missed by the single-mode CNN but correctly identified by the dual-mode CNN.