July 2019, Volume 60, Issue 9
Open Access
ARVO Annual Meeting Abstract
Deep learning multimodal detection and classification of cone and rod photoreceptors in adaptive optics scanning light ophthalmoscope images
Author Affiliations & Notes
  • David Cunefare
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
  • Alison L Huckenpahler
    Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
  • Emily J Patterson
    Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
  • Alfredo Dubra
    Ophthalmology, Stanford University, Palo Alto, California, United States
  • Joseph Carroll
    Ophthalmology & Visual Sciences, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
    Cell Biology, Neurobiology & Anatomy, Medical College of Wisconsin, Milwaukee, Wisconsin, United States
  • Sina Farsiu
    Biomedical Engineering, Duke University, Durham, North Carolina, United States
    Ophthalmology, Duke University, Durham, North Carolina, United States
  • Footnotes
    Commercial Relationships   David Cunefare, None; Alison Huckenpahler, None; Emily Patterson, None; Alfredo Dubra, Boston Micromachines Corporation (C), Meira GTx (C); Joseph Carroll, AGTC (F), MeiraGTx (C); Sina Farsiu, Google (R)
  • Footnotes
    Support  Foundation for Fighting Blindness; unrestricted grant from Research to Prevent Blindness to Duke University; Google Faculty Research award; NIH Grants R21EY027086, P30EY005722, U01EY025477, R01EY017607, P30EY001931, F30EY027706, and T32EB001040.
Investigative Ophthalmology & Visual Science July 2019, Vol.60, 4592.
Citation: David Cunefare, Alison L Huckenpahler, Emily J Patterson, Alfredo Dubra, Joseph Carroll, Sina Farsiu; Deep learning multimodal detection and classification of cone and rod photoreceptors in adaptive optics scanning light ophthalmoscope images. Invest. Ophthalmol. Vis. Sci. 2019;60(9):4592.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : To develop an automated method for simultaneous detection and classification of cones and rods in adaptive optics scanning light ophthalmoscope (AOSLO) images.

Methods : We designed a novel convolutional neural network (CNN) architecture that combines the complementary information from simultaneously captured non-confocal split detector and confocal AOSLO images. Expert graders manually annotated images for training the CNN, and data augmentation was used to effectively increase the size of the training dataset. The trained network simultaneously identified cone and rod locations. Our dataset consisted of 35 confocal and split detector image pairs from 7 normal subjects, totaling 6,791 cones and 27,937 rods. Images were acquired at 3 to 7 degrees of eccentricity from the fovea. We used leave-one-subject-out cross-validation to assess the performance of our method. One-to-one matches between the automatic results and the manual markings were found for cones and rods separately and were used to calculate sensitivity and false discovery rate (FDR). The relative error of rod and cone density measurements was also calculated.
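The one-to-one matching and the resulting sensitivity and FDR can be sketched as follows. This is a hypothetical illustration, not the authors' code: the abstract does not specify the matching algorithm or distance tolerance, so a greedy nearest-neighbor pairing with an assumed `max_dist` threshold (in pixels) is used here.

```python
import math

def match_and_score(auto_pts, manual_pts, max_dist):
    """Greedy one-to-one matching of automatic detections to manual
    markings within a distance tolerance, then sensitivity and FDR.
    A sketch only; the actual matching procedure is an assumption."""
    # All candidate pairs, ordered by distance so closest pairs match first.
    pairs = sorted(
        (math.dist(a, m), i, j)
        for i, a in enumerate(auto_pts)
        for j, m in enumerate(manual_pts)
    )
    used_auto, used_manual = set(), set()
    true_pos = 0
    for d, i, j in pairs:
        if d > max_dist:
            break  # remaining pairs are even farther apart
        if i in used_auto or j in used_manual:
            continue  # enforce one-to-one matching
        used_auto.add(i)
        used_manual.add(j)
        true_pos += 1
    sensitivity = true_pos / len(manual_pts)           # TP / (TP + FN)
    fdr = (len(auto_pts) - true_pos) / len(auto_pts)   # FP / (TP + FP)
    return sensitivity, fdr
```

The same routine is applied to cones and rods separately, as described above.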

Results : Fig.1 illustrates the rod and cone detection and classification from our deep learning multimodal method. In comparison to the gold standard of manual grading, our method had an average sensitivity of 0.968 ± 0.047 and FDR of 0.042 ± 0.040 for cones, and an average sensitivity of 0.885 ± 0.074 and FDR of 0.130 ± 0.080 for rods across the dataset. The average relative error for density measurements was 5.1 ± 5.2% for cones and 11.0 ± 12.0% for rods. The average computation time was 0.21 seconds per image.
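The relative density error reported above can be computed as sketched below. The formula (absolute percent difference against the manual gold standard) and the function's parameters are assumptions for illustration; the abstract does not give the exact definition or the sampling window area.

```python
def relative_density_error(auto_count, manual_count, area_mm2):
    """Percent relative error of an automatic photoreceptor density
    measurement against the manual gold standard (hypothetical sketch)."""
    auto_density = auto_count / area_mm2      # cells per mm^2, automatic
    manual_density = manual_count / area_mm2  # cells per mm^2, manual
    return abs(auto_density - manual_density) / manual_density * 100.0
```

For example, 95 automatically detected cones against 100 manually marked cones in the same 0.5 mm² window gives a 5.0% relative error, comparable to the average cone density error reported above.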

Conclusions : Our fully automated CNN-based method for simultaneously detecting cones and rods in multimodal AOSLO images was congruent with expert manual markings across a range of retinal eccentricities.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.


Fig.1: (a) Confocal AOSLO image at 7 degrees from the fovea in a normal subject. (b) Co-registered non-confocal split detector AOSLO image from the same location. (c, d) Automatically detected cones from our CNN method are shown with purple asterisks, and rods with yellow asterisks. In comparison to expert manual marking, cones were detected with a sensitivity of 1.000 and an FDR of 0.008, and rods with a sensitivity of 0.987 and an FDR of 0.072 for this image pair.
