ARVO Annual Meeting Abstract  |   July 2018
Fully automated quantification of retinal cones and anterior chamber cells using deep learning
Author Affiliations & Notes
  • Yue Wu
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Sa Xiao
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Ariel Rokem
    eScience Institute, University of Washington, Seattle, Washington, United States
  • Cecilia S Lee
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Leslie Wilson
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Kathryn Pepple
    Ophthalmology, University of Washington, Seattle, Washington, United States
    Ophthalmology, Puget Sound Veterans Affairs, Seattle, Washington, United States
  • Ramkumar Sabesan
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Aaron Y. Lee
    Ophthalmology, University of Washington, Seattle, Washington, United States
    Ophthalmology, Puget Sound Veterans Affairs, Seattle, Washington, United States
  • Footnotes
    Commercial Relationships   Yue Wu, None; Sa Xiao, None; Ariel Rokem, None; Cecilia Lee, None; Leslie Wilson, None; Kathryn Pepple, None; Ramkumar Sabesan, None; Aaron Lee, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1222.
Yue Wu, Sa Xiao, Ariel Rokem, Cecilia S Lee, Leslie Wilson, Kathryn Pepple, Ramkumar Sabesan, Aaron Y. Lee; Fully automated quantification of retinal cones and anterior chamber cells using deep learning. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1222.
Abstract

Purpose : The counting of cells and other biological structures in images has wide applications in clinical diagnosis and basic research. For example, the density of cone cells in adaptive optics scanning laser ophthalmoscope (AOSLO) images can be used to distinguish between healthy and abnormal retina. Similarly, the number of anterior chamber (AC) cells in OCT images can be used to grade the severity of uveitis inflammation. Currently, counting and quantifying cells in images is a tedious manual task. To address this, we adapted a deep learning algorithm to count and measure the size of cone and AC cells.

Methods : For counting cones, we obtained 18 AOSLO confocal images, each approximately 500 × 600 pixels. The images were cropped into smaller tiles (32 × 32 pixels). Cone centers in these tiles were first labelled automatically using a modified Li and Roorda algorithm and then refined manually, with the restriction that cones on image boundaries were not labelled, as their centers could not be pinpointed reliably.
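
A minimal sketch of this tiling step in Python, assuming the AOSLO frame is loaded as a 2-D NumPy array; the non-overlapping stride and the choice to drop partial edge tiles are illustrative assumptions not stated in the abstract.

import numpy as np

def crop_patches(image, patch=32):
    """Split a grayscale image into non-overlapping patch x patch tiles."""
    h, w = image.shape
    tiles = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            tiles.append(image[y:y + patch, x:x + patch])
    return tiles

# Example: a ~500 x 600 pixel AOSLO frame yields 15 x 18 = 270 tiles.
aoslo = np.zeros((500, 600))  # placeholder for a real AOSLO image
print(len(crop_patches(aoslo)))  # 270
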
For counting AC cells, the raw data consisted of 216 OCT B-scans (1024 × 1000 pixels) of inflamed anterior chambers and 228 B-scans of uninflamed anterior chambers. The B-scans were randomly sampled from 33 C-scan volumes. AC cell centers in the B-scans were manually labelled by a clinician.
We trained a region-based convolutional neural network to automatically detect cones and AC cells. Since this network requires labelled bounding boxes as training data, we created approximate bounding boxes around the cones and AC cells based on their centers. We used 80% of the data for training and 20% for validation, and computed the sensitivity and precision of the predictions against the human labels on the validation dataset.
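
Below is a minimal Python sketch of the data-preparation and evaluation steps described above: turning labelled centers into approximate bounding boxes, splitting the data 80/20, and scoring predictions against human labels. The fixed box half-width, the greedy center matching, and the 3-pixel matching radius are illustrative assumptions; the abstract does not specify these details or the particular region-based detector configuration.

import random
import numpy as np

def centers_to_boxes(centers, half_size=8):
    """Approximate (x_min, y_min, x_max, y_max) boxes around (x, y) centers."""
    return [(x - half_size, y - half_size, x + half_size, y + half_size)
            for x, y in centers]

def split_train_val(samples, train_frac=0.8, seed=0):
    """Randomly split labelled images into training and validation sets."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

def sensitivity_precision(pred_centers, true_centers, max_dist=3.0):
    """Greedy one-to-one matching: a prediction counts as a true positive
    if it lies within max_dist pixels of an unmatched ground-truth center."""
    unmatched = list(true_centers)
    tp = 0
    for px, py in pred_centers:
        if not unmatched:
            break
        dists = [np.hypot(px - tx, py - ty) for tx, ty in unmatched]
        i = int(np.argmin(dists))
        if dists[i] <= max_dist:
            tp += 1
            unmatched.pop(i)
    sensitivity = tp / len(true_centers) if true_centers else 0.0
    precision = tp / len(pred_centers) if pred_centers else 0.0
    return sensitivity, precision
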

Results : For cone prediction, our method achieved a sensitivity of 99.6%. Furthermore, it localized cones on image boundaries, even though the original labelled data contained no boundary cones. The mean distance between predicted and labelled cone centers was 0.78 pixels.
For AC cells, we achieved sensitivity and precision both above 99%, with a mean distance of 0.79 pixels between predicted and labelled cell centers.
Finally, our method not only localizes the cells but also yields approximate cell sizes, which may have useful applications.
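
As a rough illustration of these read-outs, the Python sketch below computes the mean distance between matched predicted and labelled centers and derives an approximate cell size from each predicted box. The matching convention (pairs produced by the greedy scheme sketched above) and the size definition (mean box side length) are assumptions, not the authors' stated procedure.

import numpy as np

def box_center(box):
    """Center (x, y) of an (x_min, y_min, x_max, y_max) box."""
    x0, y0, x1, y1 = box
    return ((x0 + x1) / 2.0, (y0 + y1) / 2.0)

def mean_center_distance(matched_pairs):
    """Mean Euclidean distance over ((px, py), (tx, ty)) matched center pairs."""
    dists = [np.hypot(p[0] - t[0], p[1] - t[1]) for p, t in matched_pairs]
    return float(np.mean(dists)) if dists else float("nan")

def approx_cell_size(box):
    """Approximate a cell's diameter as the mean side length of its box."""
    x0, y0, x1, y1 = box
    return ((x1 - x0) + (y1 - y0)) / 2.0
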

Conclusions : We have demonstrated a fully automated deep-learning method for the quantification and localization of cone and AC cells.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
