Yue Wu, Sa Xiao, Ariel Rokem, Cecilia S Lee, Leslie Wilson, Kathryn Pepple, Ramkumar Sabesan, Aaron Y. Lee; Fully automated quantification of retinal cones and anterior chamber cells using deep learning. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1222.
The counting of cells and other biological structures in images has wide applications in clinical diagnosis and basic research. For example, the density of cone cells in adaptive optics scanning laser ophthalmoscope (AOSLO) images can be used to distinguish healthy from abnormal retina. Similarly, the number of anterior chamber (AC) cells in OCT images can be used to grade the severity of uveitis inflammation. Currently, counting and quantifying cells in images is a tedious manual task. To address this, we adapted a deep learning algorithm to count and measure the size of cone and AC cells.
For counting cones, we obtained 18 AOSLO confocal images, each approximately 500*600 pixels. The images were cropped into smaller images (32*32). Cone centers in these images were first labelled automatically using a modified Li and Roorda algorithm and then refined manually, with the restriction that cones on image boundaries were not labelled, as their centers could not be pinpointed reliably.

For counting AC cells, the raw data consisted of 216 OCT B-scans (1024*1000) of inflamed anterior chambers and 228 B-scans of uninflamed anterior chambers. The B-scans were randomly sampled from 33 C-scan volumes. AC cell centers in the B-scans were manually labelled by a clinician.

We trained a region-based convolutional neural network to automatically detect cone and AC cells. Since this network requires labelled bounding boxes as input data, we created approximate bounding boxes around the cones and AC cells based on their centers. We used 80% of the data for training and 20% for validation. We computed the sensitivity and precision of the predictions against the human labels on the validation dataset.
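The preprocessing described above could be sketched as follows. This is a minimal illustration, not the authors' code: the fixed bounding-box side length and the random seed are assumptions, since the abstract only states that approximate boxes were derived from labelled centers and that an 80/20 train/validation split was used.

```python
import numpy as np

def centers_to_boxes(centers, box_size=8):
    """Convert (x, y) cell centers to square bounding boxes.

    box_size is an assumed fixed side length in pixels; the
    abstract says only that boxes were approximated from centers.
    Returns boxes as (x_min, y_min, x_max, y_max) rows.
    """
    centers = np.asarray(centers, dtype=float)
    half = box_size / 2
    return np.stack([centers[:, 0] - half,
                     centers[:, 1] - half,
                     centers[:, 0] + half,
                     centers[:, 1] + half], axis=1)

def train_val_split(items, train_frac=0.8, seed=0):
    """Randomly split items into training (80%) and validation (20%)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(items))
    cut = int(train_frac * len(items))
    return [items[i] for i in idx[:cut]], [items[i] for i in idx[cut:]]
```

The resulting boxes would then serve as ground-truth regions for a region-based detector such as Faster R-CNN.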
For cone prediction, our method achieved a sensitivity of 99.6%. Furthermore, it localized cones on image boundaries, even though the original labelled data contained no boundary cones. Finally, we achieved a mean distance of 0.78 pixels between predicted and labelled cone centers.

For AC cells, we achieved sensitivity and precision both over 99%, with a mean distance of 0.79 pixels between predicted and labelled cell centers. Our method not only localizes the cells but also yields approximate cell sizes, which may have useful applications.
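The evaluation metrics above (sensitivity, precision, and mean center distance) could be computed by matching each predicted center to its nearest unmatched labelled center, as sketched below. The greedy matching strategy and the `max_dist` threshold are assumptions for illustration; the abstract does not specify how predictions were paired with labels.

```python
import numpy as np

def match_and_score(pred, truth, max_dist=3.0):
    """Greedily match predicted vs labelled centers by distance.

    max_dist (pixels) is an assumed matching threshold, not stated
    in the abstract. Returns (sensitivity, precision, mean distance
    between matched centers).
    """
    pred = np.asarray(pred, dtype=float)
    truth = np.asarray(truth, dtype=float)
    # Pairwise Euclidean distances: rows = predictions, cols = labels.
    d = np.linalg.norm(pred[:, None, :] - truth[None, :, :], axis=2)
    used_pred, used_truth, matched = set(), set(), []
    # Visit (prediction, label) pairs from closest to farthest.
    for flat in np.argsort(d, axis=None):
        i, j = np.unravel_index(flat, d.shape)
        if d[i, j] > max_dist:
            break  # all remaining pairs are farther apart
        if i in used_pred or j in used_truth:
            continue
        used_pred.add(i)
        used_truth.add(j)
        matched.append(d[i, j])
    tp = len(matched)
    sensitivity = tp / len(truth)  # fraction of labels recovered
    precision = tp / len(pred)     # fraction of predictions correct
    mean_dist = float(np.mean(matched)) if matched else float("nan")
    return sensitivity, precision, mean_dist
```

A matched pair counts as a true positive; unmatched labels are false negatives and unmatched predictions are false positives, which yields the sensitivity and precision figures reported above.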
We have demonstrated a fully automated deep-learning method for the quantification and localization of cone and AC cells.
This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.