Investigative Ophthalmology & Visual Science
July 2020, Volume 61, Issue 9
ARVO Imaging in the Eye Conference Abstract  |   July 2020
Deep Learning Predicts Laterality and Capture Field of Color Fundus Photographs
Author Affiliations & Notes
  • Michael Kawczynski
    Genentech, South San Francisco, California, United States
  • Neha Anegondi
    Genentech, South San Francisco, California, United States
  • Simon Gao
    Genentech, South San Francisco, California, United States
  • Jeffrey Willis
    Genentech, South San Francisco, California, United States
  • Thomas Bengtsson
    Genentech, South San Francisco, California, United States
  • Jian Dai
    Genentech, South San Francisco, California, United States
  • Footnotes
    Commercial Relationships   Michael Kawczynski, Genentech (E); Neha Anegondi, Genentech (E); Simon Gao, Genentech (E); Jeffrey Willis, Genentech (E); Thomas Bengtsson, Genentech (E); Jian Dai, Genentech (E)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science July 2020, Vol. 61, PB0015.

      Michael Kawczynski, Neha Anegondi, Simon Gao, Jeffrey Willis, Thomas Bengtsson, Jian Dai; Deep Learning Predicts Laterality and Capture Field of Color Fundus Photographs. Invest. Ophthalmol. Vis. Sci. 2020;61(9):PB0015.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : To develop a deep learning model capable of identifying the laterality (left vs right eye) and capture field of color fundus photographs. Laterality and capture-field labels are often unavailable, creating a significant data-curation problem that calls for an automated solution.

Methods : Using 42,695 color fundus photographs (CFPs) from 645 patients of the MARINA trial (NCT00056836), we developed a deep learning (DL) model to classify the laterality (left or right) and the capture field (F1M, F2, F3M, or FR) of a CFP. All CFPs were cropped to remove extraneous pixels and resized to 299x299x3 pixels. We trained the Xception convolutional neural network (CNN) on CFPs from 90% of the MARINA patients, validated on CFPs from the remaining 10% of MARINA patients, and then tested the model on 41,696 CFPs from 380 patients of the ANCHOR trial (NCT00061594). Layers of global average pooling, dropout (0.85), dense (256) with L2 regularization (0.0001), and dense (8) with softmax activation were added to the Xception CNN base model. The loss function was categorical cross-entropy and the optimizer was Adam with a learning rate of 0.001. The model weights were initialized with those pretrained on the ImageNet dataset. The model was trained using data augmentation with rotation range (45), width shift range (0.2), height shift range (0.2), zoom range (0.2), shear range (0.2), and vertical flipping, for 3 epochs with base layers frozen, then an additional 200 epochs with all layers trainable. The model used for testing was selected from the epoch with the lowest validation loss.
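The architecture and training configuration described above can be sketched in Keras. The layer sizes, dropout rate, regularization strength, learning rate, and augmentation ranges below are taken from the abstract; the function name, the wiring of the head, and the absence of an activation on the 256-unit dense layer (the abstract does not specify one) are assumptions, so treat this as an illustrative sketch rather than the authors' exact code.

```python
# Hedged sketch of the model and augmentation described in the Methods.
# Hyperparameters come from the abstract; names and wiring are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(num_classes=8, weights=None):
    # The authors initialized from ImageNet weights (weights="imagenet");
    # weights=None is used here so the sketch runs without a download.
    base = tf.keras.applications.Xception(
        include_top=False, weights=weights, input_shape=(299, 299, 3))
    x = layers.GlobalAveragePooling2D()(base.output)          # global average pooling
    x = layers.Dropout(0.85)(x)                               # dropout (0.85)
    x = layers.Dense(256,                                     # dense (256), L2 0.0001
                     kernel_regularizer=tf.keras.regularizers.l2(1e-4))(x)
    outputs = layers.Dense(num_classes, activation="softmax")(x)  # dense (8), softmax
    model = models.Model(base.input, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model, base

# Data augmentation with the ranges listed in the abstract.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=45, width_shift_range=0.2, height_shift_range=0.2,
    zoom_range=0.2, shear_range=0.2, vertical_flip=True)
```

For the two-stage schedule, one would set `base.trainable = False`, fit for 3 epochs, then set `base.trainable = True`, recompile, and continue for 200 epochs, keeping the checkpoint with the lowest validation loss (e.g. via `tf.keras.callbacks.ModelCheckpoint(..., save_best_only=True)`).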

Results : The model was evaluated on a test set of 41,696 CFPs from 380 patients of the ANCHOR trial (NCT00061594) and achieved an accuracy of 94.5% when predicting 8 classes (L-F1M, L-F2, L-F3M, L-FR, R-F1M, R-F2, R-F3M, R-FR), and an accuracy of 98.9% when predicting 4 classes (L-F1M/F2/F3M, L-FR, R-F1M/F2/F3M, R-FR).
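The 4-class accuracy above follows from collapsing the three macula-centered fields (F1M, F2, F3M) per eye while keeping FR separate. A minimal sketch of that grouping, assuming the hyphenated label strings used in the Results:

```python
# Hypothetical mapping from the 8 predicted classes to the 4-class grouping
# reported in the Results; label strings follow the abstract's notation.
EIGHT_CLASSES = ["L-F1M", "L-F2", "L-F3M", "L-FR",
                 "R-F1M", "R-F2", "R-F3M", "R-FR"]

def to_four_class(label):
    """Collapse F1M/F2/F3M into one group per eye; FR stays separate."""
    side, field = label.split("-")
    return f"{side}-FR" if field == "FR" else f"{side}-F1M/F2/F3M"
```

Scoring the same predictions against these coarser labels is what lifts accuracy from 94.5% to 98.9%, since confusions among F1M, F2, and F3M within an eye no longer count as errors.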

Conclusions : Our DL model predicts the laterality and capture field of CFPs with high accuracy. This automated approach to image labeling could improve the efficiency of future imaging studies. Our study supports the utility of DL models in categorizing ophthalmic imaging characteristics.

This is a 2020 Imaging in the Eye Conference abstract.

 

The eight different combinations of left (right column) and right eyes and capture fields.

 

Performance of model to classify CFP laterality and field on ANCHOR test set.
