August 2019
Volume 60, Issue 11
Open Access
ARVO Imaging in the Eye Conference Abstract  |   August 2019
Abnormality prediction from fundus images using Deep Learning and large amounts of data
Author Affiliations & Notes
  • Krunalkumar Ramanbhai Patel
    CARIn, Carl Zeiss India (Bangalore) Pvt. Ltd., Bangalore, India
  • Alexander Freytag
    Carl Zeiss AG, Jena, Germany
  • Keyur Ranipa
    CARIn, Carl Zeiss India (Bangalore) Pvt. Ltd., Bangalore, India
  • Nathalia Spier
    Carl Zeiss AG, Oberkochen, Germany
  • Alexander Urich
    Carl Zeiss AG, Munich, Germany
  • Footnotes
    Commercial Relationships   Krunalkumar Ramanbhai Patel, Carl Zeiss India (Bangalore) Pvt. Ltd. (E); Alexander Freytag, Carl Zeiss AG (E); Keyur Ranipa, Carl Zeiss India (Bangalore) Pvt. Ltd. (E); Nathalia Spier, Carl Zeiss AG (E); Alexander Urich, Carl Zeiss AG (E)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science August 2019, Vol.60, PB0102. doi:
      Krunalkumar Ramanbhai Patel, Alexander Freytag, Keyur Ranipa, Nathalia Spier, Alexander Urich; Abnormality prediction from fundus images using Deep Learning and large amounts of data. Invest. Ophthalmol. Vis. Sci. 2019;60(11):PB0102.

Abstract

Purpose : Recent screening solutions for eye-related diseases mostly focus on individual diseases such as DR, AMD, or glaucoma. In contrast, we aim for a generic solution that allows screening for any abnormality in a patient’s eyes. Potential users of a report that classifies the retina as normal or abnormal could be corporate employees covered by an insurance agency that pays for this screening as part of a wellness program.

Methods : Given the application requirements, we present an automated approach based on deep neural networks. To obtain reliable abnormality predictions from the model, we trained it with large amounts of manually annotated data. We conducted experiments on a 2D fundus image dataset acquired with a low-cost, hand-held fundus camera, the Visuscout® 100 (ZEISS, Jena, Germany). Data were collected in a non-mydriatic clinical setup using the VISUHEALTH platform, a cloud-based platform through which a patient referred by a general practitioner or diabetologist is screened remotely by a retinal specialist. The dataset comprises 135k images of normal patients and 13k images of patients with different signs of abnormality. A medical expert annotated all images. The dataset was split into 80% for training and 20% for testing by randomly assigning each patient exclusively to either the training or the test set. For the prediction of abnormality, we trained state-of-the-art convolutional neural networks either from scratch or pre-trained on ImageNet. Training datasets were upsampled to obtain balanced class distributions. For different fractions of the dataset, we ran the training with four random initializations to assess the statistical robustness of the results.
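The abstract does not include code; the following is a minimal sketch of the kind of training setup described above, for the ImageNet-pretrained variant. It assumes PyTorch with torchvision ≥ 0.13, a directory layout data/train/{normal,abnormal} produced after a patient-level split, and illustrative hyperparameters (ResNet-50, 224×224 inputs, Adam, 10 epochs) that are not taken from the abstract. Class imbalance is handled here by oversampling the minority class, which approximates the upsampling to balanced class distributions mentioned above.

```python
# Sketch only, not the authors' implementation: fine-tuning an ImageNet-pretrained CNN
# for binary normal/abnormal fundus classification with oversampling of the minority class.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, WeightedRandomSampler
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

# Assumed layout: data/train/normal, data/train/abnormal. Images are assumed to have
# been split at the patient level beforehand, so each patient is entirely in train or test.
train_set = datasets.ImageFolder("data/train", transform=transform)

# Oversample the rarer (abnormal) class so each batch is roughly class-balanced.
class_counts = torch.bincount(torch.tensor(train_set.targets))
sample_weights = (1.0 / class_counts.float())[torch.tensor(train_set.targets)]
sampler = WeightedRandomSampler(sample_weights, num_samples=len(train_set), replacement=True)
loader = DataLoader(train_set, batch_size=32, sampler=sampler, num_workers=4)

# ImageNet-pretrained backbone; the classifier head is replaced by a single logit
# for the normal-vs-abnormal decision.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
for epoch in range(10):
    for images, labels in loader:
        images = images.to(device)
        labels = labels.float().unsqueeze(1).to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```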

Results : Fig. 1 shows the results of a trained model on a held-out test set. The model reaches an AUC of 92%. At a sensitivity of 86%, the model also reaches a specificity of 86%. Fig. 2 demonstrates the benefit of collecting large medical datasets, especially when training from scratch.
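For clarity, the sketch below shows one way the reported quantities can be computed from held-out predictions: the ROC AUC and the operating point at which sensitivity equals specificity (the quantity tracked in Fig. 2). The helper function and toy arrays are illustrative assumptions, not material from the abstract.

```python
# Sketch of metric computation: ROC AUC and the sensitivity-equals-specificity working point.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def sens_eq_spec_point(y_true, y_score):
    """Return (threshold, sensitivity) where sensitivity is closest to specificity."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    sensitivity = tpr
    specificity = 1.0 - fpr
    idx = np.argmin(np.abs(sensitivity - specificity))
    return thresholds[idx], sensitivity[idx]

# Toy example: y_true uses 1 = abnormal, 0 = normal; y_score is the predicted
# probability of abnormality from the model.
y_true = np.array([0, 0, 1, 1, 0, 1])
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.9])
print("AUC:", roc_auc_score(y_true, y_score))
thr, se = sens_eq_spec_point(y_true, y_score)
print(f"sensitivity = specificity ~ {se:.2f} at threshold {thr:.2f}")
```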

Conclusions : We evaluated abnormality prediction from 2D fundus images with deep neural networks. Leveraging large amounts of data, our trained models surpassed 92% AUC on a real-world held-out test set comprising 14k images.

This abstract was presented at the 2019 ARVO Imaging in the Eye Conference, held in Vancouver, Canada, April 26-27, 2019.

Fig. 1: Results of predicting abnormality from 2D fundus images

Fig. 2: Dependency of the sensitivity-equals-specificity working points (left) and the AUC (right) on the amount of training data.
