July 2018
Volume 59, Issue 9
Open Access
ARVO Annual Meeting Abstract
Retinal pathology screening with a multi-image convolutional neural network
Author Affiliations & Notes
  • Benoit HIJAZI
    CHRU Morvan Brest, Brest, Finistère, Bretagne, France
  • Gwenolé Quellec
    CHRU Morvan Brest, Brest, Finistère, Bretagne, France
  • Ali Erginay
    APHP Paris, Paris, France
  • Mathieu Lamard
    CHRU Morvan Brest, Brest, Finistère, Bretagne, France
  • Beatrice Cochener
    CHRU Morvan Brest, Brest, Finistère, Bretagne, France
  • Footnotes
    Commercial Relationships   Benoit HIJAZI, None; Gwenolé Quellec, None; Ali Erginay, None; Mathieu Lamard, None; Beatrice Cochener, None
    Support  Fonds Unique Interministériel
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1726.
      Benoit HIJAZI, Gwenolé Quellec, Ali Erginay, Mathieu Lamard, Beatrice Cochener; Retinal pathology screening with a multi-image convolutional neural network. Invest. Ophthalmol. Vis. Sci. 2018;59(9):1726.

Purpose : Convolutional neural networks (CNNs) have become popular tools for detecting diabetic retinopathy (DR) in fundus examinations. However, they suffer from one major limitation: training a CNN requires image-level supervision, whereas clinical interpretations are generally assigned to multiple images at once (one interpretation per eye, or even per exam). A solution is proposed for training a CNN with exam-level supervision. Two exam-level criteria are investigated: presence of diabetic macular edema (DME) and need for referral. Need for referral is particularly challenging: patients are referred to an ophthalmologist either because image quality is insufficient or because some retinal pathology has been detected or suspected.

Methods : A novel transfer learning strategy is proposed. The starting point is a custom CNN previously trained to detect referable DR in single images: this detector was trained on Kaggle's DR dataset with image-level supervision. The network is then modified to process full exams containing multiple images (usually four): all images in one exam are processed in parallel through the network, and their predictions are combined. Finally, the resulting exam-level CNN is fine-tuned with exam-level supervision (presence of DME or need for referral) on a dataset of 25,702 consecutive exams from the OPHDIAT screening network in Paris.
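The abstract does not specify how the per-image predictions are combined into one exam-level output. A minimal sketch of the idea, assuming max-pooling over per-image probabilities (a common choice for such multiple-instance problems; mean-pooling is shown as an alternative) and a toy stand-in for the pretrained single-image detector:

```python
import numpy as np

def image_scores(exam_images, cnn):
    """Run the shared single-image CNN on every image of one exam.
    `cnn` is any callable mapping one image to a probability; here it
    stands in for the pretrained referable-DR detector."""
    return np.array([cnn(img) for img in exam_images])

def exam_score(scores, mode="max"):
    """Combine per-image probabilities into one exam-level prediction.
    Max-pooling flags the exam if any single image looks pathological;
    mean-pooling is a smoother alternative."""
    return scores.max() if mode == "max" else scores.mean()

# Toy stand-in "CNN": scores an image by its mean intensity.
toy_cnn = lambda img: float(img.mean())

exam = [np.full((4, 4), 0.1), np.full((4, 4), 0.9)]  # a 2-image "exam"
s = image_scores(exam, toy_cnn)
print(exam_score(s))          # max-pooled exam score
print(exam_score(s, "mean"))  # mean-pooled exam score
```

Because the pooling operation is differentiable, exam-level labels can be back-propagated through it into the shared network during fine-tuning, which is what makes exam-level supervision possible.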

Results : Using the proposed multi-image CNN, the presence of DME was detected with an area under the ROC curve (AUC) of Az=0.923 before fine-tuning and Az=0.978 after fine-tuning. Regarding the need for referral, an AUC of Az=0.740 was obtained without fine-tuning; this low score shows that detecting the need for referral is a very different task from detecting referable DR. After fine-tuning, however, the need for referral was detected with an AUC of Az=0.925: our detector thus outperforms a second human reader.
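The Az values above are areas under the ROC curve. For readers unfamiliar with the metric, a self-contained sketch of its rank-statistic formulation (equivalent to the Wilcoxon-Mann-Whitney U) on toy data, not the authors' evaluation code:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the probability that a randomly chosen positive exam scores higher
    than a randomly chosen negative one (ties count as 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy exam-level predictions (1 = needs referral)
labels = [0, 0, 1, 1, 0, 1]
scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]
print(auc(labels, scores))  # 8/9: one positive is outranked by one negative
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which is why Az=0.740 reads as weak and Az=0.925 as strong screening performance.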

Conclusions : We have presented a multi-image CNN which, unlike competing CNNs, can be trained on any DR screening dataset. We and others have already shown that DR can be detected very well with a CNN; here we show that DME can be detected reliably as well. However, detecting DR and DME alone is not enough: an automated system cannot afford to miss cases needing referral, whatever the reason for referral. The proposed system, designed in the context of RetinOpTIC and commercialized by MESSIDOR, addresses this limitation and therefore increases the reliability of automated screening.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

