ARVO Annual Meeting Abstract | June 2017
Volume 58, Issue 8 | Open Access
RetiNet: Automatic AMD identification in OCT volumetric data
Author Affiliations & Notes
  • Stefanos Apostolopoulos
    University of Bern, Bern, Switzerland
  • Carlos Ciller
    University of Lausanne, Lausanne, Switzerland
    University of Bern, Bern, Switzerland
  • Sandro De Zanet
    Ecole Polytechnique Federale de Lausanne, Lausanne, Switzerland
  • Sebastian Wolf
    Bern University Hospital, Bern, Switzerland
  • Raphael Sznitman
    University of Bern, Bern, Switzerland
  • Footnotes
    Commercial Relationships   Stefanos Apostolopoulos, None; Carlos Ciller, None; Sandro De Zanet, None; Sebastian Wolf, None; Raphael Sznitman, None
    Support  None
Investigative Ophthalmology & Visual Science June 2017, Vol.58, 387. doi:
Abstract

Purpose : Optical Coherence Tomography (OCT) acquires micrometer-resolution 3D scans of the retina. Visual inspection of OCT volumes is currently the main method for identifying Age-related Macular Degeneration (AMD). This is an expensive and time-consuming process which, in turn, limits our ability to acquire the extensive ground-truth annotations required to train machine-learning algorithms. To overcome these difficulties, we propose a novel Convolutional Neural Network (CNN) architecture for the automatic detection of AMD in OCT, which can be trained using a single label per OCT volume.

Methods : Our deep learning algorithm involves two training phases: first, we train a Bscan-level classifier CNN, RetiNet-B, by treating the Cscan label as a pseudo-label for each Bscan within the volume. We then integrate RetiNet-B as a feature extractor into a larger CNN, RetiNet-C, which classifies Cscans in one shot.
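For concreteness, the following is a minimal sketch of this two-phase setup in PyTorch. The layer configuration, feature sizes, and class names (RetiNetB, RetiNetC) are placeholders chosen for illustration; the abstract does not specify the actual RetiNet layers.

```python
import torch
import torch.nn as nn

class RetiNetB(nn.Module):
    """Phase 1: Bscan-level classifier, trained with each volume's label copied to its Bscans."""
    def __init__(self):
        super().__init__()
        # Hypothetical feature extractor; the published architecture is not given in the abstract.
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)           # healthy vs. AMD, per Bscan

    def forward(self, bscan):                        # bscan: (N, 1, H, W)
        feats = self.features(bscan).flatten(1)
        return self.classifier(feats)

class RetiNetC(nn.Module):
    """Phase 2: Cscan-level classifier that reuses the trained RetiNet-B features."""
    def __init__(self, retinet_b, bscans_per_cscan=100):
        super().__init__()
        self.backbone = retinet_b.features           # feature extractor pretrained in phase 1
        self.head = nn.Sequential(
            nn.Linear(64 * bscans_per_cscan, 128), nn.ReLU(), nn.Linear(128, 2),
        )

    def forward(self, cscan):                        # cscan: (N, 100, 1, H, W)
        n, b = cscan.shape[:2]
        feats = self.backbone(cscan.flatten(0, 1)).flatten(1)   # one feature vector per Bscan
        return self.head(feats.view(n, -1))                     # one-shot volume-level prediction
```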
We used a balanced subset of the publicly available AMD dataset of Duke University (Farsiu et al. 2014), consisting of 228 Cscans (114 healthy, 114 non-exudative AMD), with 100 Bscans per Cscan. The total training time for 5-fold cross-validation was 3 days.
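Because only one label per volume is available, both the pseudo-labelling and the cross-validation split operate at the Cscan level. A sketch, assuming NumPy arrays and scikit-learn's StratifiedKFold (variable names are illustrative, not the authors' code):

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold

# One label per Cscan: 0 = healthy, 1 = non-exudative AMD (228 balanced volumes).
volume_labels = np.array([0] * 114 + [1] * 114)
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

for fold, (train_idx, val_idx) in enumerate(skf.split(np.zeros(len(volume_labels)), volume_labels)):
    # Splitting per volume keeps all 100 Bscans of a Cscan in the same fold.
    # Phase 1 pseudo-labels: every Bscan inherits its volume's label.
    bscan_pseudo_labels = np.repeat(volume_labels[train_idx], 100)
    # ... train RetiNet-B on these Bscans, then RetiNet-C on the whole training volumes ...
```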

Results : RetiNet-C achieves an area under the curve (AUC) of 99.71% and outperforms competing image classification methods: VGG19 (99.4%, Simonyan et al. 2015), ResNet (99.3%, He et al. 2015), DenseNet (99.2%, Huang et al. 2016), as well as other automatic AMD classification methods: 99.17% for Farsiu et al. 2014 and 98.4% for Venhuizen et al. 2015.
Additionally, RetiNet-C achieves a false-positive rate (FPR) of 6% at a false-negative rate (FNR) of 1%, compared to 7% for ResNet, 13% for DenseNet and 16% for VGG19. This means that, if we accept an FNR of 1%, we could reduce the number of subjects that need to be manually examined by 94%. The FNR of 1% was chosen according to clinical guidelines.
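The FPR-at-fixed-FNR comparison can be read directly off the ROC curve: since FNR = 1 − TPR, the reported number is the FPR at the first operating point with at least 99% sensitivity. A minimal sketch with scikit-learn (the helper name is ours, not the authors' evaluation code):

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

def fpr_at_fnr(labels, scores, target_fnr=0.01):
    """labels: 1 = AMD, 0 = healthy; scores: per-Cscan AMD probabilities."""
    fpr, tpr, _ = roc_curve(labels, scores)
    idx = np.argmax(tpr >= 1.0 - target_fnr)   # first threshold with FNR <= 1%
    return fpr[idx], auc(fpr, tpr)
```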

Conclusions : We have proposed a novel strategy for the automatic identification of AMD in OCT volumes, which requires only Cscan-level labels for training. Our approach involves a novel two-stage deep learning architecture that first learns domain-specific Bscan-level features and then combines them for one-shot Cscan classification. We developed our approach using publicly available OCT data and compared its performance to techniques from the OCT domain and the computer vision literature, outperforming them in terms of AUC and the more clinically relevant FNR-to-FPR metric.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.

 
