June 2022
Volume 63, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2022
Self-supervised pretraining enables deep learning-based classification of AMD with fewer annotations
Author Affiliations & Notes
  • Robbie Holland
    BioMedIA, Imperial College London, London, London, United Kingdom
  • Martin Joseph Menten
    BioMedIA, Imperial College London, London, London, United Kingdom
    Institute for AI and Informatics in Medicine, Technische Universität München, München, Bayern, Germany
  • Oliver Leingang
    Laboratory for Ophthalmic Image Analysis, Medizinische Universität Wien, Wien, Wien, Austria
    Christian Doppler Laboratory for Artificial Intelligence in Retina, Christian Doppler Forschungsgesellschaft, Wien, Austria
  • Hrvoje Bogunovic
    Laboratory for Ophthalmic Image Analysis, Medizinische Universität Wien, Wien, Wien, Austria
  • Ahmed M Hagag
    Institute of Ophthalmology, University College London, London, London, United Kingdom
    Moorfields Eye Unit, National Institute for Health Research, London, London, United Kingdom
  • Rebecca Kaye
    Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, Hampshire, United Kingdom
  • Sophie Riedl
    Laboratory for Ophthalmic Image Analysis, Medizinische Universität Wien, Wien, Wien, Austria
  • Ghislaine Traber
    Institute of Molecular and Clinical Ophthalmology Basel, Basel, Basel-Stadt, Switzerland
    Department of Ophthalmology, Universität Basel, Basel, Basel-Stadt, Switzerland
  • Lars Fritsche
    Department of Biostatistics, University of Michigan, Ann Arbor, Michigan, United States
  • Toby Prevost
    Nightingale-Saunders Clinical Trials & Epidemiology Unit, King's College London, London, London, United Kingdom
  • Hendrik P Scholl
    Institute of Molecular and Clinical Ophthalmology Basel, Basel, Basel-Stadt, Switzerland
    Department of Ophthalmology, Universität Basel, Basel, Basel-Stadt, Switzerland
  • Ursula Schmidt-Erfurth
    Laboratory for Ophthalmic Image Analysis, Medizinische Universität Wien, Wien, Wien, Austria
  • Sobha Sivaprasad
    Institute of Ophthalmology, University College London, London, London, United Kingdom
    Moorfields Eye Unit, National Institute for Health Research, London, London, United Kingdom
  • Daniel Rueckert
    BioMedIA, Imperial College London, London, London, United Kingdom
  • Andrew J Lotery
    Clinical and Experimental Sciences, Faculty of Medicine, University of Southampton, Southampton, Hampshire, United Kingdom
  • Footnotes
    Commercial Relationships   Robbie Holland None; Martin Menten None; Oliver Leingang None; Hrvoje Bogunovic None; Ahmed Hagag None; Rebecca Kaye None; Sophie Riedl None; Ghislaine Traber None; Lars Fritsche None; Toby Prevost None; Hendrik Scholl None; Ursula Schmidt-Erfurth None; Sobha Sivaprasad None; Daniel Rueckert None; Andrew Lotery None
  • Footnotes
    Support  Wellcome Trust Collaborative Award, “Deciphering AMD by deep phenotyping and machine learning (PINNACLE)”, ref. 210572/Z/18/Z.
Investigative Ophthalmology & Visual Science June 2022, Vol.63, 3004 – F0274.
Abstract

Purpose : Deep learning can detect and classify age-related macular degeneration (AMD), tasks that are critical to patient monitoring and prognosis. Traditionally, deep learning requires vast amounts of labelled data, which are time-consuming and costly to acquire. In this work we leverage self-supervised pretraining of deep learning models to achieve high accuracy in classifying AMD while using fewer labelled training images.

Methods : Experiments were conducted on a dataset of 57,875 OCT images from the Southampton Eye Unit, collected by the PINNACLE consortium. We trained multiple ResNet50 neural networks to distinguish healthy eyes from early/intermediate AMD, and early/intermediate from late AMD. For both tasks we measured the degradation in classification accuracy as the number of available training labels was reduced. We then repeated these experiments with self-supervised pretraining, which learns from all images in the dataset even in the absence of labels. To this end, we used two self-supervised pretraining methods: BYOL, which maximizes agreement between different augmented views of the same image, and SimCLR, which additionally maximizes disagreement between views of different images.
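To make the SimCLR objective concrete, the sketch below implements its normalized temperature-scaled cross-entropy (NT-Xent) loss in plain NumPy. This is an illustrative toy, not the abstract's implementation: the function name and test embeddings are ours, and a real pipeline would compute this loss on encoder outputs with gradient-based optimization (e.g. in PyTorch).

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR's NT-Xent loss. z1, z2 are (N, D) embeddings of two
    augmented views of the same N images; rows i of z1 and z2 form
    the positive pair, and all other views serve as negatives."""
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # unit-normalize -> cosine similarity
    sim = z @ z.T / temperature                         # (2N, 2N) similarity matrix
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity from the softmax
    # the positive for row i is row i+n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# toy check: identical positive pairs yield a lower loss than orthogonal ones
z = np.eye(4, 8)                       # four orthonormal "embeddings"
print(nt_xent_loss(z, z))              # aligned positives
print(nt_xent_loss(z, np.eye(8)[4:]))  # misaligned positives
```

BYOL differs in that it has no explicit negative pairs: it only pulls the two views of the same image together, relying on an asymmetric predictor and a momentum-averaged target network to avoid collapse.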

Results : As expected, classification performance degraded as the number of training labels decreased. Without pretraining, classification between early/intermediate AMD and healthy eyes degraded from 0.92 to 0.64 area under the curve (AUC) as the number of training labels decreased from 8299 to 20. Self-supervised pretraining mitigated this degradation and furthermore elevated accuracy when all labels were available: models pretrained with BYOL/SimCLR decreased from 0.94/0.91 to 0.74/0.68 AUC over the same range of 8299 to 20 training labels. Similar trends were observed when classifying between late and early/intermediate AMD.
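The label-efficiency protocol behind these curves can be sketched as follows: fit a classifier on progressively smaller labelled subsets and report held-out AUC for each label budget. Everything here is a hypothetical stand-in — synthetic Gaussian features replace OCT embeddings, a least-squares linear probe replaces the fine-tuned ResNet50, and the `roc_auc` helper is our own rank-based (Mann-Whitney) implementation, not the study's evaluation code.

```python
import numpy as np

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a random
    positive example is scored higher than a random negative one."""
    pos, neg = scores[labels == 1], scores[labels == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()  # ties count half
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

rng = np.random.default_rng(42)

def make_split(n, d=32, shift=1.5):
    """Synthetic stand-in for image embeddings of two classes."""
    y = rng.integers(0, 2, size=n)
    x = rng.normal(size=(n, d))
    x[:, 0] += shift * y          # class signal lives in one dimension
    return x, y

x_test, y_test = make_split(2000)
for n_labels in (20, 200, 2000):
    x_tr, y_tr = make_split(n_labels)
    # least-squares linear probe fitted on the labelled subset only
    w, *_ = np.linalg.lstsq(x_tr, y_tr.astype(float), rcond=None)
    print(n_labels, round(roc_auc(x_test @ w, y_test), 3))
```

With 20 labels the probe is underdetermined (fewer samples than feature dimensions) and noisy; AUC improves as the label budget grows, mirroring the degradation pattern reported above.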

Conclusions : Self-supervised pretraining significantly boosted the ability of deep learning models to classify AMD in OCT images, especially when only small amounts of labelled data were available, as is typical in medical imaging. This motivates a shift in focus towards procuring fewer, higher-quality annotations and unlocks the benefits of deep learning for smaller datasets.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 

Self-supervised pretraining boosts classification between early/intermediate AMD and healthy eyes (left) and between eyes with early/intermediate and late AMD (right) at varying amounts of available training data.

