ARVO Annual Meeting Abstract  |  June 2023
Volume 64, Issue 8
Open Access
Label-efficient retinal OCT classification and segmentation using image restoration-based self-supervised learning
Author Affiliations & Notes
  • Hrvoje Bogunovic
    Christian Doppler Lab for Artificial Intelligence in Retina, Medizinische Universität Wien, Vienna, Austria
  • Antoine Rivail
    Christian Doppler Lab for Artificial Intelligence in Retina, Medizinische Universität Wien, Vienna, Austria
  • Botond Fazekas
    Christian Doppler Lab for Artificial Intelligence in Retina, Medizinische Universität Wien, Vienna, Austria
  • Dmitrii Lachinov
    Christian Doppler Lab for Artificial Intelligence in Retina, Medizinische Universität Wien, Vienna, Austria
  • Julia Mai
    Ophthalmology, Medizinische Universität Wien, Vienna, Austria
  • Ursula Schmidt-Erfurth
    Ophthalmology, Medizinische Universität Wien, Vienna, Austria
  • Footnotes
    Commercial Relationships   Hrvoje Bogunovic: Heidelberg Engineering, Code F (Financial Support); Apellis, Code F (Financial Support); RetInSight, Code F (Financial Support); Bayer, Code R (Recipient); Apellis, Code R (Recipient). Antoine Rivail: None. Botond Fazekas: None. Dmitrii Lachinov: None. Julia Mai: None. Ursula Schmidt-Erfurth: Apellis, Code C (Consultant/Contractor); Kodiak, Code F (Financial Support); Roche, Code F (Financial Support); Novartis, Code F (Financial Support); Genentech, Code F (Financial Support); RetInSight, Code F (Financial Support); Apellis, Code F (Financial Support); RetInSight, Code P (Patent).
    Support   Christian Doppler Research Organization
Investigative Ophthalmology & Visual Science June 2023, Vol. 64, 5447.
      Hrvoje Bogunovic, Antoine Rivail, Botond Fazekas, Dmitrii Lachinov, Julia Mai, Ursula Schmidt-Erfurth; Label-efficient retinal OCT classification and segmentation using image restoration-based self-supervised learning. Invest. Ophthalmol. Vis. Sci. 2023;64(8):5447.

Abstract

Purpose: Current deep neural networks (DNNs) require large numbers of labeled examples for training. Recently, self-supervised learning (SSL) has emerged as a promising technique for pretraining DNNs on vast amounts of unlabeled imaging data, producing general-purpose “foundation models”. However, the best techniques for pretraining DNNs on retinal optical coherence tomography (OCT) data that would allow effective fine-tuning for both OCT classification and segmentation downstream tasks remain unclear.

Methods: SSL for the image classification task was based on the Generic Autodidactic Models (a.k.a. Models Genesis) paradigm. It consisted of a series of pretext image-restoration tasks composed of non-linear intensity shift, in- and out-painting, local pixel shuffling, and patch swapping. The SSL-pretrained U-Net encoder was then fine-tuned for the OCT classification task of distinguishing between diabetic macular edema (DME) and retinal vein occlusion (RVO). For the retinal layer segmentation task, the SSL involved converting the layer boundaries regressed by a U-Net-based DNN into pixel-wise segmentation maps in order to restore the layer content of the input scan.
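The abstract describes the pretext tasks only at a high level. For illustration, below is a minimal NumPy sketch of what three of the Models-Genesis-style restoration distortions and the boundary-to-mask conversion for the segmentation pretext might look like. All function names, parameter ranges, and the simplified monotone Bézier intensity curve are assumptions made for this sketch, not the authors' implementation; patch swapping is omitted for brevity.

```python
import numpy as np

def local_pixel_shuffle(img, n_windows=100, max_win=8, rng=None):
    """Shuffle pixels inside small random windows: global anatomy is preserved
    while local texture is destroyed and must be restored by the network."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    h, w = img.shape
    for _ in range(n_windows):
        wh, ww = rng.integers(2, max_win + 1, size=2)
        y, x = rng.integers(0, h - wh), rng.integers(0, w - ww)
        block = out[y:y + wh, x:x + ww].ravel()
        rng.shuffle(block)
        out[y:y + wh, x:x + ww] = block.reshape(wh, ww)
    return out

def nonlinear_intensity_shift(img, rng=None):
    """Monotone non-linear intensity remapping via a 1-D cubic Bezier curve
    (simplified here: sorted control points keep the mapping invertible)."""
    rng = np.random.default_rng() if rng is None else rng
    xs, ys = np.sort(rng.uniform(0, 1, 4)), np.sort(rng.uniform(0, 1, 4))
    t = np.linspace(0, 1, 256)
    bez = lambda p: ((1 - t) ** 3 * p[0] + 3 * (1 - t) ** 2 * t * p[1]
                     + 3 * (1 - t) * t ** 2 * p[2] + t ** 3 * p[3])
    return np.interp(img, bez(xs), bez(ys))  # img assumed scaled to [0, 1]

def in_painting(img, n_boxes=5, max_box=32, rng=None):
    """Replace random rectangles with noise; the network must in-paint them
    (out-painting inverts the mask, keeping only the rectangles intact)."""
    rng = np.random.default_rng() if rng is None else rng
    out = img.copy()
    h, w = img.shape
    for _ in range(n_boxes):
        bh, bw = rng.integers(8, max_box + 1, size=2)
        y, x = rng.integers(0, h - bh), rng.integers(0, w - bw)
        out[y:y + bh, x:x + bw] = rng.uniform(0, 1, size=(bh, bw))
    return out

def boundaries_to_masks(boundaries, height):
    """Turn regressed layer boundaries (n_boundaries, width), given as row
    positions per A-scan column, into pixel-wise layer masks (n_layers, H, W)
    so a restoration loss can be applied to the reconstructed layer content."""
    rows = np.arange(height)[None, :, None]   # (1, H, 1)
    top = boundaries[:-1, None, :]            # (n_layers, 1, W)
    bottom = boundaries[1:, None, :]          # (n_layers, 1, W)
    return (rows >= top) & (rows < bottom)    # boolean mask per layer
```

In this scheme, a U-Net would be trained to restore the original B-scan from its distorted version (e.g., from `local_pixel_shuffle(img)`) with a pixel-wise reconstruction loss, after which the pretrained encoder is fine-tuned on the labeled downstream task.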

Results: A total of 5000 OCT volumes acquired with the Spectralis OCT device were used as the unlabeled data set for SSL. On the downstream classification task of distinguishing between DME and RVO (Fig. 1), the SSL-pretrained network reached an AUC > 0.8 with as few as 50 labeled cases, compared to an AUC of 0.65 obtained in a purely supervised setting. When a sufficient number of labeled cases (>500) was available, both approaches achieved a similar AUC of 0.95. On the retinal layer segmentation task, the SSL-pretrained network achieved the same mean absolute error with 25% of the labeled data as the model trained from scratch on the entire labeled data set.

Conclusions: We evaluated self-supervised methodologies for OCT image analysis on clinically relevant diagnostic and quantification tasks. Our SSL-pretrained models showed effective fine-tuning properties, outperforming models trained from scratch. This is a promising step toward obtaining label-efficient foundation models for retinal OCT without the need for large labeled data sets and extensive training efforts.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.

 

Figure 1. Performance in distinguishing between DME and RVO on OCT with different amounts of labeled data for training. Models were trained from scratch (blue) or pretrained with SSL (orange).
