Robbie Holland, Martin Joseph Menten, Oliver Leingang, Hrvoje Bogunovic, Ahmed M Hagag, Rebecca Kaye, Sophie Riedl, Ghislaine Traber, Lars Fritsche, Toby Prevost, Hendrik P Scholl, Ursula Schmidt-Erfurth, Sobha Sivaprasad, Daniel Rueckert, Andrew J Lotery; Self-supervised pretraining enables deep learning-based classification of AMD with fewer annotations. Invest. Ophthalmol. Vis. Sci. 2022;63(7):3004 – F0274.
Deep learning can detect and classify age-related macular degeneration (AMD), tasks that are critical to patient monitoring and prognosis. Traditionally, deep learning requires vast amounts of labelled data, which are time-consuming and costly to acquire. In this work, we leverage self-supervised pretraining of deep-learning models to achieve high accuracy in classifying AMD while using fewer labelled training data.
Experiments were conducted on a dataset of 57,875 OCT images from the Southampton Eye Unit collected by the PINNACLE consortium. We trained multiple ResNet50 neural networks to classify between healthy eyes and early/intermediate AMD, and between early/intermediate and late AMD. For both tasks we measured the degradation in classification accuracy as the amount of available training labels was limited. We then repeated these experiments with self-supervised pretraining, which learns from all images in the dataset even in the absence of labels. To this end, we used two different self-supervised pretraining methods: BYOL, which aims to maximize agreement between different views of the same image, and SimCLR, which additionally aims to maximize disagreement between views of different images.
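The contrastive objective described above for SimCLR, pulling together two views of the same image while pushing apart views of different images, is typically implemented as the NT-Xent loss. The sketch below is an illustrative numpy implementation of that loss, not the authors' code; the function name and default temperature are assumptions.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss,
    the contrastive objective used by SimCLR.

    z1, z2: (N, D) arrays of embeddings of two augmented views of
    the same N images. Row i of z1 and row i of z2 form a positive
    pair; every other pair in the batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    n = z1.shape[0]
    # positive of sample i is i+n, and of sample i+n is i
    pos_idx = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of each row's softmax against its positive
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos_idx].mean()
```

The loss is minimized when each embedding is most similar to its own second view and dissimilar to all other images in the batch; BYOL omits the negative (disagreement) terms and instead relies on an asymmetric predictor and a momentum-averaged target network.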
As expected, classification performance degraded as the number of training labels decreased. Without pretraining, classification between early/intermediate AMD and healthy eyes degraded from 0.92 to 0.64 area under the curve (AUC) as the number of training labels decreased from 8,299 to 20. Self-supervised pretraining mitigated this degradation and furthermore elevated accuracy when all labels were available: models pretrained with BYOL/SimCLR decreased only from 0.94/0.91 AUC with 8,299 labels to 0.74/0.68 AUC with 20 labels. Similar trends were observed when classifying between late and early/intermediate AMD.
Self-supervised pretraining significantly boosted the ability of deep learning to classify AMD in OCT images, especially when only small amounts of labelled data were available, as is typical in medical imaging. This motivates a shift in focus towards procuring fewer, higher-quality annotations and unlocks the benefits of deep learning for smaller datasets.
This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.
Figure: Self-supervised pretraining boosts classification between early/intermediate AMD and healthy eyes (left) and between eyes with early/intermediate and late AMD (right) at varying amounts of available training data.