Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2024
Binary Classification of OCT Images using the Retina Foundation Model on Limited Data
Author Affiliations & Notes
  • David Kuo
    Duke University Department of Ophthalmology, Durham, North Carolina, United States
  • Qitong Gao
    Computer Science, Duke University, Durham, North Carolina, United States
  • Miroslav Pajic
    Computer Science, Duke University, Durham, North Carolina, United States
  • Majda Hadziahmetovic
    Duke University Department of Ophthalmology, Durham, North Carolina, United States
  • Footnotes
    Commercial Relationships   David Kuo Genentech, Code C (Consultant/Contractor); Qitong Gao None; Miroslav Pajic None; Majda Hadziahmetovic None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 1616.

      David Kuo, Qitong Gao, Miroslav Pajic, Majda Hadziahmetovic; Binary Classification of OCT Images using the Retina Foundation Model on Limited Data. Invest. Ophthalmol. Vis. Sci. 2024;65(7):1616.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Foundation models have transformed machine learning, demonstrating the ability to perform few-shot and in-context learning, among other emergent behaviors. The RetFound model, a large vision transformer (ViT-L) trained by masked autoencoding on 1.6 million color fundus photographs and OCT B-scans, is the first model pretrained at such scale for ophthalmology, demonstrating strong performance on downstream tasks ranging from diabetic retinopathy grading to stroke detection. In this study, we measure the data efficiency of the RetFound model in learning to identify normal vs. abnormal OCT B-scans.

Methods : 1144 Topcon Maestro OCT central B-scans (978 normal and 166 abnormal) were curated from 647 diabetic patients participating in mobile van screenings and randomly split 80/10/10 into training, validation, and final test sets. Three models (ImageNet-pretrained ResNet-50, ImageNet-pretrained ViT-L, and RetFound) were finetuned on the full training set of 915 OCT B-scans and on smaller training subsets of 500, 250, 100, and 50 OCT B-scans, using weighted cross-entropy loss to address class imbalance, across 3 random seeds. Mean accuracy, AUROC, average precision, F1 score, precision, and recall on the final test set were reported for each model.
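The class-weighting step above can be sketched as follows. This is a minimal NumPy illustration, not the study's actual training code; the inverse-frequency weighting convention is an assumption, since the abstract does not specify how the class weights were derived.

```python
import numpy as np

def class_weights(counts):
    """Inverse-frequency class weights: n_total / (n_classes * n_c).
    Rarer classes get larger weights; here this up-weights abnormal scans."""
    counts = np.asarray(counts, dtype=float)
    return counts.sum() / (len(counts) * counts)

def weighted_cross_entropy(logits, labels, weights):
    """Weighted softmax cross-entropy, averaged over the batch by total
    weight (the default 'mean' reduction of PyTorch's
    nn.CrossEntropyLoss when a weight tensor is supplied)."""
    logits = np.asarray(logits, dtype=float)
    labels = np.asarray(labels)
    # numerically stable log-softmax
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    nll = -log_probs[np.arange(len(labels)), labels]
    w = weights[labels]
    return float((w * nll).sum() / w.sum())

# Class counts from the study: 978 normal (label 0), 166 abnormal (label 1).
w = class_weights([978, 166])
```

In a PyTorch fine-tuning loop the same weighting would typically be passed as `nn.CrossEntropyLoss(weight=torch.tensor(w, dtype=torch.float32))`.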

Results : Trained across 3 random seeds on 915, 500, 250, 100, and 50 OCT B-scans, respectively, and evaluated on the final test set, ResNet-50 achieved mean accuracies of 82.5 +/- 5.1%, 82.0 +/- 5.0%, 71.9 +/- 9.8%, 55.4 +/- 0.8%, and 56.5 +/- 5.3%, with mean AUROCs of 92.4 +/- 3.2%, 88.7 +/- 2.2%, 85.9 +/- 3.9%, 57.0 +/- 1.4%, and 58.5 +/- 1.6%. ViT-L achieved mean accuracies of 84.9 +/- 2.3%, 82.9 +/- 3.7%, 73.0 +/- 2.8%, 67.7 +/- 6.5%, and 63.9 +/- 4.3%, with mean AUROCs of 93.3 +/- 1.0%, 92.4 +/- 1.1%, 85.1 +/- 1.6%, 78.8 +/- 5.9%, and 78.4 +/- 2.8%. Finally, RetFound achieved mean accuracies of 91.2 +/- 2.3%, 88.6 +/- 2.6%, 85.6 +/- 0.7%, 74.9 +/- 4.0%, and 68.8 +/- 8.4%, with mean AUROCs of 96.7 +/- 0.6%, 96.5 +/- 1.6%, 92.8 +/- 0.1%, 88.5 +/- 3.1%, and 84.6 +/- 6.9%.
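AUROC values like those above can be computed from raw model scores via the rank-sum (Mann-Whitney U) identity. The sketch below is a generic NumPy implementation for illustration only; the study's actual metric code is not described in the abstract, and a standard library such as scikit-learn's `roc_auc_score` would give the same result.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via the Mann-Whitney U / rank-sum identity.
    Equals the probability that a randomly chosen positive (abnormal)
    scan scores higher than a randomly chosen negative (normal) scan;
    tied scores receive averaged ranks."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels)
    order = scores.argsort()
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):          # average ranks over tied scores
        tied = scores == s
        ranks[tied] = ranks[tied].mean()
    n_pos = int((labels == 1).sum())
    n_neg = int((labels == 0).sum())
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Repeating this over the 3 seeds and taking the mean and standard deviation yields figures in the form reported above.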

Conclusions : RetFound performed best across all dataset sizes, with ViT-L slightly outperforming ResNet-50. These findings validate the benefits of RetFound's retina-specific pretraining; however, further research is warranted to establish best practices for fine-tuning RetFound and similar pretrained models on classification tasks using different retinal imaging modalities.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
