ARVO Annual Meeting Abstract | June 2023
Volume 64, Issue 8
Open Access
Contrastive Learning Driven Self-Supervised Framework for Segmentation of Biomarker of Diabetic Macular Edema
Author Affiliations & Notes
  • Hina Raja
    Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Muhammad Usman Akram
    Department of Computer and Software Engineering, National University of Sciences and Technology, Islamabad, Pakistan
  • Siamak Yousefi
    Department of Ophthalmology, The University of Tennessee Health Science Center, Memphis, Tennessee, United States
    Department of Genetics, Genomics, and Informatics, The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Footnotes
    Commercial Relationships   Hina Raja None; Muhammad Akram None; Siamak Yousefi NIH/NEI, Code F (Financial Support), Remidio, Code F (Financial Support), RPB, Code F (Financial Support), Bright Focus Foundation, Code F (Financial Support)
  • Footnotes
    Support  EY031725, EY033005
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 238. doi:
Citation: Hina Raja, Muhammad Usman Akram, Siamak Yousefi; Contrastive Learning Driven Self-Supervised Framework for Segmentation of Biomarker of Diabetic Macular Edema. Invest. Ophthalmol. Vis. Sci. 2023;64(8):238.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: To develop a self-supervised model based on deep learning (DL) and contrastive learning (CL) for segmentation of diabetic macular edema (DME)-induced lesions in OCT scans.

Methods: We developed a self-supervised DL-CL model with two stages of training (Fig 1), based on 13,000 OCT scans collected from the publicly available Zhang dataset (dataset 1) and 610 OCT scans collected from the publicly available Duke-II dataset (dataset 2). In the first stage, we developed an encoder-decoder model based on the RAG-NET architecture, employed a cross-entropy loss function (Lic), and trained the model on annotated OCT scans with manifestations of intra-retinal fluid (IRF), sub-retinal fluid (SRF), and hard exudate (HE) lesions (from the first dataset). In the second stage, we developed a CL model and employed a contrastive loss function (Lc) to learn retinal lesion segmentation in a self-supervised manner from unlabeled OCT scans (from the second dataset); this was achieved by imposing self-supervised constraints on the segmentation via the contrastive loss function. We trained the model on a subset of OCT scans from both datasets and validated it on an independent subset of 3,000 OCT scans from dataset 1 (Fig 2).
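
As an illustration of the two-stage scheme described above, here is a minimal PyTorch sketch. The abstract does not specify the RAG-NET internals, the exact contrastive formulation, or the augmentations, so the toy encoder-decoder, the NT-Xent-style loss, and the augmentation pair below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the two-stage training scheme (assumptions flagged inline).
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyEncoderDecoder(nn.Module):
    """Stand-in for the RAG-NET encoder-decoder (hypothetical architecture);
    4 output classes: background, IRF, SRF, HE."""
    def __init__(self, n_classes=4):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, n_classes, 2, stride=2),
        )

    def forward(self, x):
        z = self.enc(x)            # latent features, reused in the contrastive stage
        return self.dec(z), z

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent (SimCLR-style) contrastive loss; the abstract's Lc may differ."""
    B = z1.size(0)
    z = torch.cat([F.normalize(z1.flatten(1), dim=1),
                   F.normalize(z2.flatten(1), dim=1)], dim=0)          # (2B, D)
    sim = z @ z.t() / tau                                              # scaled cosine similarity
    sim = sim.masked_fill(torch.eye(2 * B, dtype=torch.bool), float('-inf'))
    targets = torch.cat([torch.arange(B) + B, torch.arange(B)])        # index of each positive pair
    return F.cross_entropy(sim, targets)

model = TinyEncoderDecoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stage 1: supervised training with cross-entropy on labeled scans (dataset 1).
x_lab = torch.randn(2, 1, 64, 64)              # placeholder OCT B-scans
y_lab = torch.randint(0, 4, (2, 64, 64))       # placeholder pixel-wise lesion masks
logits, _ = model(x_lab)
F.cross_entropy(logits, y_lab).backward()
opt.step(); opt.zero_grad()

# Stage 2: self-supervised contrastive training on unlabeled scans (dataset 2).
x_unlab = torch.randn(2, 1, 64, 64)
view1 = x_unlab + 0.05 * torch.randn_like(x_unlab)   # stochastic augmentation 1 (noise)
view2 = torch.flip(x_unlab, dims=[3])                # stochastic augmentation 2 (horizontal flip)
_, z1 = model(view1)
_, z2 = model(view2)
nt_xent(F.adaptive_avg_pool2d(z1, 1), F.adaptive_avg_pool2d(z2, 1)).backward()
opt.step(); opt.zero_grad()
```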

Results: The areas under the receiver operating characteristic curves (AUCs) of the model in detecting DME lesions were 0.96 and 0.94 for labeled and unlabeled data, respectively.
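
For reference, an AUC of this kind can be computed with scikit-learn's roc_auc_score; the abstract does not state whether the AUC was computed per pixel or per scan, so the flattened per-pixel sketch below, using placeholder arrays rather than study data, is an assumption.

```python
# Hypothetical per-pixel AUC computation; arrays are random placeholders, not study data.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=10_000)   # flattened binary lesion mask (ground truth)
y_score = rng.random(10_000)               # flattened predicted lesion probabilities
print(f"AUC = {roc_auc_score(y_true, y_score):.2f}")
```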

Conclusions: We developed a self-supervised contrastive learning-driven model that segments DME lesions without any labels. The proposed model may help clinicians diagnose DME.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.

 

Figure 1. Diagram of the proposed self-supervised model for detecting DME. Training phase 1: supervised learning with an encoder-decoder network based on RAG-NET. Training phase 2: the contrastive learning model learns retinal lesion segmentation from previously learned experience and prior knowledge.

 

Figure 2. Qualitative and quantitative evaluations. (A) Segmentation results of the proposed model for identification of DME-induced lesions (IRF in red, SRF in yellow, and HE in blue). Columns (a) and (b) present normal and DME OCT images, respectively. Column (c) represents the ground-truth segmentation of the scans in the previous column. Column (d) shows the model's segmentation of normal images. Column (e) presents the lesions identified by the model. (B) AUC of the model based on labeled and unlabeled images.
