ARVO Annual Meeting Abstract  |   June 2022
Volume 63, Issue 7
Open Access
Deep learning based segmentation of retinal fluids using optical coherence tomography (OCT) data
Author Affiliations & Notes
  • Alexander Schmid
    Digital Pathology, Carl Zeiss Meditec AG Oberkochen, Oberkochen, Baden-Württemberg, Germany
  • Krunalkumar Ramanbhai Patel
    Center of Application and Research in India, Carl Zeiss India Pvt Ltd, Bangalore, Karnataka, India
  • Footnotes
    Commercial Relationships   Alexander Schmid Carl Zeiss Meditec, Inc., Code E (Employment); Krunalkumar Ramanbhai Patel Carl Zeiss Meditec, Inc., Code E (Employment)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2022, Vol. 63, 2077 – F0066.
Abstract

Purpose : OCT, a non-invasive imaging modality, plays a central role in ophthalmology. Efficient and precise segmentation of the various abnormalities seen in OCT imaging can help detect and monitor disease progression, thereby informing and supporting effective treatment decisions. For this purpose, we propose a new deep learning-based segmentation pipeline for retinal fluids in OCT.

Methods : The underlying data consist of an OCT B-scan dataset of 1540 images and masks, split into 1291 training and 249 validation cases, annotated for three different retinal fluids: intraretinal fluid (IRF), subretinal fluid (SRF) and pigment epithelial detachment (PED), captured by a CIRRUS™ HD-OCT 4000 (ZEISS, Dublin, CA). The performance of the model is also evaluated on the RETOUCH dataset, which contains 70 SD-OCT volumes from three different OCT vendors. The segmentation model is based on a squeeze-and-excitation architecture combined with transfer learning strategies that partially freeze weights pretrained on ImageNet. The implementation uses TensorFlow-Keras, and the model is optimized with the Tversky loss. Model predictions are evaluated quantitatively using the Dice coefficient and MeanIoU.
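
The abstract does not include implementation code; the following TensorFlow-Keras sketch only illustrates the building blocks named in the Methods: a squeeze-and-excitation block, the Tversky loss, and partial freezing of ImageNet-pretrained encoder weights. The choice of ResNet50 as encoder, the freezing cutoff, the input shape, and the alpha/beta weights are illustrative assumptions, not the authors' configuration.

import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.applications import ResNet50

def se_block(x, ratio=16):
    # Squeeze: global average pooling over the spatial dimensions
    channels = x.shape[-1]
    s = layers.GlobalAveragePooling2D()(x)
    # Excitation: bottleneck MLP producing per-channel weights in [0, 1]
    s = layers.Dense(channels // ratio, activation="relu")(s)
    s = layers.Dense(channels, activation="sigmoid")(s)
    s = layers.Reshape((1, 1, channels))(s)
    # Recalibrate the feature map channel-wise
    return layers.Multiply()([x, s])

def tversky_loss(y_true, y_pred, alpha=0.5, beta=0.5, smooth=1e-6):
    # The Tversky index generalizes Dice by weighting false positives (alpha)
    # and false negatives (beta) separately
    y_true = tf.reshape(tf.cast(y_true, tf.float32), [-1])
    y_pred = tf.reshape(tf.cast(y_pred, tf.float32), [-1])
    tp = tf.reduce_sum(y_true * y_pred)
    fp = tf.reduce_sum((1.0 - y_true) * y_pred)
    fn = tf.reduce_sum(y_true * (1.0 - y_pred))
    return 1.0 - (tp + smooth) / (tp + alpha * fp + beta * fn + smooth)

# Partial freezing of an ImageNet-pretrained encoder (hypothetical backbone and cutoff)
encoder = ResNet50(include_top=False, weights="imagenet", input_shape=(512, 512, 3))
for layer in encoder.layers[:100]:
    layer.trainable = False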

Results : Through the interplay of transfer learning between ImageNet and the OCT data, an overall validation Dice coefficient of 66.3% and a MeanIoU of 69.6% were achieved when training all fluid categories simultaneously. Figure 1a shows the metric values achieved for each individual label. The corresponding results on the RETOUCH dataset are shown in Figure 1b.
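
As a point of reference for the reported Dice and MeanIoU values, the snippet below shows one common way to compute both metrics; the toy label maps and the four-class layout (background plus IRF, SRF, PED) are assumptions for illustration, not the authors' evaluation code.

import numpy as np
import tensorflow as tf

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    # Dice for a single binary fluid mask (e.g. IRF vs. background)
    y_true = y_true.astype(np.float32).ravel()
    y_pred = y_pred.astype(np.float32).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

# Toy pixel label maps: 0 = background, 1 = IRF, 2 = SRF, 3 = PED
y_true_labels = np.array([[0, 1, 1], [2, 2, 3]])
y_pred_labels = np.array([[0, 1, 0], [2, 3, 3]])

# MeanIoU over all classes using the built-in Keras metric
miou = tf.keras.metrics.MeanIoU(num_classes=4)
miou.update_state(y_true_labels, y_pred_labels)
print("MeanIoU:", float(miou.result()))
print("Dice (IRF):", dice_coefficient(y_true_labels == 1, y_pred_labels == 1))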

Conclusions : This work demonstrates that individually adapting the network and integrating domain-specific knowledge from other modalities through dedicated transfer learning techniques enables semantic features to be learned even from small, complex datasets. Overall, these adaptations and transfer learning scenarios increased segmentation performance by 15.3%, measured by the Dice coefficient, compared to a standard segmentation implementation (nnU-Net).

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 

Figure 1: Overview of the Dice coefficients and MeanIoU values achieved under different configurations. a) refers to the custom OCT dataset, b) refers to the public RETOUCH dataset with the same fluids.

