ARVO Annual Meeting Abstract  |   June 2022
Volume 63, Issue 7
Open Access
Generalizing 7 layer OCT segmentation from Heidelberg to Topcon devices without labels
Author Affiliations & Notes
  • Yue Wu
    University of Washington, Seattle, Washington, United States
  • Abraham Olvera-Barrios
    Moorfields Eye Hospital NHS Foundation Trust, London, London, United Kingdom
  • Ryan T Yanagihara
    University of Washington, Seattle, Washington, United States
  • Marian S. Blazes
    University of Washington, Seattle, Washington, United States
  • Catherine A Egan
    Moorfields Eye Hospital NHS Foundation Trust, London, London, United Kingdom
  • Cecilia S Lee
    University of Washington, Seattle, Washington, United States
  • Adnan Tufail
    Moorfields Eye Hospital NHS Foundation Trust, London, London, United Kingdom
  • Aaron Y Lee
    University of Washington, Seattle, Washington, United States
  • Footnotes
    Commercial Relationships: Yue Wu, None; Abraham Olvera-Barrios, None; Ryan Yanagihara, None; Marian Blazes, None; Catherine Egan, None; Cecilia Lee, None; Adnan Tufail, None; Aaron Lee, None
    Support: None
Investigative Ophthalmology & Visual Science, June 2022, Vol. 63, 183 – F0030.
Abstract

Purpose : Deep learning (DL) models have transformed medical image analysis, including OCT image analysis. However, these models require copious labeled data for each device of interest, as they do not generalize across devices from different manufacturers. We sought to use generative adversarial networks (GANs) to generalize a DL model trained on Heidelberg OCTs to segment Topcon OCTs.

Methods : We developed an unsupervised GAN model, GANSeg, to segment Topcon 1000 OCTs (domain B) from the UK Biobank while training on 110 labeled 7-layer segmentations from the Duke Heidelberg dataset (domain A). Traditional supervised DL models learn a mapping from A to A_label (Figure 1a) and do not generalize to B images. In contrast, GANSeg uses a GAN to apply the style of B to the contents of A images, while simultaneously making the U-Net segmenter robust to images in both styles (Figure 1b). To validate GANSeg segmentations, three graders manually segmented the 7 layers on 30 OCTs from the UK Biobank, and the Intersection over Union (IOU) was compared on these 30 OCTs between the graders, a U-Net trained only on Heidelberg data (U-Net_Heidelberg), and the GANSeg U-Net (U-Net_GANSeg).
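The supervision pattern described above, where a single shared segmenter is trained against the same labels on every restyled version of a labeled image, can be sketched in miniature. Everything below is illustrative: the generators and segmenter are hypothetical toy stand-in functions, not the convolutional networks GANSeg actually uses, and `seg_loss` stands in for a real segmentation loss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the learned networks (names are illustrative only).
def G_A2B(x):   # restyle a domain-A image toward domain B
    return x + 0.1

def G_B2A(x):   # restyle a domain-B image back toward domain A
    return x - 0.1

def U(x):       # segmenter stub: a fixed threshold instead of a U-Net
    return (x > 0.5).astype(float)

def seg_loss(pred, label):  # pixel-wise squared-error stand-in
    return float(np.mean((pred - label) ** 2))

A = rng.random((8, 8))               # a "Heidelberg" image
A_label = (A > 0.5).astype(float)    # its ground-truth mask

# GANSeg supervises U on every version of A against the same label:
A2B = G_A2B(A)        # A restyled to look like domain B
A2B2A = G_B2A(A2B)    # cycled back to style A
total_seg_loss = sum(seg_loss(U(x), A_label) for x in (A, A2B, A2B2A))
```

The key design point is that one segmenter sees all three versions, which is what makes it robust to both styles at once.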

Results : U-Net_GANSeg significantly outperformed U-Net_Heidelberg in segmenting Topcon 1000 OCTs in all layers (Figure 2). It achieved IOUs of 60% to 70%, comparable to human graders, despite having no labeled Topcon 1000 data. Moreover, GANSeg retained the ability to segment Heidelberg OCTs after learning to segment Topcon OCTs.
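As a concrete illustration of the per-layer metric reported above, here is a minimal per-layer IOU computation over integer label masks. The function name and the toy 1-D masks are invented for this example; the real evaluation compares 2-D B-scan masks for each of the 7 layers.

```python
import numpy as np

def layer_iou(pred, truth, layer):
    """Intersection over Union for one layer index in two label masks."""
    p = pred == layer
    t = truth == layer
    union = np.logical_or(p, t).sum()
    if union == 0:
        return float("nan")  # layer absent from both masks
    return np.logical_and(p, t).sum() / union

# Toy 1-D "masks": layer 1 overlaps on 2 of 4 labeled pixels -> IOU = 0.5
pred = np.array([0, 1, 1, 1, 2])
truth = np.array([1, 1, 1, 2, 2])
print(layer_iou(pred, truth, 1))  # 0.5
```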

Conclusions : GANSeg achieved IOUs comparable to human graders, and the GANSeg framework enables supervised DL algorithms to be transferred across devices without labeled data, thereby greatly expanding their applicability.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.

 

Figure 1a. Schematic of the traditional supervised DL framework, with A the training images, A_label the labeled masks, and U a U-Net. 1b. Schematic of GANSeg, where G_A2B denotes the generator of style-A images into style B, and vice versa for G_B2A. U is a single U-Net that segments all versions of images derived from A (A, A2B, and A2B2A) and compares them to A_label. 1c. Sample 256×256 cropped OCTs from Duke Heidelberg (left) and Topcon 1000 (right) highlighting differences between the devices.

 

Figure 2a. Example segmentations of Duke Heidelberg OCT B-scans, and 2b. Topcon 1000 B-scans. 2c. Inter-grader IOUs (G1, G2, G3), U-Net_Heidelberg vs. G1, and U-Net_GANSeg vs. G1 for 30 Topcon images, by segmentation layer.
