Investigative Ophthalmology & Visual Science
June 2021, Volume 62, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2021
Detecting Double Layer Signs with OCT volumes using a 3D Convolutional Neural Network (CNN)
Author Affiliations & Notes
  • Yuka Kihara
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Yingying Shi
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Cancan Lyu
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Jin Yang
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Liang Wang
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Xiaoshuang Jiang
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Mengxi Shen
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Rita Laiginhas
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Hironobu Fujiyoshi
    Chubu University, Kasugai, Aichi, Japan
  • Giovanni Gregori
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Philip J Rosenfeld
    University of Miami Mary and Edward Norton Library of Ophthalmology, Miami, Florida, United States
  • Aaron Y Lee
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Footnotes
    Commercial Relationships   Yuka Kihara, None; Yingying Shi, None; Cancan Lyu, None; Jin Yang, None; Liang Wang, None; Xiaoshuang Jiang, None; Mengxi Shen, None; Rita Laiginhas, None; Hironobu Fujiyoshi, None; Giovanni Gregori, Carl Zeiss Meditec (F); Philip Rosenfeld, Carl Zeiss Meditec (F), Carl Zeiss Meditec (C); Aaron Lee, Carl Zeiss Meditec (F), Genentech (C), Microsoft (F), Novartis (F), NVIDIA (F), Santen (F), Topcon (R), US Food and Drug Administration (E), Verana Health (C)
  • Footnotes
    Support  NIH/NEI K23EY029246, Latham Vision Innovation Award, an unrestricted grant from Research to Prevent Blindness, NIH/NIA R01AG060942, Carl Zeiss Meditec, Inc. (Dublin, CA), the Salah Foundation, the National Eye Institute Center Core Grant (P30EY014801)
Investigative Ophthalmology & Visual Science June 2021, Vol.62, 2102. doi:

Yuka Kihara, Yingying Shi, Cancan Lyu, Jin Yang, Liang Wang, Xiaoshuang Jiang, Mengxi Shen, Rita Laiginhas, Hironobu Fujiyoshi, Giovanni Gregori, Philip J Rosenfeld, Aaron Y Lee; Detecting Double Layer Signs with OCT volumes using a 3D Convolutional Neural Network (CNN). Invest. Ophthalmol. Vis. Sci. 2021;62(8):2102.

Abstract

Purpose : The presence of a double layer sign (DLS) on structural OCT B-scans is a critical predictor of subclinical choroidal neovascularization (CNV), a stage of non-exudative type 1 macular neovascularization (MNV) before the onset of exudation. We sought to develop a 3D CNN to detect a DLS of any size using only structural OCT B-scans.

Methods : Eyes with a DLS and eyes with drusen (Dr), serving as controls, were imaged using a 6x6 mm swept-source OCT angiography scan (SS-OCTA; PLEX Elite 9000, Carl Zeiss Meditec, Dublin, CA). Each scan pattern consisted of 500 A-scans per B-scan, with each B-scan repeated twice at each of 500 B-scan positions along the 6 mm y-axis. The OCTA data were used for manual labeling of DLS and Dr; only the structural OCT was used for deep learning.
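The two repeated B-scans at each position are typically averaged to produce the structural OCT volume used for training. A minimal sketch of that preprocessing step in NumPy, assuming the raw data arrive as a (positions, repeats, depth, A-scans) array; the array layout and the `structural_volume` helper are illustrative assumptions, not taken from the study:

```python
import numpy as np

def structural_volume(raw: np.ndarray) -> np.ndarray:
    """Average the repeated B-scans acquired at each B-scan position,
    yielding one structural OCT volume.

    raw: array of shape (positions, repeats, depth, a_scans),
         e.g. (500, 2, Z, 500) for the 6x6 mm scan pattern described above.
    """
    return raw.mean(axis=1)  # collapse the repeats axis

# toy example: 4 positions, 2 repeats, depth 8, 6 A-scans
raw = np.random.rand(4, 2, 8, 6)
vol = structural_volume(raw)
print(vol.shape)  # (4, 8, 6)
```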

Results : A total of 232 eyes (196 patients; 173 with DLS and 53 with Dr) were imaged using the SS-OCTA scan pattern. The deep learning model for multi-region segmentation (Figure 1) labels DLS and Dr on a single B-scan image (3D-2D model). We generated dense annotations by integrating the manual annotations with the predicted segmentations (Figure 2). After refining the labels, we trained a final 3D convolutional model that segments volumetrically (3D-3D model). Finally, eyes with MNV were identified from en-face projection maps of the predicted masks. The accuracy of the final classification was 92.85% for the 3D-2D model and 94.28% for the 3D-3D model. The mean intersection over union (IoU) was 31.39% (DLS) and 12.23% (Dr) for the 3D-2D model, and 57.36% (DLS) and 25.20% (Dr) for the 3D-3D model.
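The mean IoU figures above are computed per class between predicted and reference masks. A minimal sketch of the metric on binary NumPy masks; the `iou` helper and the convention for two empty masks are illustrative assumptions:

```python
import numpy as np

def iou(pred: np.ndarray, truth: np.ndarray) -> float:
    """Intersection over union of two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0  # both empty: perfect agreement

# toy masks: intersection of 2 pixels, union of 4 pixels
pred  = np.array([[1, 1, 0], [1, 0, 0]])
truth = np.array([[1, 1, 1], [0, 0, 0]])
print(iou(pred, truth))  # 0.5
```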

Conclusions : Our network can detect a DLS from structural B-scans alone by applying an annotation-refinement technique for 3D CNNs to a dataset with coarse annotations.

This is a 2021 ARVO Annual Meeting abstract.

 

Figure 1. A. Process overview. B. Annotation refinement process; we built a 3D-2D model, then generated prediction masks for each B-scan to obtain dense segmentation masks with pseudo-labels. Blue: double layer sign; green: Dr. We expanded the original annotation using these interim results, then trained our 3D-3D model.

 

Figure 2. A & B. Original annotations/predicted masks and the en-face projection map. C. For vertical slice (a), we mapped original annotations (b) and extracted all lesions that overlapped with or were very close to the original annotations (c). The extracted lesions were relabeled (d) so that each connected lesion had a consistent label. D. The criteria in C were applied to the normal B-scan direction to obtain the refined annotation.
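The extraction step in panel C — keeping only predicted lesions that overlap the sparse manual annotations, with each connected lesion given one consistent label — can be sketched as follows. This is a simplified 2D illustration using 4-connectivity that omits the "very close" distance tolerance; the `connected_components` and `keep_overlapping` helpers are hypothetical names, not from the study:

```python
from collections import deque

def connected_components(mask):
    """4-connected component labeling of a 2D binary mask (list of lists)."""
    h, w = len(mask), len(mask[0])
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not labels[y][x]:
                current += 1                      # start a new component
                labels[y][x] = current
                queue = deque([(y, x)])
                while queue:                      # breadth-first flood fill
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mask[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = current
                            queue.append((ny, nx))
    return labels, current

def keep_overlapping(pred_mask, annot_mask):
    """Keep only predicted components that overlap the sparse manual
    annotation; each kept component retains one consistent label."""
    labels, _ = connected_components(pred_mask)
    keep = {labels[y][x]
            for y in range(len(annot_mask))
            for x in range(len(annot_mask[0]))
            if annot_mask[y][x] and labels[y][x]}
    return [[labels[y][x] if labels[y][x] in keep else 0
             for x in range(len(pred_mask[0]))]
            for y in range(len(pred_mask))]

# two predicted lesions; only the left one touches the manual annotation
pred  = [[1, 1, 0, 1],
         [0, 1, 0, 1]]
annot = [[1, 0, 0, 0],
         [0, 0, 0, 0]]
refined = keep_overlapping(pred, annot)
print(refined)  # the right-hand component is dropped
```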
