June 2021
Volume 62, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2021
Segmentation of reticular pseudodrusen (RPD) lesions in OCT B-scans using deep learning
Author Affiliations & Notes
  • Yelena Bagdasarova
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Himeesh Kumar
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Robyn H Guymer
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Marian Blazes
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Cecilia S Lee
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Aaron Y Lee
    Ophthalmology, University of Washington, Seattle, Washington, United States
  • Zhichao Wu
    Centre for Eye Research Australia Ltd, East Melbourne, Victoria, Australia
  • Footnotes
    Commercial Relationships   Yelena Bagdasarova, None; Himeesh Kumar, None; Robyn Guymer, None; Marian Blazes, None; Cecilia Lee, None; Aaron Lee, Carl Zeiss Meditec (F), Genentech (C), Microsoft (F), Novartis (F), NVIDIA (F), Santen (F), Topcon (R), US Food and Drug Administration (E), Verana Health (C); Zhichao Wu, None
  • Footnotes
    Support  NIH/NIA R01AG060942, NIH/NEI K23EY029246, Latham Vision Innovation Award, and an unrestricted grant from Research to Prevent Blindness
Investigative Ophthalmology & Visual Science June 2021, Vol.62, 2138. doi:

      Yelena Bagdasarova, Himeesh Kumar, Robyn H Guymer, Marian Blazes, Cecilia S Lee, Aaron Y Lee, Zhichao Wu; Segmentation of reticular pseudodrusen (RPD) lesions in OCT B-scans using deep learning. Invest. Ophthalmol. Vis. Sci. 2021;62(8):2138.

Abstract

Purpose : RPD are currently considered a high-risk feature for progression in early age-related macular degeneration. We therefore sought to train a deep learning model to reliably segment RPD on OCT B-scans in order to automatically detect and quantify the extent of these lesions.

Methods : A total of 478 B-scan images with visible RPD from 29 participants with bilateral large drusen were used. An expert assigned each pixel a binary label of RPD or normal. A U-Net was trained with a Dice coefficient loss on randomly sampled regions (patches) of 359 B-scans from 20 participants, and was then validated on the 119 B-scans from the remaining 9 participants. Sensitivity to the training patch size and to the number of scans used for training was also investigated.
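
As an illustration only, the sketch below shows a soft Dice coefficient loss and random patch sampling of the kind described above; the framework (PyTorch), function names, and default values are assumptions, not details reported in this abstract.

    # Hedged sketch (not the authors' code): soft Dice loss and random patch
    # sampling for training a U-Net on binary RPD masks.
    import torch

    def dice_loss(pred, target, eps=1.0):
        """Soft Dice loss.
        pred   : (N, 1, H, W) sigmoid probabilities from the U-Net
        target : (N, 1, H, W) binary masks (1 = RPD pixel, 0 = normal)
        """
        pred = pred.flatten(1)
        target = target.float().flatten(1)
        intersection = (pred * target).sum(dim=1)
        union = pred.sum(dim=1) + target.sum(dim=1)
        dice = (2.0 * intersection + eps) / (union + eps)
        return 1.0 - dice.mean()  # training minimizes 1 - Dice

    def sample_patch(bscan, mask, patch_width=256):
        """Randomly crop a patch of the chosen width from a full-width
        (1024-pixel) B-scan and its RPD mask; 256 px is the width
        ultimately selected in this study (see Results)."""
        x0 = torch.randint(0, bscan.shape[-1] - patch_width + 1, (1,)).item()
        return bscan[..., x0:x0 + patch_width], mask[..., x0:x0 + patch_width]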

Results : The model segmentation (Figure 1) achieved a Dice of 0.67 and 0.8 on the validation and training sets, respectively. Horizontal flipping and rotation imparted no appreciable gain in the Dice. Training patch widths were varied from 32 pixels to the full width of 1024 pixels (Figure 2A); the Dice decreased for the 32-pixel case, which is attributed to a higher fraction of lesion segments being cut off at this patch width. A patch width of 256 pixels was therefore selected for subsequent training. For the data subtraction study (Figure 2B), the number of scans in the training set was varied from 359 down to 1; with a training set of a single B-scan, the Dice was 0.48.
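
For reference, a minimal sketch of a per-scan Dice computed on thresholded predictions for full-size validation scans, of the kind plotted in Figure 2, is given below; the binarization threshold and function names are assumptions rather than details from the study.

    # Hedged sketch: evaluation-side Dice for one full-size validation B-scan.
    import numpy as np

    def dice_score(pred_prob, target, threshold=0.5, eps=1e-7):
        """Dice between a thresholded model prediction and the ground-truth
        RPD mask for one full-size (1024-pixel-wide) B-scan."""
        pred = (pred_prob >= threshold).astype(np.float32)
        target = target.astype(np.float32)
        intersection = (pred * target).sum()
        return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)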

Conclusions : A U-Net was successfully trained to segment RPD lesions, achieving a Dice of 0.67 on the validation set. Our results suggest that the patch width should be chosen large enough to avoid cutting off most lesions. The data subtraction study suggests that the dataset is uniform enough for the network to learn a general set of features from a single B-scan. Our preliminary findings demonstrate promise for developing a reliable method for detecting and quantifying RPD.

This is a 2021 ARVO Annual Meeting abstract.

 

Figure 1: (A) Histogram of Dice for validation patches. (B) Examples of an input B-scan patch (left), the ground-truth segmentation (center), and the model prediction versus ground truth (right), with true positives (white), false negatives (red), and false positives (blue).

 

Figure 2: Dice for model predictions evaluated on full-size validation scans as a function of (A) training patch width and (B) the number of scans sampled for the training set.
