Open Access
ARVO Imaging in the Eye Conference Abstract  |   August 2019
Deep learning based robust fovea localization using OCT Angiography
Author Affiliations & Notes
  • Homayoun Bagherinia
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Mary Durbin
    Carl Zeiss Meditec, Inc., Dublin, California, United States
  • Footnotes
    Commercial Relationships: Homayoun Bagherinia, Carl Zeiss Meditec, Inc. (E); Mary Durbin, Carl Zeiss Meditec, Inc. (E)
    Support: None
Investigative Ophthalmology & Visual Science, August 2019, Vol. 60, Issue 11, PB0108.
Abstract

Purpose : Automated analyses of retinal disease use the location of the fovea as a reference point. Finding the fovea in structural OCT en face images can be difficult in disease cases because of poor contrast and loss of the foveal pit. In contrast, OCT-A superficial slabs provide high-contrast vasculature images. This abstract proposes a robust method for finding the center of the fovea in OCT-A images using a convolutional neural network (CNN) architecture.

Methods : Images from normal eyes and from eyes with retinal diseases such as AMD and DR were acquired over fields of view (FOV) of 3x3 mm, 6x6 mm, 8x8 mm, 9x9 mm, 12x12 mm, and 15x9 mm using CIRRUS™ HD-OCT 5000 with AngioPlex® OCT Angiography and PLEX® Elite 9000 SS-OCT (ZEISS, Dublin, CA). Superficial capillary plexus vasculature images were generated and framed into 512x512 pixel images over a 12x12 mm FOV to create the training and testing sets. The fovea location in 1330 images (1064 for training and 266 for testing) was marked by a human expert. The training images were augmented by rotating each image about its center by angles within ±10 degrees, increasing the training set to 7448 images. The target images are binary discs of 3 mm diameter centered at the marked fovea location. A U-net with 5 contracting and 5 expansive layers, rectified linear unit activations, max pooling, binary cross-entropy loss, and a sigmoid activation in the final layer was trained end to end. The fovea center was then detected by template matching between the predicted image and the 3 mm binary disc.
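To make the pipeline concrete, the following is a minimal sketch under stated assumptions: TensorFlow/Keras and OpenCV are assumed, and the filter counts, optimizer, and helper names (make_disc_target, unet) are illustrative rather than the authors' exact configuration. It shows the three pieces described above: disc-shaped target images, a 5-level U-net trained with binary cross-entropy and a sigmoid output, and template matching against the 3 mm disc to read out the fovea center.

import numpy as np
import cv2
from tensorflow.keras import layers, Model

PX_PER_MM = 512 / 12.0                          # 512x512 image spanning a 12x12 mm FOV
DISC_RADIUS_PX = int(round(1.5 * PX_PER_MM))    # 3 mm diameter target disc (1.5 mm radius)

def make_disc_target(center_xy, size=512):
    # Binary disc of 3 mm diameter centered at the expert-marked fovea (pixel coordinates).
    yy, xx = np.mgrid[:size, :size]
    cx, cy = center_xy
    return ((xx - cx) ** 2 + (yy - cy) ** 2 <= DISC_RADIUS_PX ** 2).astype(np.float32)

def unet(size=512, base=16, depth=5):
    # U-net with `depth` contracting and `depth` expansive levels, ReLU and max pooling.
    inp = layers.Input((size, size, 1))
    x, skips = inp, []
    for d in range(depth):                                  # contracting path
        x = layers.Conv2D(base * 2 ** d, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(base * 2 ** d, 3, padding="same", activation="relu")(x)
        skips.append(x)
        x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(base * 2 ** depth, 3, padding="same", activation="relu")(x)
    for d in reversed(range(depth)):                        # expansive path
        x = layers.Conv2DTranspose(base * 2 ** d, 2, strides=2, padding="same")(x)
        x = layers.Concatenate()([x, skips[d]])
        x = layers.Conv2D(base * 2 ** d, 3, padding="same", activation="relu")(x)
        x = layers.Conv2D(base * 2 ** d, 3, padding="same", activation="relu")(x)
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)      # per-pixel disc probability
    return Model(inp, out)

model = unet()
model.compile(optimizer="adam", loss="binary_crossentropy")
# model.fit(train_images, train_discs, ...)  # end-to-end training on image/disc pairs

# Detection: template-match the predicted map against the 3 mm binary disc.
img = np.random.rand(512, 512).astype(np.float32)           # stand-in for an OCT-A slab
pred = model.predict(img[None, ..., None])[0, ..., 0]
disc = make_disc_target((DISC_RADIUS_PX, DISC_RADIUS_PX), size=2 * DISC_RADIUS_PX + 1)
score = cv2.matchTemplate(pred, disc, cv2.TM_CCORR_NORMED)
_, _, _, top_left = cv2.minMaxLoc(score)                    # (x, y) of best-match corner
fovea_xy = (top_left[0] + DISC_RADIUS_PX, top_left[1] + DISC_RADIUS_PX)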

Results : Figure 1 shows examples of fovea locations detected by the algorithm and the human expert, with the corresponding predicted 3 mm diameter disc centered at the fovea, for normal eyes and various disease cases such as AMD and DR. Figure 2 shows a difference plot of the fovea locations detected by the algorithm and the human expert. The success rate is 97% for an error smaller than 150 μm between the algorithm and the human expert.
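For scale, 150 μm corresponds to about 6.4 pixels at this sampling (12 mm over 512 pixels ≈ 23.4 μm per pixel). A minimal sketch of the agreement metric, assuming fovea coordinates in pixels; the function name is hypothetical:

import numpy as np

UM_PER_PX = 12000.0 / 512.0   # 12 mm FOV over 512 pixels ≈ 23.4 μm per pixel

def success_rate(algo_xy, expert_xy, threshold_um=150.0):
    # Fraction of eyes whose algorithm-vs-expert fovea distance falls below the threshold.
    d_px = np.linalg.norm(np.asarray(algo_xy) - np.asarray(expert_xy), axis=1)
    return float(np.mean(d_px * UM_PER_PX < threshold_um))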

Conclusions : We demonstrated a robust method for finding the center of the fovea in OCT Angiography images using a U-net architecture. Reliable detection of the fovea location is crucial for automated analyses of retinal disease.

This abstract was presented at the 2019 ARVO Imaging in the Eye Conference, held in Vancouver, Canada, April 26-27, 2019.

 

Fig 1: Examples of fovea location detected by the algorithm (red cross) and human expert (green cross) with corresponding predicted segmentation of 3 mm diameter disc centered at the fovea.

 

Fig 2: Difference plot of fovea location detected by the algorithm and human expert.
