ARVO Imaging in the Eye Conference Abstract  |   August 2019
Deep Learning Based Method for Retinal Layer Segmentation In Optical Coherence Tomography Images
Author Affiliations & Notes
  • Ivana Zadro
    Faculty of Electrical Engineering and Computing, Zagreb, Croatia
  • Sven Lončarić
    Faculty of Electrical Engineering and Computing, Zagreb, Croatia
  • Marin Radmilovic
    Department of Ophthalmology, University Hospital Centre “Sestre milosrdnice”, Zagreb, Croatia
  • Zoran Vatavuk
    Department of Ophthalmology, University Hospital Centre “Sestre milosrdnice”, Zagreb, Croatia
  • Footnotes
    Commercial Relationships   Ivana Zadro, None; Sven Lončarić, None; Marin Radmilovic, None; Zoran Vatavuk, None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science August 2019, Vol.60, PB0110. doi:
Ivana Zadro, Sven Lončarić, Marin Radmilovic, Zoran Vatavuk; Deep Learning Based Method for Retinal Layer Segmentation In Optical Coherence Tomography Images. Invest. Ophthalmol. Vis. Sci. 2019;60(11):PB0110.
Abstract

Purpose : Manual segmentation of retinal layers in pathological optical coherence tomography (OCT) scans is time-consuming, expert-grader dependent, and prone to errors, so an automatic segmentation method is needed. The objective of this study is to investigate the applicability of the U-Net network, combined with a postprocessing method, for automatic segmentation of retinal layers in a database of OCT images with age-related macular degeneration (AMD).

Methods : The method consists of three steps. The first step is the U-Net neural network for layer segmentation. The second step is a postprocessing method which corrects the neural network outputs by utilizing the following a priori knowledge about the retinal layers: a) the layers are topologically ordered, b) the layers cannot be intertwined. The third step is Canny edge filtering, used for layer boundary extraction.
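The two anatomical priors used in the postprocessing step can be illustrated with a minimal numpy sketch (this is an illustrative reconstruction, not the authors' implementation; the function name and the monotone-projection strategy are assumptions):

```python
import numpy as np

def enforce_layer_order(labels: np.ndarray) -> np.ndarray:
    """Postprocess a per-pixel label map (rows = depth, columns = A-scans,
    labels 0..K ordered from the vitreous downward) so that within every
    A-scan column the label sequence is non-decreasing from top to bottom.
    This encodes the two priors from the abstract: layers are topologically
    ordered and cannot intertwine. Illustrative sketch only."""
    # A running column-wise maximum forces monotonicity top-to-bottom,
    # overwriting any spurious dip back to an earlier (higher) layer.
    return np.maximum.accumulate(labels, axis=0)

# Toy A-scan column with one misclassified pixel dipping back to layer 0:
col = np.array([[0], [0], [1], [0], [2], [1], [2]])
fixed = enforce_layer_order(col)
# fixed column, top to bottom: 0, 0, 1, 1, 2, 2, 2
```

A real pipeline would apply such a correction to the U-Net's argmax output before boundary extraction; more sophisticated variants project onto the closest valid ordered labeling rather than taking a running maximum.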
The expert grader annotated four layer boundaries in a total of 1270 B-scans: the inner limiting membrane (ILM), the inner plexiform layer/inner nuclear layer boundary (IPL/INL), the retinal pigment epithelium (RPE), and Bruch’s membrane (BM).
The proposed model was trained and validated on 23 macular spectral-domain OCT volumes of eyes with age-related macular degeneration. To evaluate the model, leave-one-out volume validation was repeated 23 times. The Dice similarity index was used to measure layer segmentation accuracy, whereas the accuracy of the layer boundary segmentation was evaluated by the average surface distance error (ASDE).
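The two evaluation metrics can be written out as a short numpy sketch (a minimal illustration of the standard definitions; the abstract does not give the authors' exact formulas, and the boundary representation as one row index per A-scan column is an assumption):

```python
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity index between two binary layer masks:
    2|A ∩ B| / (|A| + |B|). Returns 1.0 for two empty masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def asde(pred_boundary, true_boundary) -> float:
    """Average surface distance error between two boundaries, each given
    as one depth (row) value per A-scan column, in pixels."""
    p = np.asarray(pred_boundary, dtype=float)
    t = np.asarray(true_boundary, dtype=float)
    return float(np.mean(np.abs(p - t)))

# Toy 2x3 masks and a 3-column boundary:
pred = np.array([[1, 1, 0], [1, 1, 0]])
true = np.array([[1, 0, 0], [1, 1, 0]])
d = dice(pred, true)            # 2*3 / (4+3) = 6/7
e = asde([10, 12, 11], [10, 13, 11])  # mean of |0|, |1|, |0| = 1/3
```

In the leave-one-out protocol described above, these metrics would be computed on each held-out volume and averaged across the 23 folds.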

Results : Average Dice similarity indices for the three layers and average surface distance errors for the four layer boundaries across all 23 validation experiments are given in Table 1.
The comparison between the manual segmentation and the segmentation using the proposed method is shown in Figure 1. Figure 1 (a) shows the original OCT image with manually segmented layer boundaries, Figure 1 (b) is the corresponding U-Net network output, Figure 1 (c) is the result of the postprocessing step, and Figure 1 (d) shows the final result after applying Canny edge filtering.

Conclusions : The average Dice similarity indices above 90% for all three layers and low average surface distance errors for all four layer boundaries indicate that the proposed method can be used for effective automatic segmentation of retinal layers, even in the presence of retinal pathological changes.

This abstract was presented at the 2019 ARVO Imaging in the Eye Conference, held in Vancouver, Canada, April 26-27, 2019.

Figure 1. a) b) c) d)
