ARVO Annual Meeting Abstract  |  July 2018
Volume 59, Issue 9
Open Access
Automatic segmentation of retinal and choroidal thickness in OCT images using convolutional neural networks
Author Affiliations & Notes
  • David Alonso-Caneiro
    Queensland University of Technology, Kelvin Grove, Queensland, Australia
  • Scott A Read
    Queensland University of Technology, Kelvin Grove, Queensland, Australia
  • Jared Hamwood
    Queensland University of Technology, Kelvin Grove, Queensland, Australia
  • Stephen Vincent
    Queensland University of Technology, Kelvin Grove, Queensland, Australia
  • Michael J Collins
    Queensland University of Technology, Kelvin Grove, Queensland, Australia
  • Footnotes
    Commercial Relationships: David Alonso-Caneiro, None; Scott Read, None; Jared Hamwood, None; Stephen Vincent, None; Michael Collins, None
    Support: None
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 1732.
Abstract

Purpose: To evaluate the performance of a fully automatic, deep learning based method to segment retinal and choroidal boundaries in OCT images and derive retinal thickness (RT) and choroidal thickness (ChT), using data obtained from a healthy pediatric cohort.

Methods: Custom-designed convolutional neural networks (CNNs) were trained to classify three boundaries: the inner limiting membrane (ILM), the retinal pigment epithelium (RPE), and the chorio-scleral interface (CSI). The CNN uses a rectangular patch size of 61x31 (VxH) pixels, and three network-input options were tested during training: (i) standard intensity, (ii) attenuation coefficient equivalent, and (iii) a combination of both (dual). For each option, the network was trained on the same 137 randomly selected images (70 subjects) and validated on 28 images to ensure adequate training. The network was then tested on 30 different images from 30 different subjects. To test repeatability, consecutive images from the same subject/location were used. The CNN outputs a probability map for each boundary position, which was traced with a graph-search technique. The results from the automatic method were compared to data from manual segmentation by an experienced observer.
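For readers unfamiliar with patch-based boundary classification, the sketch below illustrates the general approach. It is a minimal, hypothetical example: the abstract specifies only the 61x31 (VxH) patch size and the three input options, so the layer configuration, the class layout (background plus the three boundaries), and the use of PyTorch are assumptions, not the authors' architecture. The "dual" option is modeled by stacking the intensity and attenuation-coefficient images as two input channels.

```python
# Illustrative patch-classification CNN for OCT boundary detection.
# Hypothetical architecture: the abstract specifies only the 61x31 (VxH)
# patch size and the three input options, not the layer configuration.
import torch
import torch.nn as nn

class PatchCNN(nn.Module):
    def __init__(self, in_channels: int = 1, n_classes: int = 4):
        # in_channels=1 for the intensity or attenuation input alone,
        # in_channels=2 for the "dual" (stacked) option.
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 61x31 -> 30x15
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                 # 30x15 -> 15x7
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 15 * 7, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),       # background, ILM, RPE, CSI
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x))

# Dual-input example: intensity and attenuation images stacked as channels.
model = PatchCNN(in_channels=2)
patches = torch.randn(8, 2, 61, 31)          # batch of 61x31 (VxH) patches
logits = model(patches)
probs = torch.softmax(logits, dim=1)         # per-class probabilities
```

In this formulation, each pixel of a B-scan is classified from its surrounding patch, producing one probability map per boundary for the subsequent graph-search tracing step.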

Results: The well-defined ILM and RPE boundaries showed small errors (1 pixel) compared with the CSI, which exhibited slightly larger errors (4 pixels) across all tested options (Table 1). The different CNN inputs had a small effect on the boundary error, with the dual input yielding a slightly smaller mean error and SD. Analysis of the ChT and RT revealed errors of -0.1 and -2 pixels, respectively. The mean repeatability differences (in pixels) for the RT [1.10; 1.09; 1.08] and ChT [3.40; 3.43; 3.38] across all options were comparable with the repeatability of manual segmentation [RT 1.24, ChT 2.51].
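The boundary tracing and the pixel-level error metrics reported above can be illustrated with a short sketch. The abstract names a graph-search technique over the CNN probability maps but does not specify its formulation; the dynamic-programming shortest path below (with a negative-log-probability cost and a vertical smoothness constraint) is one plausible stand-in, and the function names are hypothetical.

```python
# Illustrative graph-search boundary trace over a CNN probability map.
# Hypothetical cost function and smoothness constraint: the abstract
# names a graph-search technique but not its exact formulation.
import numpy as np

def trace_boundary(prob_map: np.ndarray, max_jump: int = 2) -> np.ndarray:
    """Trace one boundary column-by-column through a (rows x cols)
    boundary-probability map via dynamic programming."""
    rows, cols = prob_map.shape
    cost = -np.log(prob_map + 1e-9)       # low cost where probability is high
    acc = np.full((rows, cols), np.inf)   # accumulated path cost
    back = np.zeros((rows, cols), dtype=int)
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            # Restrict vertical jumps between adjacent columns.
            lo, hi = max(0, r - max_jump), min(rows, r + max_jump + 1)
            prev = acc[lo:hi, c - 1]
            k = int(np.argmin(prev))
            acc[r, c] = cost[r, c] + prev[k]
            back[r, c] = lo + k
    # Backtrack the minimum-cost path (one row index per column).
    path = np.zeros(cols, dtype=int)
    path[-1] = int(np.argmin(acc[:, -1]))
    for c in range(cols - 1, 0, -1):
        path[c - 1] = back[path[c], c]
    return path

def mean_signed_error(auto: np.ndarray, manual: np.ndarray) -> float:
    """Mean signed boundary-position difference in pixels."""
    return float(np.mean(auto.astype(float) - manual.astype(float)))

# Thickness in pixels follows as the distance between traced boundaries,
# e.g. RT = RPE - ILM and ChT = CSI - RPE at each A-scan location.
```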

Conclusions: The automatic and manual segmentation methods showed very close agreement, suggesting that the proposed method provides robust detection of the retinal and choroidal boundaries of interest.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.

 

Table 1. Difference in boundary position and thickness between the proposed algorithm and the manual observer for the data set (n=30) and the different input options.

Fig 1. B-scans representing different examples, with typical variations in thickness and contrast. Manual segmentation (red, solid) and automatic segmentation using the dual input (green, dotted).
