Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract | June 2024
Semi-automated segmentation of ONH tissues using deep learning
Author Affiliations & Notes
  • Kelly A. Clingo
    Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
  • Cameron A Czerpak
    Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
  • Harry A Quigley
    Ophthalmology, Johns Hopkins Medicine Wilmer Eye Institute, Baltimore, Maryland, United States
  • Thao D Nguyen
    Mechanical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
    Ophthalmology, Johns Hopkins Medicine Wilmer Eye Institute, Baltimore, Maryland, United States
  • Footnotes
    Commercial Relationships: Kelly Clingo, None; Cameron Czerpak, None; Harry Quigley, Heidelberg Engineering, Code C (Consultant/Contractor); Thao Nguyen, None
  • Footnotes
    Support: BrightFocus G2021012S; Research to Prevent Blindness
Kelly A. Clingo, Cameron A Czerpak, Harry A Quigley, Thao D Nguyen; Semi-automated segmentation of ONH tissues using deep learning. Invest. Ophthalmol. Vis. Sci. 2024;65(7):2492.

Abstract

Purpose: To develop an automated method to segment the different tissues of the optic nerve head (ONH) in optical coherence tomography (OCT) radial scans using deep learning.

Methods: Spectral-domain OCT (SD-OCT) was used to acquire 24 radial scans of the ONH of 58 human eyes (mean resolution: R = 5.87, Z = 3.87 µm/pixel). Image contrast was enhanced using adaptive histogram equalization and a gamma filter. Bruch's membrane (BM), the BM opening (BMO), the choroid-sclera interface (CS), and the anterior lamina cribrosa (ALC) were manually marked in each scan. Image sets from 46 ONHs were used to train a DeepLabv3+ network (Chen 2018) initialized with ResNet-50, 6 image sets were used for validation, and another 6 were used to test the trained network (Net1, Fig. 1). When presented with a new image, Net1 outputs a confidence value for each pixel belonging to a given tissue (e.g., retina, choroid). A polynomial is fit to each cluster of tissue labels to determine the tissue border (e.g., 6th order for the CS). The user accepts the markings or corrects them by dragging them to the correct location. The corrected markings train a copy of Net1 (Net2, Fig. 1); for later runs, the outputs of Net1 and Net2 are averaged. The accuracy of the method was measured as the difference between the predicted and manual markings on the 6 test ONHs.
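As a rough illustration of the preprocessing step, the sketch below applies adaptive histogram equalization (here the contrast-limited variant, CLAHE) followed by a gamma adjustment, assuming OpenCV and NumPy; the clip limit, tile size, and gamma value are illustrative choices, not settings reported in the abstract.

```python
import cv2
import numpy as np

def enhance_contrast(scan_u8, clip_limit=2.0, tile=(8, 8), gamma=0.8):
    """Enhance an 8-bit grayscale OCT radial scan.

    clip_limit, tile, and gamma are illustrative values; the abstract
    reports only that adaptive histogram equalization and a gamma
    filter were used, not the specific settings.
    """
    # Contrast-limited adaptive histogram equalization (CLAHE)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile)
    equalized = clahe.apply(scan_u8)

    # Gamma correction via a lookup table
    lut = np.array([(i / 255.0) ** gamma * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(equalized, lut)
```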

Results: The average root mean square errors of the semi-automated markings for the BMO, BM, CS, and ALC were 17.55 ± 22.13, 2.32 ± 3.81, 6.27 ± 4.64, and 12.84 ± 8.29 pixels, respectively (Fig. 2). Of the 6 test ONHs, 1 required <5 min of manual correction, 4 required 5-10 min, and 1 required 10-20 min. All were faster than manual marking (~1 hr/ONH).
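For reference, a per-scan error of this kind can be computed as in the hypothetical sketch below, which takes each border as an array of row coordinates (one per image column) and reports the root mean square of the pixel differences; the abstract does not specify the exact distance computation, so this representation is an assumption.

```python
import numpy as np

def border_rmse(predicted, manual):
    """RMSE (pixels) between predicted and manual borders for one scan.

    Both inputs are 1-D arrays giving the border's row coordinate at
    each image column (an assumed representation, not specified in
    the abstract).
    """
    diff = np.asarray(predicted) - np.asarray(manual)
    return float(np.sqrt(np.mean(diff ** 2)))

# Mean ± SD across the test scans (hypothetical data):
# errors = [border_rmse(p, m) for p, m in zip(pred_borders, manual_borders)]
# print(f"{np.mean(errors):.2f} ± {np.std(errors):.2f} px")
```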

Conclusions: The trained network accurately identified the tissues of the ONH and marked their boundaries. Adjusting the markings updates the model with additional training data.
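A minimal sketch of how a corrected marking could be folded back into Net2, assuming a PyTorch segmentation model that returns per-pixel logits; the loss function, optimizer, and number of update steps are assumptions, since the abstract does not report them.

```python
import torch

def update_net2(net2, image, corrected_labels, optimizer, steps=1):
    """Incrementally fine-tune Net2 on one user-corrected segmentation.

    image: (1, C, H, W) float tensor; corrected_labels: (1, H, W) long
    tensor of per-pixel tissue indices derived from the corrected
    borders. All training details here are illustrative.
    """
    net2.train()
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        optimizer.zero_grad()
        logits = net2(image)                     # (1, n_tissues, H, W)
        loss = loss_fn(logits, corrected_labels)
        loss.backward()
        optimizer.step()
    return loss.item()
```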

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.


Figure 1. The input image (1) is fed to Net1 and Net2, both trained on the training set. Net1 and Net2 each output confidence matrices (2a, 2b) for each tissue border. The matrices are averaged, and each pixel is assigned the tissue label with the highest confidence, producing the clusters of tissue labels (3). The highest-confidence pixel in each column converts a cluster to a border point (4). Each border is fit with a polynomial, and the user can correct the final border segmentations (5) before they are used to continue training Net2 for the next image.
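A minimal sketch of the steps this caption describes, assuming each network returns a per-pixel confidence array of shape (n_tissues, n_rows, n_cols); the function and variable names are hypothetical, and only the 6th-order fit for the CS border comes from the abstract.

```python
import numpy as np

def extract_border(conf1, conf2, tissue_idx, poly_order=6):
    """Average two confidence maps, label pixels, and fit a border.

    conf1, conf2: arrays of shape (n_tissues, n_rows, n_cols) from
    Net1 and Net2. tissue_idx selects the tissue whose border is fit
    (e.g., the CS with a 6th-order polynomial, per the abstract);
    the other details here are assumptions.
    """
    conf = 0.5 * (conf1 + conf2)          # average Net1 and Net2 (2a, 2b)
    labels = conf.argmax(axis=0)          # per-pixel tissue label (3)

    cols, rows = [], []
    for c in range(conf.shape[2]):
        in_tissue = np.flatnonzero(labels[:, c] == tissue_idx)
        if in_tissue.size == 0:
            continue                      # tissue absent in this column
        # Most confident pixel of the tissue cluster in this column (4)
        r = in_tissue[conf[tissue_idx, in_tissue, c].argmax()]
        cols.append(c)
        rows.append(r)

    # Fit a polynomial row = f(col) to the border points (5)
    coeffs = np.polyfit(cols, rows, poly_order)
    return np.polyval(coeffs, np.arange(conf.shape[2]))
```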


Figure 2. Comparison of manual and predicted markings for the radial scans with the (A) highest and (B) lowest error.

