Investigative Ophthalmology & Visual Science
June 2021, Volume 62, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2021
AxoNet 2.0: A deep learning (DL)-based tool for morphometric analysis of retinal ganglion cell (RGC) axons
Author Affiliations & Notes
  • Vidisha Goyal
    Electrical and Computer Engineering, Georgia Institute of Technology College of Engineering, Atlanta, Georgia, United States
  • Gabriela Sánchez-Rodríguez
    Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
  • Bailey Hannon
    Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
  • Matthew D. Ritch
    Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
  • Aaron M. Toporek
    Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
  • Andrew Feola
    Atlanta VA Center for Visual & Neurocognitive Rehabilitation, Decatur, Georgia, United States
  • Arthur Thomas Read
    Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
  • C Ross Ethier
    Wallace H. Coulter Department of Biomedical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
    Woodruff School of Mechanical Engineering, Georgia Institute of Technology, Atlanta, Georgia, United States
  • Footnotes
    Commercial Relationships   Vidisha Goyal, None; Gabriela Sánchez-Rodríguez, None; Bailey Hannon, None; Matthew Ritch, None; Aaron Toporek, None; Andrew Feola, None; Arthur Read, None; C Ethier, None
  • Footnotes
Support  NIH R01 EY025286 (CRE), 5T32 EY007092-32 (BGH), Department of Veterans Affairs R&D Service Career Development Award (RX002342; AJF), and Georgia Research Alliance (CRE).
Investigative Ophthalmology & Visual Science June 2021, Vol. 62, 1012.
      Vidisha Goyal, Gabriela Sánchez-Rodríguez, Bailey Hannon, Matthew D. Ritch, Aaron M. Toporek, Andrew Feola, Arthur Thomas Read, C Ross Ethier; AxoNet 2.0: A deep learning (DL)-based tool for morphometric analysis of retinal ganglion cell (RGC) axons. Invest. Ophthalmol. Vis. Sci. 2021;62(8):1012.
Abstract

Purpose : A key outcome measure in animal models of glaucoma is the number and appearance of RGC axons, typically obtained from light micrographs of optic nerve (ON) cross-sections. However, this analysis is time-consuming and subjective. Here we expand on our RGC axon counting software (AxoNet 1.0, [1]) to also identify axoplasm (“Ax”) and myelin sheath (“MySh”) of normal-appearing RGC axons and extract their morphometric features.

Methods : Micrographs from 12 control and 14 hypertensive rat ONs were acquired as before [1], representing a wide range of glaucomatous damage. 1421 sub-images (12×12 μm²) were randomly selected from these micrographs, and the Ax and MySh of normal-appearing axons were annotated by 16 trained individuals with cross-validation. Sub-images were randomly divided into training (90%), validation (5%), and test (5%) sets. A U-Net architecture was trained to segment the Ax and MySh of normal-appearing axons, assigning each pixel a probability of being Ax or MySh. These probability maps were then processed through an image processing pipeline to compute, for each axon: area, convex area, perimeter, eccentricity, average MySh probability (the probability of being MySh, averaged over all MySh pixels), and average Ax probability. Outcome measures included: (i) the soft-dice coefficient between predicted and ground-truth segmentation maps; and (ii) the R² of regressing the number of normal-appearing axons counted automatically vs. ground truth. [1] Ritch et al., Sci. Rep., 2020.
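Two of the quantitative steps above — the soft-dice agreement metric and the per-axon morphometrics computed from a probability map — can be sketched as follows, assuming a NumPy/scikit-image environment. The function names, the 0.5 threshold, and the use of connected-component labeling to delineate individual axons are illustrative assumptions, not details taken from the abstract.

```python
import numpy as np
from skimage.measure import label, regionprops

def soft_dice(pred, truth, eps=1e-7):
    """Soft-dice coefficient between a predicted probability map and a
    binary ground-truth mask; 1.0 indicates perfect agreement."""
    inter = float(np.sum(pred * truth))
    return (2.0 * inter + eps) / (float(np.sum(pred)) + float(np.sum(truth)) + eps)

def axon_features(prob_map, threshold=0.5):
    """Threshold the probability map, label connected components
    (candidate axons), and measure each region's morphometrics."""
    regions = regionprops(label(prob_map >= threshold), intensity_image=prob_map)
    return [
        {
            "area": r.area,                      # pixel count
            "convex_area": r.convex_area,        # area of convex hull
            "perimeter": r.perimeter,            # contour length
            "eccentricity": r.eccentricity,      # 0 = circle, →1 = elongated
            "mean_probability": r.mean_intensity # avg. probability over region
        }
        for r in regions
    ]
```

The same `soft_dice` form is commonly used directly as a differentiable training loss (1 − soft-dice), since it operates on probabilities rather than hard labels.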

Results : The model performed well (Fig 1) with high soft-dice coefficients (Fig 2). AxoNet 2.0 outperformed AxoNet 1.0 as judged by agreement with ground truth (Fig 2).

Conclusions : A DL model can segment the Ax and MySh of normal-appearing axons in ON sub-images spanning a wide range of ON health. This approach will speed RGC axon counting and morphometric analysis in animal models of glaucomatous optic neuropathy.

This is a 2021 ARVO Annual Meeting abstract.

 

Figure 1. AxoNet 2.0 performance on a representative image. (A) Original sub-image. (B) Annotated image (pink = Ax, yellow = MySh). (C) Predicted Ax probability map (greyscale bar: Ax probability). (D, E) Ax and MySh of segmented axons, respectively. (F) Features computed for a segmented axon.

 

Figure 2. (A) Mean soft-dice coefficients [95% CI], a measure of image agreement, across datasets for the full mask, Ax mask, and MySh mask. Comparison between automated and manual axon counts for the training (B, C), validation (D, E), and test (F, G) data sets.
