June 2023
Volume 64, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2023
Anchor-free Attention-guided Deep Neural Network for Automated Detection and Grading of Optic Nerve Axons
Author Affiliations & Notes
  • Kyle Freeman
    The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Sophie Pilkinton
    The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Madhusudhanan Balasubramanian
    Electrical and Computer Engineering, The University of Memphis, Memphis, Tennessee, United States
  • Shelby Graham
    The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Monica M Jablonski
    The University of Tennessee Health Science Center, Memphis, Tennessee, United States
  • Footnotes
    Commercial Relationships   Kyle Freeman None; Sophie Pilkinton None; Madhusudhanan Balasubramanian None; Shelby Graham None; Monica Jablonski None
  • Footnotes
    Support  NEI EY001200 and RPB Challenge Grant
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 375. doi:

      Kyle Freeman, Sophie Pilkinton, Madhusudhanan Balasubramanian, Shelby Graham, Monica M Jablonski; Anchor-free Attention-guided Deep Neural Network for Automated Detection and Grading of Optic Nerve Axons. Invest. Ophthalmol. Vis. Sci. 2023;64(8):375.

© ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Labeling retinal ganglion cell (RGC) axons when studying neurodegenerative ocular pathologies is time consuming and often fraught with large inter-observer variation. Previously, we developed AxonClassNet, an anchor-based convolutional neural network (Mask R-CNN) that can segment and classify RGC axons as live or necrotic in confocal microscopy images. In this study, we present an anchor-free attention-guided network for detecting and grading optic nerve (ON) axons.

Methods : Learning-based object detection methods often use region proposals and anchors to identify objects of interest in images. The associated hyper-parameters must be tuned to the scale, aspect ratio and density of the objects in the images. In this study, we utilized an anchor-free spatial attention-guided architecture (CenterMask) to detect and classify ON axons as live or necrotic. ONs from several generations of NIH heterogeneous stock outbred rats were harvested, prepared, embedded, sliced, stained with p-phenylenediamine and imaged using a confocal microscope. A set of 74 ON images from 4 rats was semi-automatically annotated as follows. First, axons in these images were automatically localized and graded as live or necrotic using AxonClassNet. These annotations were then manually reviewed and corrected by trained reviewers. CenterMask was initialized with pre-trained weights and fine-tuned using the ON images. Sets of 25, 12 and 35 images were used for training, validation and testing, respectively.
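
The fixed-count train/validation/test partitioning described above can be sketched as a simple shuffled split. This is an illustrative sketch only, not the authors' code; the function name `split_dataset` and the use of `range(72)` as stand-in image IDs are assumptions for the example.

```python
import random

def split_dataset(image_ids, n_train=25, n_val=12, n_test=35, seed=0):
    """Shuffle image IDs and partition them into disjoint
    train/validation/test subsets of fixed sizes.

    The 25/12/35 counts mirror the split described in the abstract;
    this helper is an illustrative sketch, not the study's pipeline.
    """
    ids = list(image_ids)
    rng = random.Random(seed)  # fixed seed for a reproducible split
    rng.shuffle(ids)
    assert len(ids) >= n_train + n_val + n_test, "not enough images"
    train = ids[:n_train]
    val = ids[n_train:n_train + n_val]
    test = ids[n_train + n_val:n_train + n_val + n_test]
    return train, val, test

train, val, test = split_dataset(range(72))
```

In practice such a split would be done once and frozen, so that the held-out test images are never seen during fine-tuning.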

Results : The average precision, recall and F1-score for correctly segmenting and grading necrotic axons were 92.9%, 81.4% and 86.1%, respectively. For correctly segmenting and grading live axons, the average precision, recall and F1-score were 76.4%, 79.5% and 73.1%, respectively.
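
For reference, per-class detection metrics of this kind are computed from matched-detection counts (true positives, false positives, false negatives). The sketch below shows the standard definitions; note that when precision, recall and F1 are each averaged over images, as reported figures often are, the averaged F1 need not equal the F1 of the averaged precision and recall. The function name is illustrative, not from the study.

```python
def detection_metrics(tp, fp, fn):
    """Standard precision, recall, and F1-score from matched
    detection counts for a single class.

    tp: detections correctly matched to ground-truth axons
    fp: detections with no matching ground truth
    fn: ground-truth axons with no matching detection
    """
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, 9 correct detections with 1 spurious detection and 2 missed axons give a precision of 0.90 and a recall of about 0.82.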

Conclusions : CenterMask trained on ON images is capable of identifying and classifying RGC axons as live or necrotic. Compared with the anchor-based AxonClassNet, the anchor-free CenterMask provided lower accuracy in segmenting and grading live axons. With additional training, the anchor-free architecture can be used for high-throughput analysis of ON axons in the study of neurodegenerative diseases such as glaucoma.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.
