Investigative Ophthalmology & Visual Science
June 2024, Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract | June 2024
3D reconstruction of non-human primate axons using machine-learning software
Author Affiliations & Notes
  • Viraj Deshpande
    Ophthalmology and Vision Science, University of California Davis, Davis, California, United States
  • Nicholas Marsh-Armstrong
    Ophthalmology and Vision Science, University of California Davis, Davis, California, United States
  • Footnotes
    Commercial Relationships: Viraj Deshpande, None; Nicholas Marsh-Armstrong, None
  • Footnotes
    Support: NEI R01 EY029086 to Dr. Nicholas Marsh-Armstrong
Citation: Viraj Deshpande, Nicholas Marsh-Armstrong; 3D reconstruction of non-human primate axons using machine-learning software. Invest. Ophthalmol. Vis. Sci. 2024;65(7):1455.

Abstract

Purpose : Manual axon segmentation within Serial Block-Face Electron Microscopy (SBEM) datasets to study axon morphology changes at the optic nerve head (ONH) in animal glaucoma models is feasible but time-consuming. We present progress toward a method for reconstructing non-human primate (NHP) axon segments from SBEM datasets of ONHs with induced experimental glaucoma (EG), using AIVIA Machine-Learning (AML) software.

Methods : Six blocks (1280x1280x32 voxels) acquired at 9.91x9.91x50 nm x,y,z resolution were selected from an SBEM dataset to train the AML model. We manually traced 1605 axons (48,150 contours) using IMOD tomographic reconstruction software. Training data were binned 1x, 2x, and 5x in the x,y dimensions, and AIVIA output (AO) accuracy was assessed by comparing the centroid distance (CD) between AO contours and the corresponding manually traced axons. Training data were binned 2x to reduce processing time, and the AML model was trained. The model was applied to six blocks (z-depth 19): three containing myelinated axons and three containing unmyelinated axons. We tested 35 AML parameter permutations and optimized the AO by comparing CD. Outside AIVIA, we binarized the AO grayscale probability maps using IPLab software after comparing 25 image-segmentation parameter permutations. Axons were reconstructed through overlap of binarized contours across successive frames of each AO image stack. Segmented AO 3D accuracy was assessed by comparing the number of contours attributed to the same axon by our AML/segmentation pipeline against the axons defined by manual tracing.
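
The x,y binning and centroid-distance comparison described above can be sketched in Python. This is a minimal illustration using NumPy only, not the AIVIA/IMOD tooling the authors used; the function names and the block-averaging binning scheme are our assumptions.

```python
import numpy as np

def bin_xy(stack, factor):
    """Downsample an image stack (z, y, x) by averaging factor-by-factor blocks in x,y."""
    z, y, x = stack.shape
    y2, x2 = y // factor, x // factor
    trimmed = stack[:, :y2 * factor, :x2 * factor]          # drop edge pixels that don't fill a block
    return trimmed.reshape(z, y2, factor, x2, factor).mean(axis=(2, 4))

def centroid(mask):
    """Centroid (y, x) of a 2D boolean contour mask."""
    ys, xs = np.nonzero(mask)
    return ys.mean(), xs.mean()

def centroid_distance(auto_mask, manual_mask):
    """Euclidean distance between the centroids of an automatic and a manual contour."""
    dy, dx = np.subtract(centroid(auto_mask), centroid(manual_mask))
    return float(np.hypot(dy, dx))
```

For example, a 1280x1280 frame binned 2x becomes 640x640, and comparing `centroid_distance` over matched contour pairs gives the per-axon CD statistic used to rank the binning levels.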

Results : Mean CD comparison showed that 2x-binned data (n=3594 axons) outperformed 5x-binned data (p<0.0001) and were comparable to unbinned data (p>0.05). Of the AML parameters tested, 500 epochs performed best. Of the segmentation permutations, an 85% maximum-intensity cutoff of the AO probability maps was optimal. In 3D space, AO axon yield (the fraction of axons traced through >90% of the 19 frames) was 59.9% ± 3.8% in myelinated blocks (n=3 blocks, 96 axons) and 42.7% ± 12.5% in unmyelinated blocks (n=3 blocks, 106 axons).
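
The intensity cutoff and contour-overlap reconstruction can be illustrated with a rough Python/SciPy sketch. The function names, the largest-overlap linking rule, and the seeding from the first frame are our assumptions for illustration, not the IPLab implementation.

```python
import numpy as np
from scipy import ndimage

def binarize(prob_map, cutoff=0.85):
    """Binarize a grayscale probability map at a fraction of its maximum intensity."""
    return prob_map >= cutoff * prob_map.max()

def link_contours(binary_stack):
    """Label contours in each frame, then chain them across z by pixel overlap.

    Returns a list of chains, each a list of (z, label) pairs for one putative axon.
    """
    labeled = [ndimage.label(frame)[0] for frame in binary_stack]
    chains = [[(0, lab)] for lab in range(1, labeled[0].max() + 1)]  # seed from frame 0
    for z in range(1, len(labeled)):
        for chain in chains:
            pz, plab = chain[-1]
            if pz != z - 1:
                continue                      # chain already broke in an earlier frame
            overlap = labeled[z][labeled[pz] == plab]
            overlap = overlap[overlap > 0]
            if overlap.size:
                # follow the next-frame contour with the largest pixel overlap
                chain.append((z, int(np.bincount(overlap).argmax())))
    return chains

def yield_fraction(chains, n_frames):
    """Fraction of chains traced through more than 90% of the stack's frames."""
    return sum(len(c) > 0.9 * n_frames for c in chains) / len(chains)
```

Applied to a 19-frame binarized AO stack, `yield_fraction(chains, 19)` corresponds to the per-block axon yield reported above.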

Conclusions : We have created a workflow that produces axon models from SBEM datasets using AML and image segmentation. Raw data can be binned 2x without loss of accuracy, but AML tracing requires higher-resolution raw data than manual tracing does. Myelinated axon portions yield higher accuracy than unmyelinated portions. While further curation is needed to exclude misidentified axons, our method should obtain ample axon segments for comparing parameters in EG versus contralateral control ONHs.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
