July 2019
Volume 60, Issue 9
Open Access
ARVO Annual Meeting Abstract  |   July 2019
Generic UNet machine learning architecture can be trained to accurately and consistently identify retinal vascular features.
Author Affiliations & Notes
  • Eric Kunz
    Moran Eye Center, Salt Lake City, Utah, United States
  • Alex Cheng
    Moran Eye Center, Salt Lake City, Utah, United States
  • Colin Andrew Bretz
    Moran Eye Center, Salt Lake City, Utah, United States
  • Aaron B Simmons
    Moran Eye Center, Salt Lake City, Utah, United States
  • M Elizabeth Hartnett
    Moran Eye Center, Salt Lake City, Utah, United States
  • Footnotes
    Commercial Relationships   Eric Kunz, None; Alex Cheng, None; Colin Bretz, None; Aaron Simmons, None; M Elizabeth Hartnett, None
  • Footnotes
    Support  EY014800, R01EY015130, R01EY017011, Research to Prevent Blindness, Inc., The Retina Research Foundation of the Macula Society
Investigative Ophthalmology & Visual Science July 2019, Vol.60, 1488.
Abstract

Purpose : Available machine learning tools enable more detailed analysis of retinal flatmount images from animals in the rat oxygen-induced retinopathy (OIR) model. Using a UNet machine learning architecture, we aim to build a series of models to identify and quantify peripheral retinal avascular area (AVA) and, in the future, distinct morphologies of intravitreal neovascularization (IVNV). As an initial demonstration, we compared a trained UNet model's predictions with manually identified peripheral AVA.

Methods : Images of lectin-stained retinal flatmounts from postnatal day (p) 20 rats were selected from previous OIR studies. A mask of the manually graded AVA selection was created for each image in ImageJ as the ground truth. Two additional manual selections of AVA were collected and reported as a percentage of total retinal area to provide an average ground truth (AVA-m) for comparison with the results of the trained UNet model (AVA-u).
Uniform pre-processing of each image and mask involved reducing the image size to 3000 pixels in width while preserving aspect ratio, then processing with CLAHE (Contrast Limited Adaptive Histogram Equalization) to normalize local contrast. A three-layer generic UNet model was trained on a total of 52 images: 13 original images and 39 copies altered by mirroring, rotation, and shearing.
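The augmentation step (mirroring, rotation, and shearing applied to each original image/mask pair) can be sketched with NumPy as below. The specific transforms, angles, and shear factor are illustrative assumptions; the abstract does not specify them.

```python
import numpy as np

def augment(image: np.ndarray, mask: np.ndarray):
    """Return three altered copies of an image/mask pair:
    a horizontal mirror, a 90-degree rotation, and a shear.
    (Illustrative transforms; exact parameters are assumed.)"""
    def shear_rows(a: np.ndarray, factor: float = 0.1) -> np.ndarray:
        # Shift each row proportionally to its index (simple shear sketch).
        out = np.zeros_like(a)
        for i, row in enumerate(a):
            out[i] = np.roll(row, int(factor * i))
        return out

    return [
        (np.fliplr(image), np.fliplr(mask)),    # mirroring
        (np.rot90(image), np.rot90(mask)),      # rotation
        (shear_rows(image), shear_rows(mask)),  # shearing
    ]
```

Applying three such transforms to each of the 13 originals yields the 39 altered copies, for 52 training images in total.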
The AVA selection predicted by the trained UNet model (AVA-u) was compared to the average of the manually generated AVA selections (AVA-m). The comparison was repeated after post-processing AVA-u to remove erroneously included flatmount edges and the central region of the retina.
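Computing AVA as a percentage of total retinal area, and excluding flatmount edges and the central retina from the predicted mask, can be sketched with NumPy boolean masks. The erosion depth (`edge_it`) and central-disc fraction (`center_frac`) are assumed parameters, not values from the study.

```python
import numpy as np

def ava_percent(ava_mask: np.ndarray, retina_mask: np.ndarray) -> float:
    """Avascular area as a percentage of total retinal area
    (both inputs are boolean masks of equal shape)."""
    return 100.0 * np.count_nonzero(ava_mask & retina_mask) \
        / np.count_nonzero(retina_mask)

def erode(mask: np.ndarray, it: int = 1) -> np.ndarray:
    """Simple 4-neighbour binary erosion (no SciPy dependency)."""
    m = np.pad(mask, 1, constant_values=False)
    for _ in range(it):
        m = (m & np.roll(m, 1, 0) & np.roll(m, -1, 0)
               & np.roll(m, 1, 1) & np.roll(m, -1, 1))
    return m[1:-1, 1:-1]

def postprocess(ava_mask, retina_mask, edge_it=5, center_frac=0.2):
    """Drop predictions near the flatmount edge and inside a central
    disc around the retina centroid (illustrative parameters)."""
    cleaned = ava_mask & erode(retina_mask, edge_it)
    h, w = ava_mask.shape
    ys, xs = np.nonzero(retina_mask)
    cy, cx = ys.mean(), xs.mean()                 # retina centroid
    rmax = np.hypot(ys - cy, xs - cx).max()       # retina radius
    yy, xx = np.ogrid[:h, :w]
    central = np.hypot(yy - cy, xx - cx) < center_frac * rmax
    return cleaned & ~central
```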

Results : Without post-processing, the AVA-u predictions were on average 4.9±3.4% larger than the AVA-m average, although the difference was not statistically significant (t-test, p=0.48). Post-processing the UNet mask to exclude retinal edges and the central retina reduced the difference between AVA-u and AVA-m to 4.1±2.8% (p=0.52).
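The per-image comparison of AVA-u against AVA-m is naturally a paired t-test (each image contributes one AVA-u and one AVA-m percentage). A minimal sketch of the test statistic, using only the standard library, is below; the input values shown in the test are illustrative, not the study's data, and in practice `scipy.stats.ttest_rel` would supply the p-value directly.

```python
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for per-image
    pairs (x[i], y[i]); tests whether the mean difference
    x[i] - y[i] differs from zero."""
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    t = mean(d) / (stdev(d) / math.sqrt(n))
    return t, n - 1
```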

Conclusions : The trained UNet model produced masks that effectively identified AVA with minimal inclusion of non-AVA areas or image artifacts. Further refinement of image pre-processing and UNet training parameters will improve the model's ability to exclude non-perfused areas of the retina that are not considered AVA.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.
