Eric Kunz, Alex Cheng, Colin Andrew Bretz, Aaron B Simmons, M Elizabeth Hartnett; Generic UNet machine learning architecture can be trained to accurately and consistently identify retinal vascular features. Invest. Ophthalmol. Vis. Sci. 2019;60(9):1488. doi: https://doi.org/.
© ARVO (1962-2015); The Authors (2016-present)
More detailed analysis of retinal flatmount images from animals in the rat oxygen-induced retinopathy (OIR) model is possible with available machine learning tools. Using a UNet machine learning architecture, we aim to build a series of models to identify and quantify peripheral retinal avascular area (AVA) and, in the future, distinct morphologies of intravitreal neovascularization (IVNV). An initial demonstration compared a trained UNet model's predictions with manually identified peripheral AVA.
Images of lectin-stained retinal flatmounts from postnatal day (p)20 rats were selected from previous OIR studies. For each image, a mask of the manually graded AVA selection was created in ImageJ as the ground truth. Two additional manual selections of AVA were collected and reported as a percentage of total retinal area to provide an average ground truth (AVA-m) for comparison with the results of the trained UNet model (AVA-u). Uniform pre-processing of each image and mask involved reducing the image width to 3000 pixels while constraining the aspect ratio, then applying CLAHE (Contrast Limited Adaptive Histogram Equalization) to normalize local contrast. A three-layer generic UNet model was trained on a total of 42 images that included 13 original images and 39 copies altered by mirroring, rotation, and shearing. The AVA selection predicted by the trained UNet model (AVA-u) was compared to the averaged manual AVA selections (AVA-m). The comparison was repeated after post-processing AVA-u to remove erroneous inclusion of flatmount edges and the central region of the retina.
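The post-processing described here, removing predicted AVA at the flatmount edges and in the central retina before reporting AVA as a percentage of total retinal area, could be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the function names, border width, and central-disk radius are illustrative assumptions.

```python
import numpy as np

def postprocess_mask(pred_mask, retina_mask, edge_px=50, central_radius=200):
    """Remove predicted AVA at the flatmount edges and in the central retina.

    pred_mask, retina_mask: 2-D boolean arrays (AVA prediction, whole-retina area).
    edge_px and central_radius are illustrative values, not taken from the abstract.
    """
    cleaned = pred_mask & retina_mask  # restrict prediction to the retina
    # Drop predictions within edge_px of the image border (a crude stand-in
    # for trimming the flatmount edge).
    cleaned[:edge_px, :] = False
    cleaned[-edge_px:, :] = False
    cleaned[:, :edge_px] = False
    cleaned[:, -edge_px:] = False
    # Drop predictions inside a central disk around the retina centroid
    # (approximating the central retina / optic nerve region).
    ys, xs = np.nonzero(retina_mask)
    cy, cx = ys.mean(), xs.mean()
    yy, xx = np.ogrid[:pred_mask.shape[0], :pred_mask.shape[1]]
    central = (yy - cy) ** 2 + (xx - cx) ** 2 <= central_radius ** 2
    cleaned[central] = False
    return cleaned

def ava_percent(ava_mask, retina_mask):
    """AVA expressed as a percentage of total retinal area."""
    return 100.0 * ava_mask.sum() / retina_mask.sum()
```

A cleaned mask and its AVA percentage would then be compared image by image against the manually graded selections.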
The AVA-u predictions without post-processing were on average 4.9±3.4% larger than the AVA-m average, although the difference was not statistically significant (t-test, p=0.48). Post-processing the UNet mask to exclude retinal edges and the central retina reduced the difference between AVA-u and AVA-m to 4.1±2.8% (p=0.52).
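The comparison reported above rests on a t-test over per-image AVA percentages. As a sketch, a paired t statistic over matched AVA-u and AVA-m values can be computed in plain NumPy; the `paired_t` helper and the sample numbers in the usage comment are hypothetical, and the abstract does not state whether a paired or unpaired test was used.

```python
import numpy as np

def paired_t(x, y):
    """Paired t statistic and degrees of freedom for two matched samples.

    x, y: sequences of per-image measurements (e.g. AVA-u and AVA-m percentages).
    Returns (t, df); the p-value would come from the t distribution with df degrees
    of freedom.
    """
    d = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = d.size
    t = d.mean() / (d.std(ddof=1) / np.sqrt(n))
    return t, n - 1
```

In practice `scipy.stats.ttest_rel` would supply both the statistic and the p-value directly; the explicit formula is shown here only to make the computation concrete.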
The trained UNet model produced masks that effectively identified AVA with minimal inclusion of non-AVA areas or image artifacts. Further refinement of image pre-processing and UNet training parameters will improve the model's ability to exclude non-perfused areas of the retina that are not considered AVA.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.