Rohan Gupta, Samuel Heaps, Bruno Alvisio, Francesca Barone, Arvydas Maminishkis, Juan Amaral, Irina Bunea, Kristi Creel, Mitra Farnoodian, Kapil Bharti; Automatic retinal layer segmentation in OCT images of a laser induced porcine model for retinal injury. Invest. Ophthalmol. Vis. Sci. 2022;63(7):2081 – F0070.
Optical coherence tomography (OCT) provides useful assessment of retinal health, but OCT image analysis is often qualitative. Machine learning segmentation methods have been shown to be effective in quantifying changes seen in OCT images. However, there are no automatic segmentation tools for preclinical porcine models. We propose a novel computational pipeline using supervised deep learning methods to automatically segment porcine retinal layers in OCT B-scans in a laser-induced model for retinal injury.
B-scans were acquired in Yucatan pigs using the Spectralis (Heidelberg Engineering, Germany) spectral-domain OCT and were exported with a corresponding infrared reflectance fundus image that specifies the scan location. B-scans and fundus images were manually segmented by an experienced observer using LabelMe, a Python-based annotation tool (Wada, 2018). The healthy and laser-treated datasets consist of 12 B-scans (6 eyes, 3 pigs) and 12 B-scans (7 eyes, 4 pigs), respectively. Our proposed pipeline is primarily composed of three U-Net semantic convolutional neural networks. The first U-Net was trained to segment 7 retinal layers under healthy conditions. Because the laser destroys the retinal outer segments, a separate set of models (trained with the laser-treated dataset) is required to accurately segment damaged regions. To automatically identify regions of laser ablation within the OCT, a second U-Net performs binary segmentation on the fundus image. Image processing techniques exploit the spatial registration between the two image types to output predicted boundaries of lasered regions on B-scans. Using this B-scan input, a third U-Net is trained to segment damaged retinal layers.
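The three-stage pipeline above can be sketched as follows. This is an illustrative outline only, assuming the stated design (healthy-layer U-Net, fundus laser-mask U-Net, laser-trained U-Net); the `StubUNet` class, the function names, and the fundus-to-B-scan mapping step are hypothetical stand-ins for the trained networks and registration logic described in the abstract.

```python
import numpy as np

class StubUNet:
    """Hypothetical placeholder for a trained U-Net; returns a constant
    label map. In the real pipeline each predict() call would run a
    trained semantic segmentation network."""
    def __init__(self, label):
        self.label = label

    def predict(self, image):
        return np.full(image.shape, self.label, dtype=np.int32)

def segment_bscan(bscan, fundus, healthy_net, fundus_net, laser_net,
                  map_fundus_to_bscan):
    """Stage 1: segment retinal layers with the healthy-trained U-Net.
    Stage 2: locate laser-ablated regions on the fundus image.
    Stage 3: re-segment the damaged B-scan columns with the
    laser-trained U-Net."""
    layers = healthy_net.predict(bscan)             # 7-layer label map
    laser_mask = fundus_net.predict(fundus)         # binary ablation mask
    # Hypothetical registration step: project the fundus mask onto the
    # B-scan to get a boolean mask of laser-damaged A-scan columns.
    damaged_cols = map_fundus_to_bscan(laser_mask, bscan.shape)
    if damaged_cols.any():
        layers[:, damaged_cols] = laser_net.predict(bscan)[:, damaged_cols]
    return layers
```

The key design choice reflected here is that the laser-trained model only overrides the healthy-trained model inside the predicted ablation boundaries, so undamaged retina keeps the segmentation from the model specialized for it.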
Model outputs were compared to manual segmentation and evaluated using the mean Dice coefficient (MDC), a measure of overlap between two segmentations. The first U-Net achieved an average MDC of 0.916 (SD=0.062) and the second U-Net an MDC of 0.995. Subsequent computations predict boundaries of the laser-ablated regions with a mean absolute pixel error of 7.50 (SD=7.33).
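For reference, the Dice coefficient between two binary masks A and B is 2|A∩B| / (|A| + |B|), and the mean Dice coefficient averages this over all layer classes. A minimal sketch (the function names and the empty-mask convention are my own, not from the abstract):

```python
import numpy as np

def dice_coefficient(pred, target):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # convention: two empty masks count as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

def mean_dice(pred_labels, target_labels, n_classes):
    """Mean Dice over all classes of a multi-class layer segmentation,
    computed one-vs-rest per class."""
    return float(np.mean([
        dice_coefficient(pred_labels == c, target_labels == c)
        for c in range(n_classes)
    ]))
```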
Initial results suggest the potential of this framework as an effective evaluation tool for researchers using porcine models of retinal injury. Future work includes retraining the models with larger datasets and developing algorithms that quantify the segmented layers by varied metrics.
This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.