Abstract
Purpose:
To develop a fully automated deep-learning model that segments six distinct retinal layers along with the choroid and sclera in rat optical coherence tomography (OCT) volume scans, trained on a limited dataset. The proposed model aims to segment the choroid and sclera in rat OCT images acquired with minimal averaging, a capability that has not been fully exploited in deep-learning frameworks for preclinical research.
Methods:
We acquired OCT volume scans (2 mm x 2 mm, 1000 A-scans per B-scan, 29 B-scans per volume, 8 frames averaged per B-scan; Bioptigen 4300; Leica Microsystems). We manually segmented the retinal nerve fiber layer (RNFL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer (ONL), external limiting membrane + photoreceptors (ELM+PR), choroid, and sclera to serve as ground truth (n=11 eyes, n=293 scans). Several OCT scans that were too difficult to segment manually were discarded. We used 209 scans for training, 58 scans for testing, and 29 scans for validation, ensuring that scans from the same eye appeared in only one dataset. We trained a U-Net with 7,702,345 trainable parameters for 100 epochs (NVIDIA Quadro RTX 8000, 48 GB) using cross-entropy loss and an SGD optimizer. We used the Dice coefficient to assess model performance.
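The following is a minimal PyTorch sketch of the training setup described above (U-Net, cross-entropy loss, SGD, 100 epochs, per-layer Dice evaluation), not the authors' actual code. The network widths, learning rate, momentum, image size, and stand-in tensors are illustrative assumptions and do not reproduce the stated 7,702,345-parameter configuration.

# Illustrative sketch only: assumed U-Net widths, learning rate, momentum,
# and random stand-in tensors in place of real OCT B-scans and label maps.
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.BatchNorm2d(cout), nn.ReLU(inplace=True))
    def forward(self, x):
        return self.block(x)

class UNet(nn.Module):
    # 9 output classes: 6 retinal layers + choroid + sclera + background.
    def __init__(self, cin=1, num_classes=9, widths=(32, 64, 128, 256)):
        super().__init__()
        self.downs = nn.ModuleList([DoubleConv(c, w) for c, w in zip((cin,) + widths[:-1], widths)])
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = DoubleConv(widths[-1], widths[-1] * 2)
        self.ups = nn.ModuleList([
            nn.ModuleList([nn.ConvTranspose2d(w * 2, w, 2, stride=2), DoubleConv(w * 2, w)])
            for w in reversed(widths)])
        self.head = nn.Conv2d(widths[0], num_classes, 1)
    def forward(self, x):
        skips = []
        for down in self.downs:
            x = down(x); skips.append(x); x = self.pool(x)
        x = self.bottleneck(x)
        for (up, conv), skip in zip(self.ups, reversed(skips)):
            x = conv(torch.cat([skip, up(x)], dim=1))
        return self.head(x)

def dice_coefficient(pred, target, eps=1e-7):
    # Dice = 2|A intersect B| / (|A| + |B|) on binary masks for one layer.
    inter = (pred & target).sum().item()
    return (2.0 * inter + eps) / (pred.sum().item() + target.sum().item() + eps)

model = UNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)  # assumed values

# Stand-in batch; real training would iterate a DataLoader of B-scans and label maps.
images = torch.randn(2, 1, 128, 128)             # grayscale B-scans
labels = torch.randint(0, 9, (2, 128, 128))      # per-pixel class indices (0 = background)

for epoch in range(100):                         # 100 epochs, as stated in Methods
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

model.eval()
with torch.no_grad():
    preds = model(images).argmax(dim=1)
    per_layer_dice = [dice_coefficient(preds == c, labels == c) for c in range(1, 9)]

In this sketch the Dice coefficient is computed per layer by binarizing the predicted and ground-truth label maps against each class index, which matches the per-layer reporting used in the Results.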
Results:
DeepRetNet2.0 segmented OCT scans with performance similar to that of human annotators. In the test dataset, it achieved a Dice coefficient above 0.8 for every layer except the INL (RNFL=0.90, IPL=0.89, INL=0.79, OPL=0.83, ONL=0.93, ELM+PR=0.93, choroid=0.85, sclera=0.86).
Conclusions:
DeepRetNet2.0 can reliably segment the given layers, as shown by the high Dice coefficients across all layers. These results demonstrate DeepRetNet2.0's potential as a reliable tool in preclinical ophthalmological research. Future work will expand the human-annotated database, optimize the hyperparameter space, and improve the overall Dice coefficients.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.