Abstract
Purpose:
The analysis of spectral-domain optical coherence tomography (sdOCT) B-scans can be time consuming and subjective. We sought an approach to provide more accurate, objective, and quantitative analysis of sdOCT using data from a transgenic pig model of P23H human rhodopsin (Tg P23H hRHO) autosomal dominant retinitis pigmentosa.
Methods:
sdOCT was used to assess changes in Tg P23H hRHO pigs over time. Sets of 1000 B-scans were averaged and registered using a custom Matlab script to produce 100 images per set (1000 x 1024 pixels each). Retinal layers were then drawn manually on the averaged images using a modified version of OCTSEG, a Matlab segmentation program. Ground-truth labels were drawn for the inner and outer nuclear layers (INL and ONL, respectively) and for the retinal pigment epithelium (RPE). A set of 274 averaged B-scans drawn from 27 pigs was selected and randomly split into training (206), testing (34), and validation (34) sets. Thirty-two patches of 48 x 1024 pixels were randomly extracted from each training image and formed the input to the model. A U-Net was trained for 50 epochs using the Adam optimizer and the Matlab Deep Learning Toolbox, with the validation loss evaluated every third of an epoch. The final network was also exported to a Keras/TensorFlow model to allow for easy model sharing.
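As an illustrative sketch only (not the original Matlab script), the random extraction of 32 full-width 48 x 1024 patches from each 1000 x 1024 training image could look like the following in Python/NumPy; the function name and the `seed` parameter are assumptions for this example:

```python
import numpy as np

def extract_patches(image, n_patches=32, patch_height=48, seed=None):
    """Randomly extract full-width patches (patch_height x image width)
    from a single averaged B-scan image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    # Random top rows; each patch spans the full image width.
    tops = rng.integers(0, h - patch_height + 1, size=n_patches)
    return [image[t:t + patch_height, :] for t in tops]

# Example: a 1000 x 1024 averaged B-scan yields 32 patches of 48 x 1024.
scan = np.zeros((1000, 1024), dtype=np.float32)
patches = extract_patches(scan, seed=0)
```

Sampling full-width strips preserves the lateral continuity of the retinal layers while keeping each training input small.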
Results:
The final network was evaluated against the held-out test set (34 images). Five metrics were considered: global accuracy, per-class mean accuracy, mean intersection over union (IoU), class-weighted IoU, and mean boundary F1 score. For our purposes, the mean boundary F1 score was most critical because we are interested in layer edge locations. The final model achieved a mean boundary F1 score of 0.979 (SD = 0.0378) and a per-class mean accuracy of 0.896 (SD = 0.0554). We are now comparing these quantified B-scans with quantified measurements from confocal images of the same retinas at similar retinal locations.
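To illustrate one of the evaluation metrics, the per-class IoU of a segmentation map can be sketched in Python/NumPy as follows; this is a minimal illustrative implementation, not the Matlab evaluation code used in the study:

```python
import numpy as np

def per_class_iou(pred, truth, n_classes):
    """Intersection over union for each class label in a label map.
    Returns NaN for classes absent from both prediction and truth."""
    ious = []
    for c in range(n_classes):
        p, t = (pred == c), (truth == c)
        union = np.logical_or(p, t).sum()
        inter = np.logical_and(p, t).sum()
        ious.append(inter / union if union else np.nan)
    return np.array(ious)

# Tiny worked example with two classes:
pred = np.array([[0, 0], [1, 1]])
truth = np.array([[0, 1], [1, 1]])
ious = per_class_iou(pred, truth, 2)   # class 0: 1/2, class 1: 2/3
mean_iou = np.nanmean(ious)
```

The mean boundary F1 score differs in that it compares predicted and ground-truth layer *boundaries* within a distance tolerance rather than overlapping regions, which is why it is the more sensitive measure of layer-edge accuracy here.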
Conclusions:
This network provides a way of easily segmenting averaged B-scans and significantly reduces the time needed to measure retinal layer thicknesses. As more data are labeled, we expect the retrained model to improve further, eventually enabling fully automatic segmentation of pig B-scan data.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.