Abstract
Purpose:
Clinical trials of therapeutics for dry age-related macular degeneration track geographic atrophy (GA) area as a primary endpoint, and it would be valuable to fully automate this measurement using optical coherence tomography (OCT) rather than the commonly used fundus autofluorescence (FAF). We investigated the accuracy of a GA segmentation algorithm for OCT, comparing the relative performance of different en face views, a critical factor in delineating choroidal hyperreflectivity.
Methods:
Patients diagnosed with GA at our centre were retrospectively identified, and 50 OCT volume scans were extracted with no exclusions. Scans were manually segmented using custom software, delineating the GA area in the en face view; different en face views and OCT B-scans were used to determine GA boundaries, with the en face views generated from automated layer segmentation. Using this ground truth, we trained and evaluated a deep learning segmentation algorithm on the graded images and on four automatically generated en face images, created via four different methods of integrating the volume in the axial direction (a slab-generation sketch follows this list):
1. Entire volume
2. ILM to Bruch's membrane
3. Bruch's membrane to an offset into the choroid
4. 335 µm slab, 65 µm below the RPE Fit [1]
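In principle, each slab amounts to averaging reflectivity between two layer boundaries for every A-scan. The following is a minimal sketch of that idea, assuming a NumPy volume of shape (B-scans, depth, A-scans), per-A-scan boundary indices from the automated layer segmentation, and a known axial pixel size; the function and variable names are illustrative, not the software actually used.

    import numpy as np

    def make_en_face_slab(volume, top_idx, bottom_idx):
        """Mean reflectivity between two axial boundaries for each A-scan.

        volume     : (n_bscans, n_depth, n_ascans) OCT intensity array
        top_idx    : (n_bscans, n_ascans) upper boundary indices (pixels)
        bottom_idx : (n_bscans, n_ascans) lower boundary indices (pixels)
        returns    : (n_bscans, n_ascans) en face projection
        """
        n_depth = volume.shape[1]
        depth = np.arange(n_depth)[None, :, None]        # broadcast over depth
        mask = (depth >= top_idx[:, None, :]) & (depth < bottom_idx[:, None, :])
        counts = np.maximum(mask.sum(axis=1), 1)         # avoid divide-by-zero
        return (volume * mask).sum(axis=1) / counts

    # Illustrative boundary choices for the four slabs listed above:
    # 1. entire volume:      top = 0,        bottom = n_depth
    # 2. ILM to Bruch's:     top = ilm_idx,  bottom = bm_idx
    # 3. Bruch's to choroid: top = bm_idx,   bottom = bm_idx + choroid_offset_px
    # 4. sub-RPE-fit slab:   top = rpe_fit_idx + round(65 / axial_px_um),
    #                        bottom = top + round(335 / axial_px_um)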
Statistical analysis used the Dice coefficient to gauge the overlap of segmented areas on the test images and Pearson's correlation coefficient to assess the correlation of the reported GA areas.
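As a hedged illustration of these two metrics (not the authors' code), the Dice coefficient of two binary masks and the Pearson correlation of measured areas could be computed with NumPy and SciPy as follows:

    import numpy as np
    from scipy.stats import pearsonr

    def dice_coefficient(pred, truth):
        """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
        pred, truth = pred.astype(bool), truth.astype(bool)
        denom = pred.sum() + truth.sum()
        return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

    # Correlation of GA areas (e.g. in mm^2) over the test set:
    # r, p_value = pearsonr(automated_areas, manually_graded_areas)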
Results:
Average OCT image quality was 61/100 (SD = 8). For each en face method, a U-Net segmentation architecture was trained with 5-fold cross-validation, leaving out 10 images as a test set at each fold; 6 images were used for validation, leaving 34 images for training. Methods 1 and 4 produced the highest Dice coefficients (0.81), and the highest correlation was found using Method 1 (0.82) (Table 1).
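A minimal sketch of this splitting scheme, assuming scikit-learn and 50 arbitrary image identifiers; which six training-fold images serve as validation is an illustrative choice here, not the authors' protocol:

    import numpy as np
    from sklearn.model_selection import KFold

    image_ids = np.arange(50)                      # 50 graded OCT volumes
    kfold = KFold(n_splits=5, shuffle=True, random_state=0)

    for fold, (train_val_idx, test_idx) in enumerate(kfold.split(image_ids)):
        val_idx = train_val_idx[:6]                # 6 validation images
        train_idx = train_val_idx[6:]              # 34 training images
        # ...train the U-Net on train_idx, tune on val_idx, and evaluate on
        # the 10 held-out test_idx images for this fold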
Conclusions:
Fully automated segmentation of GA areas in OCT data can yield accurate results. However, the method of en face generation can dramatically change the appearance of the choroidal reflectivity and the GA area measured. A consensus approach would be advisable in comparisons to FAF images.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.