Abstract
Purpose:
Subretinal drusenoid deposits (SDD) are an important risk factor for faster disease progression in age-related macular degeneration. Automated detection and quantification of SDD in optical coherence tomography (OCT) scans are therefore of particular interest. The purpose of this study was to develop a well-performing automated segmentation approach for quantifying SDD in OCT.
Methods:
In total, the dataset consists of 1358 manually graded B-scans from 14 OCT volumes (Spectralis) of 14 subjects. Only SDD of stage 2 and stage 3 were annotated. The data were split on a patient-distinct basis into training (n=8), validation (n=2), and test (n=4) sets. Two state-of-the-art deep learning approaches were implemented: SwinUNETR and Mask R-CNN (R-50-FPN), treating SDD detection as a 3D segmentation task and a 2D instance segmentation task, respectively. For the instance segmentation method, a connected components analysis (connectivity=4) was performed to convert the pixel-wise annotation maps into separate objects. The Dice metric was used to assess the similarity between the algorithm outputs and the annotations. Spearman's rank correlation coefficients (ρ) were computed to evaluate the ability to quantify total SDD volume and the number of SDD.
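The evaluation steps above (4-connected components to separate SDD instances, Dice overlap, and Spearman rank correlation) can be sketched as follows. This is a minimal illustration using NumPy/SciPy, not the authors' actual pipeline; the example masks and counts are invented for demonstration.

```python
import numpy as np
from scipy import ndimage
from scipy.stats import spearmanr

def split_into_instances(mask):
    """Split a binary annotation map into separate objects using
    4-connectivity (only edge-adjacent pixels are joined)."""
    structure = np.array([[0, 1, 0],
                          [1, 1, 1],
                          [0, 1, 0]])  # 4-connectivity structuring element
    labeled, n_objects = ndimage.label(mask, structure=structure)
    return labeled, n_objects

def dice(pred, ref):
    """Dice similarity between two binary masks."""
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Two diagonally touching pixels count as two objects under 4-connectivity
# (under 8-connectivity they would merge into one).
mask = np.array([[1, 0],
                 [0, 1]])
_, n = split_into_instances(mask)  # n == 2

# Rank correlation between per-volume predicted and annotated SDD counts
# (hypothetical example values):
pred_counts = [3, 5, 2, 8]
ref_counts = [4, 2, 1, 9]
rho, p = spearmanr(pred_counts, ref_counts)
```

Using 4-connectivity is the stricter choice: SDD that touch only at a corner are counted as separate deposits, which affects the per-volume SDD count entering the correlation analysis.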
Results:
Using expert manual annotations as the reference on the test set, the Mask R-CNN-based segmentation (Figure 1) achieved a Dice of 0.579 (± 0.071), compared with 0.326 (± 0.035) for SwinUNETR. The Mask R-CNN model reached a good correlation for total SDD volume (ρ=0.82, p=0.0003) and for the number of SDD (ρ=0.76, p=0.0016). In contrast, the SwinUNETR model achieved only a moderate correlation for total SDD volume (ρ=0.70, p=0.0052) and a poor correlation for the number of SDD (ρ=0.48, p=0.0814).
Conclusions:
The Mask R-CNN model successfully detected and segmented SDD in OCT, reaching good correlations with human grading and outperforming another state-of-the-art deep learning approach (SwinUNETR) by a large margin.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.