Abstract
Purpose:
Reference databases of healthy patients are useful to clinicians in the diagnosis of glaucoma. However, constructing these databases requires expert judgment to exclude individuals with glaucoma and scans of poor quality. We propose a deep learning algorithm to predict the usability of these scans based on both health and scan quality.
Methods:
Retrospective analysis was performed on 1751 3D Wide OCT scans from 1696 individuals sampled from 10 optometry practices that use the Maestro2 instrument (Topcon Healthcare, Tokyo, Japan). Using a reading-center approach, one of two graders judged each OCT Hood report as being of acceptable quality and without signs of pathology. Scans were deemed usable if the circumpapillary B-scan showed no signs of clipping or segmentation error, the macular ganglion cell plus inner plexiform layer (GCL+) thickness map had no major artifacts or signs of pathology in the donut-shaped region, and the retinal nerve fiber layer (RNFL) probability map (p-map) was consistent with patterns seen in normal eyes.
The data were split patient-wise, with 80% used for training and 20% for validation. Three CAFormer-M36 models were used to extract features from the GCL+ thickness map, the circumpapillary B-scan, and the RNFL probability map, respectively (Figure 1). The features from the three models were concatenated and used to predict usability.
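A patient-wise split assigns every scan from a given individual to the same partition, so that repeat scans of one patient cannot leak between training and validation. The abstract does not describe the authors' implementation; the following is a minimal sketch in pure Python, with a hypothetical `patient_id` field standing in for whatever identifier the dataset actually uses.

```python
import random
from collections import defaultdict

def patient_wise_split(scans, train_frac=0.8, seed=0):
    """Split scan records so no patient appears in both partitions.

    `scans` is a list of dicts, each with a 'patient_id' key
    (a hypothetical field name); all scans belonging to one
    patient land in the same split.
    """
    # Group scans by patient before shuffling, so the random
    # assignment happens at the patient level, not the scan level.
    by_patient = defaultdict(list)
    for scan in scans:
        by_patient[scan["patient_id"]].append(scan)

    patients = sorted(by_patient)
    rng = random.Random(seed)
    rng.shuffle(patients)

    n_train = int(round(train_frac * len(patients)))
    train = [s for p in patients[:n_train] for s in by_patient[p]]
    val = [s for p in patients[n_train:] for s in by_patient[p]]
    return train, val
```

Because some individuals contributed more than one scan (1751 scans from 1696 individuals), splitting at the scan level instead would risk near-duplicate images appearing on both sides of the split and inflating validation metrics.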
The area under the receiver operating characteristic curve (AUROC) and the average precision (AP) were calculated from the model's output scores on the validation set.
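AUROC can be computed directly from labels and scores via its Mann-Whitney U interpretation: the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one, with ties counting half. This is an illustrative pure-Python sketch, not the authors' evaluation code.

```python
def auroc(labels, scores):
    """AUROC via the Mann-Whitney U statistic: the fraction of
    (positive, negative) pairs where the positive outscores the
    negative, counting ties as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))
```

The O(P x N) pairwise loop is fine for a validation set of a few hundred scans; in practice a library routine such as scikit-learn's `roc_auc_score` computes the same quantity from sorted ranks.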
Results:
On the validation set, the AUROC was 0.93 (95% CI 0.88-0.97) and the AP was 0.98 (95% CI 0.94-0.995). The model was effective at detecting both glaucomatous eyes and poor-quality scans while correctly identifying usable scans; examples are shown in Figure 2.
Conclusions:
We demonstrate a system for grading whether Hood reports are usable for a reference database, in terms of both scan quality and health. This system could be used to aid the selection of scans for a reference database, and could also assist in a clinical setting.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.