Abstract
Purpose:
Using deep learning to automate the classification of colour fundus images based on the presence of choroidal nevi is challenging due to the limited availability of labelled data. To overcome this limitation, this study evaluates the performance of a patch-based deep learning model on this task.
Methods:
This study uses 580 fundus images collected and labelled by the Alberta Ocular Brachytherapy Program as part of routine clinical practice. Half of the images were labelled as “Lesion” and the other half as “Normal”. A YOLOv8 classification model (trained for 40 epochs with a batch size of 16 and initial learning rates of 0.001 for AdamW and 0.01 for SGD) was used in three experiments. In Experiment 1, the original full-size images (3918 x 3916 pixels) were resized to 600 x 600 pixels, and random augmentations were applied during training to improve the model’s generalizability. In Experiment 2, full-size images were resized to 3000 x 3000 pixels and then uniformly divided into 25 patches (a 5 x 5 grid of 600 x 600 pixel patches). Each patch was relabelled based on the presence of any portion of a nevus. To address the class imbalance introduced by patching, data augmentations, including changes in hue and brightness, random rotation, and flipping, were applied to patches labelled as “Lesion”; the augmentations from Experiment 1 were also applied during training. In Experiment 3, we applied the same augmentation methods used in Experiment 2 to selected images, adding noise, and we also randomly reduced the contrast of selected images. The same augmentations used in Experiments 1 and 2 were applied during training. In each experiment, performance was measured using accuracy, precision, and recall.
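For a concrete picture of the Experiment 2 pipeline, the sketch below illustrates the patching step, the offline augmentation of “Lesion” patches, and a YOLOv8 classification training call with the settings listed above. The dataset path, file-naming scheme, and augmentation magnitudes are illustrative assumptions rather than the authors' exact implementation; only the patch grid, epoch count, batch size, optimizer, and learning rate come from the Methods.

```python
# Minimal sketch of the patch-based pipeline, assuming an ImageFolder-style dataset
# with "Lesion" and "Normal" subfolders. Paths and parameter values are illustrative.
from pathlib import Path
import random

from PIL import Image, ImageEnhance
from ultralytics import YOLO

GRID = 5        # 5 x 5 grid -> 25 patches per image (Experiment 2)
RESIZE = 3000   # full-size fundus images are first resized to 3000 x 3000 pixels


def make_patches(image_path: Path, out_dir: Path) -> list[Path]:
    """Resize a fundus image to 3000 x 3000 and split it into 25 non-overlapping 600 x 600 patches."""
    img = Image.open(image_path).convert("RGB").resize((RESIZE, RESIZE))
    step = RESIZE // GRID
    out_paths = []
    for row in range(GRID):
        for col in range(GRID):
            box = (col * step, row * step, (col + 1) * step, (row + 1) * step)
            out_path = out_dir / f"{image_path.stem}_r{row}c{col}.png"
            img.crop(box).save(out_path)
            out_paths.append(out_path)
    return out_paths


def augment_lesion_patch(patch: Image.Image) -> Image.Image:
    """Offline augmentation for 'Lesion' patches: hue/brightness jitter, random rotation, random flip."""
    # Brightness jitter (illustrative range)
    patch = ImageEnhance.Brightness(patch).enhance(random.uniform(0.8, 1.2))
    # Hue jitter via the HSV colour space (illustrative shift)
    shift = random.randint(-10, 10)
    h, s, v = patch.convert("HSV").split()
    h = h.point(lambda px: (px + shift) % 256)
    patch = Image.merge("HSV", (h, s, v)).convert("RGB")
    # Random 90-degree rotation and horizontal flip
    patch = patch.rotate(random.choice([0, 90, 180, 270]))
    if random.random() < 0.5:
        patch = patch.transpose(Image.FLIP_LEFT_RIGHT)
    return patch


if __name__ == "__main__":
    # Train a YOLOv8 classification model on the patch dataset using the settings
    # reported in the Methods (40 epochs, batch size 16, AdamW with lr 0.001).
    model = YOLO("yolov8n-cls.pt")
    model.train(data="datasets/nevus_patches", epochs=40, batch=16, imgsz=600,
                optimizer="AdamW", lr0=0.001)
```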
Results:
Experiment 1 resulted in an accuracy of 85.2%, a precision of 83.0%, and a recall of 87.0%. Experiment 2 resulted in an accuracy of 90.3%, a precision of 91.4%, and a recall of 88.1%. Experiment 3 resulted in an accuracy of 92.6%, a precision of 93.8%, and a recall of 90.1%.
Conclusions:
The YOLOv8 model with patch-specific augmentation targeting noise and contrast issues generated the best results. This study demonstrates the adaptability of the YOLOv8 model for improved accuracy in challenging fundus image classification scenarios with limited labelled data.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.