Abstract
Purpose:
The global impact of diabetic retinopathy (DR), a leading cause of blindness, is worsened by barriers to accessing ophthalmic care and routine eye exams, underscoring the need for innovative screening methods. Recently proposed low-cost retinal cameras offer potential avenues for accessible screening, yet their widespread adoption remains elusive. Advances in artificial intelligence (AI) have demonstrated notable proficiency in recognizing DR from retinal fundus photos. This study integrates these approaches, aiming to develop a cost-effective DR screening system for use in under-resourced settings.
Methods:
Three deep learning models (Swin Transformer, ResNet50, and MobileNetV3) were trained for binary classification of DR from retinal fundus photos. Each model classified images into two categories: no DR and DR (mild, moderate, or severe). A subset of the EyePACS dataset was divided into 75% (n = 18,306) for training and 25% (n = 6,064) for testing. To keep the training and testing sets independent, the images of a patient's right and left eyes were allocated together, either both to the training set or both to the testing set.
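The patient-level split described above can be sketched as follows. This is an illustrative implementation only, not the study's actual code; the column names (`image`, `patient_id`, `label`) and the use of scikit-learn's `GroupShuffleSplit` are assumptions.

```python
import pandas as pd
from sklearn.model_selection import GroupShuffleSplit

# Toy stand-in for an eye-image label table: one row per fundus photo.
# Grades are collapsed to binary: 0 = no DR, 1 = any DR (mild/moderate/severe).
df = pd.DataFrame({
    "image": ["p1_l", "p1_r", "p2_l", "p2_r", "p3_l", "p3_r", "p4_l", "p4_r"],
    "patient_id": [1, 1, 2, 2, 3, 3, 4, 4],
    "label": [0, 0, 1, 1, 0, 1, 1, 0],
})

# Grouping by patient_id keeps both eyes of each patient in the same split,
# so no patient contributes images to both training and testing sets.
splitter = GroupShuffleSplit(n_splits=1, train_size=0.75, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

# Sanity check: the patient sets are disjoint.
assert set(train["patient_id"]).isdisjoint(set(test["patient_id"]))
```

Splitting on `patient_id` rather than on individual images prevents leakage from near-identical fellow-eye photos inflating test performance.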
Design constraints for the ophthalmoscope were as follows: compatibility with many phone models, low cost, customizable lenses, and easy assembly and disassembly. The design incorporates interchangeable lenses to fit the needs of patients and physicians, while snap-fit and threaded components allow quick and easy assembly. A 28D aspheric lens was chosen for its balance between magnification and field of view.
Results:
The Swin Transformer had an area under the receiver operating characteristic curve (AUROC) of 0.90, sensitivity of 0.83, specificity of 0.83, and F1 score of 0.87. The MobileNetV3 had an AUROC of 0.72, sensitivity of 0.68, specificity of 0.68, and F1 score of 0.75. The ResNet50 had an AUROC of 0.74, sensitivity of 0.69, specificity of 0.67, and F1 score of 0.75. The Swin Transformer was the best performing model (Fig. 1).
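The metrics reported above can be computed from model outputs as sketched below. The scores here are illustrative placeholders, not the study's data, and the 0.5 decision threshold is an assumption.

```python
from sklearn.metrics import roc_auc_score, confusion_matrix, f1_score

y_true = [0, 0, 0, 0, 1, 1, 1, 1]                    # 0 = no DR, 1 = DR
y_score = [0.1, 0.3, 0.2, 0.6, 0.4, 0.8, 0.9, 0.7]   # predicted probabilities
y_pred = [int(s >= 0.5) for s in y_score]             # threshold at 0.5

# AUROC is threshold-free; sensitivity/specificity/F1 depend on the threshold.
auroc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)   # true-positive rate
specificity = tn / (tn + fp)   # true-negative rate
f1 = f1_score(y_true, y_pred)
```

Reporting AUROC alongside threshold-dependent metrics separates a model's ranking ability from the operating point chosen for screening.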
The ophthalmoscope lens is positioned 33 mm from the patient's eye, can be adjusted between 20 and 30 cm from the phone's camera, and costs under $20 to manufacture (Fig. 2).
Conclusions:
The deep learning models identified DR from retinal photos with high accuracy. The ophthalmoscope design is adjustable and low cost. Integrating these two components may provide a promising avenue toward a low-cost retinal screening system.
This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.