Open Access
ARVO Annual Meeting Abstract  |   June 2024
DRAGONN-SLR: Diabetic Retinopathy Assessment using Guided Optics, Neural Nets, and Smartphones for Low Resource Settings
Author Affiliations & Notes
  • Ashley Tin
    Carle Illinois College of Medicine, Urbana, Illinois, United States
  • Quan A. Vo
    Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois, United States
  • Zachary H. Gilliam
    Mechanical Engineering, University of Illinois Urbana-Champaign, Urbana, Illinois, United States
  • Justin Huynh
    Carle Illinois College of Medicine, Urbana, Illinois, United States
  • Footnotes
    Commercial Relationships   Ashley Tin, None; Quan Vo, None; Zachary Gilliam, None; Justin Huynh, None
    Support  None
Investigative Ophthalmology & Visual Science June 2024, Vol. 65, 4923.
Abstract

Purpose : The global impact of diabetic retinopathy (DR), a leading cause of blindness, is worsened by barriers to ophthalmic care and routine eye exams, underscoring the need for innovative screening methods. Recently proposed low-cost retinal cameras offer potential avenues for accessible screening, yet their widespread adoption remains elusive. Advances in artificial intelligence (AI) have demonstrated notable proficiency in recognizing DR from retinal fundus photos. This study integrates these two approaches, aiming to develop a cost-effective DR screening system for use in under-resourced settings.

Methods : Three deep learning models (Swin Transformer, ResNet50, and MobileNetV3) were trained for binary classification of DR from retinal fundus photos. Each model classified images into two categories: no DR and DR (mild, moderate, or severe). A subset of the EyePACS dataset was divided into 75% (n = 18,306) for training and 25% (n = 6,064) for testing. To keep the training and testing sets independent, the images of a patient's right and left eyes were allocated together, either both to the training set or both to the testing set.
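
A minimal sketch of this patient-level split and a two-class model head, assuming PyTorch/torchvision and scikit-learn; the metadata table, its column names, and the use of GroupShuffleSplit are illustrative assumptions, not the authors' code:

import pandas as pd
import torch.nn as nn
import torchvision.models as models
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical metadata: one row per fundus image, grouped by patient ID so a
# patient's right and left eyes always land in the same split.
df = pd.DataFrame({
    "image": ["p1_left.jpg", "p1_right.jpg", "p2_left.jpg", "p2_right.jpg"],
    "patient_id": ["p1", "p1", "p2", "p2"],
    "label": [0, 0, 1, 1],  # 0 = no DR, 1 = DR (mild, moderate, or severe)
})

# 75/25 split at the patient level, matching the ratio in the abstract.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["patient_id"]))
train_df, test_df = df.iloc[train_idx], df.iloc[test_idx]

# One of the three backbones, re-headed for binary classification (no DR vs. DR).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)
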
Design constraints for the ophthalmoscope were as follows: compatibility with many phone models, low cost, customizable lenses, and easy assembly and disassembly. The design incorporates interchangeable lenses to fit the needs of patients and physicians, while snap-fit and threaded components allow quick and easy assembly. A 28 D aspheric lens was chosen for its balance between magnification and field of view.
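
As a back-of-the-envelope check (not stated in the abstract), a lens's focal length is the reciprocal of its dioptric power, so the 28 D lens gives $f = 1/P = 1/(28\,\mathrm{m^{-1}}) \approx 35.7\,\mathrm{mm}$, roughly consistent with the 33 mm eye-to-lens distance reported in the Results.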

Results : The Swin Transformer achieved an area under the receiver operating characteristic curve (AUROC) of 0.90, sensitivity of 0.83, specificity of 0.83, and F1 score of 0.87. The MobileNetV3 had an AUROC of 0.72, sensitivity of 0.68, specificity of 0.68, and F1 score of 0.75. The ResNet50 had an AUROC of 0.74, sensitivity of 0.69, specificity of 0.67, and F1 score of 0.75. The Swin Transformer was the best-performing model (Fig. 1).
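
As a hedged illustration of how these metrics can be computed with scikit-learn (hypothetical labels and scores; the 0.5 decision threshold is an assumption, since the abstract does not state an operating point):

import numpy as np
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score

y_true = np.array([0, 0, 1, 1, 1, 0, 1, 0])                   # hypothetical labels
y_score = np.array([0.1, 0.4, 0.8, 0.7, 0.3, 0.2, 0.9, 0.6])  # P(DR) per image
y_pred = (y_score >= 0.5).astype(int)                          # assumed threshold

auroc = roc_auc_score(y_true, y_score)
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # true positive rate
specificity = tn / (tn + fp)  # true negative rate
f1 = f1_score(y_true, y_pred)
print(f"AUROC={auroc:.2f}  Se={sensitivity:.2f}  Sp={specificity:.2f}  F1={f1:.2f}")
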
The ophthalmoscope lens is positioned 33 mm from the patient's eye, its distance from the phone's camera is adjustable between 20 and 30 cm, and the device costs under $20 to manufacture (Fig. 2).

Conclusions : The deep learning models identified DR from retinal photos with high accuracy. The ophthalmoscope design is adjustable and low cost. Integrating these two components may provide a promising avenue toward a low-cost retinal screening system.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
