Investigative Ophthalmology & Visual Science
June 2024
Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract  |   June 2024
Detecting Glaucoma Using Iris Photos with Deep Learning
Author Affiliations & Notes
  • Abhilash Katuru
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Iyad Majid
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Min Shi
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yu Tian
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Yan Luo
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Tobias Elze
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Mengyu Wang
    Harvard Ophthalmology AI Lab, Schepens Eye Research Institute of Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts, United States
  • Footnotes
    Commercial Relationships   Abhilash Katuru None; Iyad Majid None; Min Shi None; Yu Tian None; Yan Luo None; Tobias Elze Genentech, Code F (Financial Support); Mengyu Wang Genentech, Code F (Financial Support)
  • Footnotes
    Support  This work was supported by NIH R00 EY028631, NIH R21 EY035298, NIH R01 EY030575, NIH P30 EY003790, NIH K23 EY032634, NIH R21 EY032953, Research to Prevent Blindness International Research Collaborators Award, Research to Prevent Blindness Career Development Award, Alcon Young Investigator Grant, and Grimshaw-Gudewicz Grant.
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 1645.
      Abhilash Katuru, Iyad Majid, Min Shi, Yu Tian, Yan Luo, Tobias Elze, Mengyu Wang; Detecting Glaucoma Using Iris Photos with Deep Learning. Invest. Ophthalmol. Vis. Sci. 2024;65(7):1645.



      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose: To assess the performance of deep learning models in detecting glaucoma from iris photos.

Methods: We included 24,253 patients from Massachusetts Eye and Ear tested between 2010 and 2022, with 112,243 iris photos captured during optical coherence tomography testing. Eyes with a visual field (VF) mean deviation (MD) < -3 dB and abnormal glaucoma hemifield test (GHT) and pattern standard deviation (PSD) results were categorized as glaucoma. Eyes with VF MD >= -1 dB and normal GHT and PSD results were categorized as non-glaucoma. The data were split into training (80%), validation (10%), and testing (10%) subsets. We evaluated four widely used deep learning models: VGG-19, MobileNet, EfficientNet, and Vision Transformer (ViT). The area under the receiver operating characteristic curve (AUC) was used to assess model performance, and model performances were compared via bootstrapping with t-tests. We also generated Grad-CAM maps to identify the image regions the deep learning models focused on.
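The eye-level labeling criteria above can be expressed as a small rule. The sketch below is illustrative, not the authors' code; the function name and the handling of eyes that meet neither criterion (returned as "ungraded" here) are assumptions.

```python
def categorize_eye(md_db, ght_abnormal, psd_abnormal):
    """Apply the abstract's labeling rule to one eye.

    md_db: visual field mean deviation (MD) in dB.
    ght_abnormal / psd_abnormal: booleans for abnormal GHT / PSD results.
    """
    if md_db < -3 and ght_abnormal and psd_abnormal:
        return "glaucoma"
    if md_db >= -1 and not ght_abnormal and not psd_abnormal:
        return "non-glaucoma"
    # Assumption: eyes meeting neither criterion are left out of the dataset.
    return "ungraded"
```

Note that the two criteria leave a deliberate buffer (MD between -3 dB and -1 dB, or mixed GHT/PSD results), so borderline eyes fall into neither class.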

Results: The average age of our patients was 62 ± 16 years. A summary of our results is presented in Figure 1. VGG-19 and ViT performed best (AUC: 0.736 for VGG-19 and 0.729 for ViT), both significantly better (p < 0.001) than MobileNet (0.702) and EfficientNet (0.698). There was a notable difference in model performance between males and females, with better performance for males (male AUC: 0.745, female AUC: 0.730, p < 0.001). Compared with the Asian and White demographic groups, the models performed better on Black patients (Asian AUC: 0.739, White AUC: 0.736, Black AUC: 0.748, p < 0.001), and slightly better on Hispanic than non-Hispanic groups (Hispanic AUC: 0.737, non-Hispanic AUC: 0.735, p = 0.003). In addition, the regional importance maps generated with VGG-19 suggest that the model primarily focuses on the peripheral region of the iris (Figure 2).
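The AUC comparisons reported above can be sketched as a paired bootstrap: resample the test set with replacement, recompute both models' AUCs on each resample, and test the distribution of differences against zero. All data below are synthetic stand-ins (the abstract's scores are not published), and the exact bootstrap/t-test variant is an assumption.

```python
import numpy as np

def auc(labels, scores):
    """AUC via the Mann-Whitney U statistic (no tie correction;
    adequate for continuous scores)."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = int(labels.sum())
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Synthetic test-set scores for two models; model A is made more informative.
rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)
score_a = 0.8 * y + rng.normal(0, 1, n)
score_b = 0.5 * y + rng.normal(0, 1, n)

# Paired bootstrap: resample the same indices for both models each iteration.
diffs = []
for _ in range(1000):
    idx = rng.integers(0, n, n)
    yb = y[idx]
    if yb.min() == yb.max():
        continue  # resample must contain both classes
    diffs.append(auc(yb, score_a[idx]) - auc(yb, score_b[idx]))
diffs = np.array(diffs)

# One-sample t statistic of the bootstrap AUC differences against zero.
t = diffs.mean() / (diffs.std(ddof=1) / np.sqrt(len(diffs)))
print(f"AUC A = {auc(y, score_a):.3f}, AUC B = {auc(y, score_b):.3f}, t = {t:.1f}")
```

Pairing the resamples (same `idx` for both models) removes sampling variation shared by the two models, which is why even small AUC gaps, such as 0.736 vs. 0.729, can reach significance on a large test set.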

Conclusions: Models trained on iris photos have the potential to detect glaucoma. However, there are substantial performance disparities across gender and racial groups. Deep learning models also tend to focus on the periphery of the iris when assessing for glaucoma.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.

Figure 2. Generated regional importance map, where red regions are more important. The top two eyes are glaucomatous, while the bottom two are normal.
