Investigative Ophthalmology & Visual Science
June 2024
Volume 65, Issue 7
Open Access
ARVO Annual Meeting Abstract | June 2024
Development and Validation of Artificial Intelligence-driven Cataract Detection Model Using Smartphone-captured Images
Author Affiliations & Notes
  • Jaemyoung Sung
    Ophthalmology, Tulane University, New Orleans, Louisiana, United States
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Takenori Inomata
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Yasutsugu Akasaki
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Tianxiang Huang
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Ken Nagino
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Kunihiko Hirosawa
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Yuichi Okumura
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Yuki Moroka
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Akie Midorikawa-Inomata
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Atsuko Eguchi
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Shintaro Nakao
    Ophthalmology, Juntendo University, Bunkyo-ku, Tokyo, Japan
  • Footnotes
    Commercial Relationships   Jaemyoung Sung None; Takenori Inomata Johnson and Johnson Vision Care, Seed, Novartis, Kowa, InnoJin, Code F (Financial Support); Yasutsugu Akasaki None; Tianxiang Huang None; Ken Nagino InnoJin, Code F (Financial Support); Kunihiko Hirosawa None; Yuichi Okumura InnoJin, Code F (Financial Support); Yuki Moroka None; Akie Midorikawa-Inomata InnoJin, Code F (Financial Support); Atsuko Eguchi None; Shintaro Nakao None
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2024, Vol.65, 3715. doi:
Jaemyoung Sung, Takenori Inomata, Yasutsugu Akasaki, Tianxiang Huang, Ken Nagino, Kunihiko Hirosawa, Yuichi Okumura, Yuki Moroka, Akie Midorikawa-Inomata, Atsuko Eguchi, Shintaro Nakao; Development and Validation of Artificial Intelligence-driven Cataract Detection Model Using Smartphone-captured Images. Invest. Ophthalmol. Vis. Sci. 2024;65(7):3715.
© ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Cataracts are a leading cause of blindness worldwide, particularly in resource-limited regions where timely diagnosis and intervention are difficult. Inexpensive, effective screening methods, such as cataract detection using commonplace smartphone cameras, could improve accessibility and visual outcomes for the global population. This study evaluated the feasibility of a smartphone-based artificial intelligence (AI) algorithm for detecting and grading cataracts.

Methods : This cross-sectional study was conducted between April 2020 and March 2022. Images of cataracts were captured using smartphones (iPhone and Xperia) and slit-lamp microscopy. Three ophthalmologists graded the images according to the Emery-Little (EL) classification. A cataract grade estimation model was developed using the "Neural Network Libraries" software, which stratified the images into two classes: A) grade ≤2 or B) grade ≥3. The concordance rate between the AI model and the ophthalmologists was evaluated, and the diagnostic performance of the software was assessed using sensitivity and specificity for detecting cataracts of grade 3 or higher.
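The sensitivity/specificity evaluation described in the Methods can be sketched as follows. This is an illustrative computation only, not the study's actual pipeline (which used the "Neural Network Libraries" software); the function name, label strings, and sample grades below are hypothetical.

```python
def sensitivity_specificity(y_true, y_pred, positive="severe"):
    """Compute sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP),
    where the 'positive' class marks EL grade >= 3 (class B in the Methods)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical ophthalmologist grades vs. model output for 8 images,
# after collapsing EL grades into the two classes used in the study:
# "mild" = grade <= 2, "severe" = grade >= 3.
truth = ["mild", "severe", "mild", "mild", "severe", "severe", "mild", "mild"]
model = ["mild", "severe", "mild", "severe", "severe", "mild", "mild", "mild"]

sens, spec = sensitivity_specificity(truth, model)
```

With these made-up labels the model catches 2 of 3 severe cataracts (sensitivity 2/3) and correctly clears 4 of 5 mild eyes (specificity 4/5), mirroring how the reported per-device sensitivities and specificities would be derived from the graders' consensus labels.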

Results : 1,821 iPhone and 1,889 Xperia images of cataracts were collected from 178 participants. A cataract grade estimation model was created using 2,548 smartphone images (EL Grade 1, 768; Grade 2, 1,521; Grade ≥3, 259 images). For the 1,162 iPhone and Xperia validation images, the model's concordance rates were 78.7% (iPhone) and 87.2% (Xperia) on dilated eyes, and 89.5% (iPhone) and 81.5% (Xperia) on non-dilated eyes. For non-dilated eyes, sensitivity/specificity were 62.2%/93.6% for iPhone and 38.4%/96.3% for Xperia. For dilated eyes, sensitivity/specificity were 92.3%/76.6% for iPhone and 83.3%/87.9% for Xperia.

Conclusions : Machine learning-based analysis of raw images from smartphone cameras appears promising for detecting moderate-to-severe nuclear cataracts. In resource-poor or rural settings without traditional tools for visualizing the lens (e.g., a slit-lamp microscope), this approach may promote early detection and intervention, helping to reduce the global prevalence of visual impairment and blindness and to minimize the surgical complications associated with delayed treatment.

This abstract was presented at the 2024 ARVO Annual Meeting, held in Seattle, WA, May 5-9, 2024.
