ARVO Annual Meeting Abstract  |   June 2021
Volume 62, Issue 8
Open Access
Mobile models for AMD and DR screening
Author Affiliations & Notes
  • Sofia Gonçalves
    Associação para a Investigação Biomédica e Inovação em Luz e Imagem, Coimbra, Coimbra, Portugal
  • Luís Mendes
    Associação para a Investigação Biomédica e Inovação em Luz e Imagem, Coimbra, Coimbra, Portugal
  • Jose G Cunha-Vaz
    Associação para a Investigação Biomédica e Inovação em Luz e Imagem, Coimbra, Coimbra, Portugal
  • Rufino Silva
    Universidade de Coimbra, Coimbra, Coimbra, Portugal
  • Footnotes
    Commercial Relationships   Sofia Gonçalves, None; Luís Mendes, AIBILI (S); Jose Cunha-Vaz, Adverum Biotechnologies (C), Alimera Sciences (C), Allergan (C), Bayer (C), Carl Zeiss Meditec (C), Gene Signal (C), Novartis (C), Oxular (C), Pfizer (C), Roche (C), Sanofi (C), Vifor Pharma (C); Rufino Silva, Alimera Sciences (C), Allergan (C), Bayer (C), Novartis (C), Novo Nordisk (C), Roche (C), Thea Pharmaceuticals (C)
  • Footnotes
    Support  Fundação para a Ciência e Tecnologia (Project nº 02/SAICT/2017 – 032412)
Investigative Ophthalmology & Visual Science June 2021, Vol.62, 2113. doi:
Abstract

Purpose : Identifying lesions in color fundus photographs (CFPs) to detect and characterize ocular pathologies currently requires specialized professionals to perform this task manually. In this study, we explore two CNN architectures, Inceptionv3 (Iv3) and MobileNetv2 (MbNv2), and test their performance in identifying pathologies in CFPs.

Methods : We gathered 29,329 fovea-centered CFPs from the Epidemiologic (NCT01298674) and Incidence (NCT02748824) studies, previously classified with respect to three targets: age-related macular degeneration (AMD), diabetic retinopathy (DR), and both AMD and DR. Given the imbalanced nature of the data, we split the dataset into 5 balanced sets and tested a majority-voting ensemble approach. For each architecture, three experiments (E1, E2, E3) were conducted: in E1, training was performed with non-augmented data; in E2, images were pre-processed with color-correction software developed by Harvard Medical School that standardizes brightness, contrast, and color balance; and in E3 we studied the impact of horizontal flips, brightness changes, and small rotations and translations on model performance.
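
As a rough illustration of the E3 setup, the sketch below builds one of the per-subset binary classifiers with the described augmentations. The framework (TensorFlow/Keras), input size, optimizer, and single-output head are assumptions; the abstract does not specify implementation details.

```python
# Illustrative sketch only: framework (TensorFlow/Keras), input size, and
# hyperparameters are assumptions not stated in the abstract.
import tensorflow as tf

IMG_SIZE = (224, 224)  # assumed input resolution

# Augmentations corresponding to E3: horizontal flips, brightness changes,
# and small rotations and translations (applied to images in [0, 255]).
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomBrightness(0.1),          # requires TF >= 2.9
    tf.keras.layers.RandomRotation(0.02),           # roughly +/- 7 degrees
    tf.keras.layers.RandomTranslation(0.05, 0.05),
])

def build_model(backbone="mobilenetv2"):
    """Binary classifier (e.g. AMD vs. no AMD) on top of a pretrained backbone."""
    base_cls = {
        "mobilenetv2": tf.keras.applications.MobileNetV2,
        "inceptionv3": tf.keras.applications.InceptionV3,
    }[backbone]
    base = base_cls(input_shape=IMG_SIZE + (3,), include_top=False, weights="imagenet")

    inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
    x = augment(inputs)
    x = tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1.0)(x)  # scale to [-1, 1]
    x = base(x)
    x = tf.keras.layers.GlobalAveragePooling2D()(x)
    outputs = tf.keras.layers.Dense(1, activation="sigmoid")(x)

    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=[tf.keras.metrics.AUC(name="auc")])
    return model

# One model per balanced subset (5 in total); their predictions are later
# combined by majority voting.
```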

Results : Iv3 achieved the best performance in E3 for all targets, with an AUC above 0.85 and high sensitivity (>0.67) and specificity (>0.82) for the targeted class. For MbNv2, the results vary with the experiment and the target. When targeting AMD, E1 and E2 show similar performance, with the AUC reaching 0.78 but a slightly lower sensitivity in E2. For DR, E2 shows a high AUC but low sensitivity (0.41). Finally, when targeting AMD+DR, the AUC is equal across experiments; however, sensitivity is higher in E1 (0.64) and specificity is higher in E2.
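
A minimal sketch of how the majority-voting ensemble and the reported metrics (AUC, sensitivity, specificity) could be computed, assuming scikit-learn, a 0.5 decision threshold, and score aggregation by mean probability; all of these are assumptions not stated in the abstract.

```python
# Minimal sketch: threshold, score aggregation, and library choice are assumptions.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

def ensemble_outputs(models, images, threshold=0.5):
    """Mean probability and majority vote over the 5 per-subset models."""
    probs = np.stack([m.predict(images).ravel() for m in models])  # (n_models, n_images)
    mean_prob = probs.mean(axis=0)
    majority = ((probs >= threshold).mean(axis=0) >= 0.5).astype(int)  # >=3 of 5 votes
    return mean_prob, majority

def evaluate(y_true, majority, mean_prob):
    """AUC from the aggregated scores; sensitivity/specificity from the votes."""
    auc = roc_auc_score(y_true, mean_prob)
    tn, fp, fn, tp = confusion_matrix(y_true, majority).ravel()
    return {"AUC": auc,
            "sensitivity": tp / (tp + fn),   # true positive rate
            "specificity": tn / (tn + fp)}   # true negative rate
```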

Conclusions : MbNv2 is a highly efficient base architecture that delivers good accuracy on devices with limited computational budgets and low energy consumption. Using the MbNv2 architecture opens the possibility of processing data directly on mobile devices, which may be an advantage for screening blinding age-related diseases in remote and rural areas with weak healthcare infrastructure and systems.
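
The abstract does not name a deployment toolchain; one common route for running a MobileNetV2 classifier on-device is TensorFlow Lite. A minimal conversion sketch under that assumption:

```python
# Illustrative only: TensorFlow Lite conversion is an assumed deployment path,
# not one described in the abstract.
import tensorflow as tf

model = build_model("mobilenetv2")        # trained model from the sketch above (hypothetical)
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]   # post-training quantization
tflite_model = converter.convert()

with open("amd_dr_screening.tflite", "wb") as f:      # hypothetical file name
    f.write(tflite_model)
```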

This is a 2021 ARVO Annual Meeting abstract.

 

Evaluation metrics for the tested ensemble models.
