June 2023
Volume 64, Issue 8
Open Access
ARVO Annual Meeting Abstract | June 2023
Deep-learning enabled multi-modal fusion models for dementia screening using colour fundus photographs
Author Affiliations & Notes
  • Dominic Williamson
    University College London Institute of Ophthalmology, London, London, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Robbert Struyven
    University College London Centre for Medical Image Computing, London, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Siegfried Wagner
    University College London Institute of Ophthalmology, London, London, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • David Romero-Bascones
    Mondragon Unibertsitatea, Mondragon, Pais Vasco, Spain
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Yukun Zhou
    University College London Centre for Medical Image Computing, London, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Mateo Gende Lozano
    Universidade da Coruna, A Coruna, Galicia, Spain
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Timing Liu
    University of Cambridge, Cambridge, Cambridgeshire, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Mario Cortina Borja
    Great Ormond Street Institute of Child Health, University College London, London, London, United Kingdom
  • Jugnoo Rahi
    Great Ormond Street Institute of Child Health, University College London, London, London, United Kingdom
  • Axel Petzold
    University College London Institute of Ophthalmology, London, London, United Kingdom
    UCL Queen Square Institute of Neurology, London, London, United Kingdom
  • Yue Wu
    Department of Ophthalmology, University of Washington, Seattle, Washington, United States
  • Cecilia S. Lee
    Department of Ophthalmology, University of Washington, Seattle, Washington, United States
    Roger and Angie Karalis Johnson Retina Center, Seattle, Washington, United States
  • Aaron Y Lee
    Department of Ophthalmology, University of Washington, Seattle, Washington, United States
    Roger and Angie Karalis Johnson Retina Center, Seattle, Washington, United States
  • Alastair K Denniston
    University Hospitals Birmingham NHS Foundation Trust, Birmingham, Birmingham, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Daniel Alexander
    University College London Centre for Medical Image Computing, London, United Kingdom
  • Pearse Keane
    University College London Institute of Ophthalmology, London, London, United Kingdom
    NIHR Moorfields Biomedical Research Centre, London, Greater London, United Kingdom
  • Footnotes
    Commercial Relationships   Dominic Williamson None; Robbert Struyven None; Siegfried Wagner None; David Romero-Bascones None; Yukun Zhou None; Mateo Gende Lozano None; Timing Liu None; Mario Cortina Borja None; Jugnoo Rahi None; Axel Petzold Novartis, Code C (Consultant/Contractor), Heidelberg Engineering, Roche, Code R (Recipient); Yue Wu None; Cecilia Lee Alzheimer's disease Drug Discovery Foundation (ADDF), Code F (Financial Support); Aaron Lee Santen, Regeneron, Carl Zeiss Meditec, Microsoft, Novartis, NVIDIA, Code F (Financial Support), US Food and Drug Administration, Genentech, Verana Health, Gyroscope, Topcon, Johnson and Johnson, Code R (Recipient); Alastair Denniston None; Daniel Alexander None; Pearse Keane Apellis, Code C (Consultant/Contractor), Allergan, Topcon, Heidelberg Engineering, Novartis, Roche, Bayer, Code F (Financial Support), Big Picture Medical, Code I (Personal Financial Interest)
  • Footnotes
    Support  None
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 262. doi:
Abstract

Purpose : For dementia screening, the benefit of incorporating retinal information in addition to traditional non-retinal risk factors (TRFs) has not yet been fully established. Here, we compare the performance of four models in screening for prevalent all-cause dementia using (I) TRFs, (II) retinal morphological features and TRFs, (III) a colour fundus-only deep learning algorithm, and (IV) a fusion of the deep learning algorithm with TRFs.

Methods : The AlzEye study includes 353,157 patients who visited Moorfields Eye Hospital (MEH), London, from January 2008 to April 2018. We developed a fusion model combining tabular data with retinal features. TRFs were defined as demographic features (age, sex, ethnicity, index of multiple deprivation) and clinical features (hypertension, diabetes). Retinal features comprised morphological measurements (artery, vein, and optic-disc metrics calculated by Automorph) and features learned automatically by a convolutional neural network (CNN). Models were trained on data from 26,603 patients visiting four MEH hospitals and validated on two test datasets: 2,946 independent patients visiting the same four hospitals, and 2,966 patients visiting three separate MEH hospitals (Figures 1, 2).
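
A minimal sketch of one plausible late-fusion design is given below. The abstract does not specify the CNN backbone, embedding sizes, or fusion head, so the ResNet-50 image encoder, the 32-unit tabular encoder, and the concatenation-based classifier here are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of a late-fusion classifier combining fundus-image embeddings with tabular TRFs.
# Backbone, dimensions, and head are placeholder choices, not the published architecture.
import torch
import torch.nn as nn
import torchvision.models as models

class FusionModel(nn.Module):
    def __init__(self, n_tabular: int = 6, n_classes: int = 2):
        super().__init__()
        # Image branch: CNN feature extractor (hypothetical choice of ResNet-50)
        backbone = models.resnet50(weights=None)
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
        img_dim = backbone.fc.in_features  # 2048 for ResNet-50
        # Tabular branch: TRFs (age, sex, ethnicity, deprivation index, hypertension, diabetes);
        # categorical TRFs would normally be encoded before reaching this layer
        self.tabular_encoder = nn.Sequential(nn.Linear(n_tabular, 32), nn.ReLU())
        # Fusion head: concatenate the two representations and classify
        self.classifier = nn.Sequential(
            nn.Linear(img_dim + 32, 128), nn.ReLU(), nn.Dropout(0.2),
            nn.Linear(128, n_classes),
        )

    def forward(self, fundus: torch.Tensor, trfs: torch.Tensor) -> torch.Tensor:
        img_feat = self.image_encoder(fundus).flatten(1)  # (B, 2048)
        tab_feat = self.tabular_encoder(trfs)             # (B, 32)
        return self.classifier(torch.cat([img_feat, tab_feat], dim=1))

# Usage with dummy inputs:
# logits = FusionModel()(torch.randn(4, 3, 224, 224), torch.randn(4, 6))
```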

Results : In predicting all-cause dementia, the area under the receiver operating characteristic curve (AUROC) of the TRF model was 0.808 (95% CI 0.758-0.854) and 0.776 (95% CI 0.738-0.814) in test sets 1 and 2, respectively, while the deep learning image-based model achieved 0.795 (95% CI 0.745-0.843) and 0.731 (95% CI 0.689-0.770). The fusion of deep learning features with the TRFs gave AUROCs of 0.819 (95% CI 0.771-0.864) and 0.759 (95% CI 0.719-0.796) in test sets 1 and 2 (Table 1, Figure 3).
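
Table 1 reports 95% confidence intervals obtained from 1,000-sample bootstrapping. The snippet below is an illustrative sketch of how such intervals can be computed for an AUROC with scikit-learn; the arrays y_true and y_score are placeholders for the held-out labels and model scores, and this is not the authors' code.

```python
# Percentile bootstrap of an AUROC 95% CI (illustrative, mirroring the 1,000 resamples in Table 1).
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auroc_ci(y_true, y_score, n_boot=1000, alpha=0.05, seed=0):
    rng = np.random.default_rng(seed)
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    aurocs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))  # resample with replacement
        if len(np.unique(y_true[idx])) < 2:              # AUROC needs both classes present
            continue
        aurocs.append(roc_auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(aurocs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return roc_auc_score(y_true, y_score), (lo, hi)
```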

Conclusions : We show in two retrospective independent validation datasets that retinal data can assist traditional non-retinal risk factors in detecting all-cause dementia. Interpretable fundus features defined by physicians carried some diagnostic importance, while black-box retinal features extracted by deep learning improved performance the most, highlighting the potential benefits of using retinal images in community dementia screening.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.

 

Fig 1. The dataset curation approach.
Fig 2. The modelling strategies used in this work.

Table 1. AUROC results with 95% confidence intervals (from 1,000-sample bootstrapping).
Fig 3. Violin plots displaying the AUROC distributions for select models.
