Open Access
ARVO Annual Meeting Abstract  |   June 2022
Grading and Quantification of Papilledema from Fundus Photographs Using Convolutional Neural Networks
Author Affiliations & Notes
  • Elena Solli
    Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Wang Jui-Kai
    Ophthalmology, The University of Iowa Hospitals and Clinics Department of Pathology, Iowa City, Iowa, United States
  • Joseph Branco
    Ophthalmology, The University of Iowa Hospitals and Clinics Department of Pathology, Iowa City, Iowa, United States
  • Tobias Elze
    Schepens Eye Research Institute of Massachusetts Eye and Ear, Boston, Massachusetts, United States
  • Randy H Kardon
    Ophthalmology, The University of Iowa Hospitals and Clinics Department of Pathology, Iowa City, Iowa, United States
  • Louis Pasquale
    Ophthalmology, The University of Iowa Hospitals and Clinics Department of Pathology, Iowa City, Iowa, United States
  • Mark J Kupersmith
    Ophthalmology, The University of Iowa Hospitals and Clinics Department of Pathology, Iowa City, Iowa, United States
    Neurology, Icahn School of Medicine at Mount Sinai, New York, New York, United States
  • Footnotes
    Commercial Relationships   Elena Solli None; Wang Jui-Kai None; Joseph Branco None; Tobias Elze None; Randy Kardon None; Louis Pasquale None; Mark Kupersmith None
  • Footnotes
    Support  New York Eye and Ear Infirmary Foundation
Investigative Ophthalmology & Visual Science June 2022, Vol. 63, 733 – F0461.
      Elena Solli, Wang Jui-Kai, Joseph Branco, Tobias Elze, Randy H Kardon, Louis Pasquale, Mark J Kupersmith; Grading and Quantification of Papilledema from Fundus Photographs Using Convolutional Neural Networks. Invest. Ophthalmol. Vis. Sci. 2022;63(7):733 – F0461.
      © ARVO (1962-2015); The Authors (2016-present)
Abstract

Purpose : Convolutional neural networks (CNNs) trained on large datasets of fundus photographs can differentiate normal eyes from eyes with papilledema, but quantifying and grading the papilledema has remained limited. Our study investigated whether: (1) CNN models can estimate optical coherence tomography measurements for each photo and grade the severity of papilledema in photos using the Frisén scale, and (2) a CNN model can be adequately trained on a relatively small dataset.

Methods : We used fundus photographs from 165 subjects (330 eyes) in the Idiopathic Intracranial Hypertension Treatment Trial (IIHTT) as input data. Because the IIHTT had few eyes with severe papilledema, we included additional fundus photos from 24 subjects (43 eyes) from our clinic for the Frisén grade analysis, and grouped photos showing Frisén grade five papilledema with the Frisén grade four photos. Each photo was labeled with its Frisén grade (assigned by an expert reader) and with the retinal nerve fiber layer thickness (RNFLT), total retinal thickness (TRT), and optic nerve head volume (ONHV) derived from 3D segmentation of corresponding optical coherence tomography images. We trained separate models for quantification of RNFLT, TRT, and ONHV, and for Frisén grading, using the densely connected convolutional network DenseNet161 as the base architecture.

Results : We found a moderate correlation between the true Frisén grades and those predicted by the model (R=0.483, p<0.001). We found strong correlations between the true RNFLT (R=0.793, p<0.001), TRT (R=0.831, p<0.001), and ONHV (R=0.825, p<0.001) values and those predicted by the model.
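The R values above are Pearson correlations between true and predicted measurements. A minimal sketch of that computation, using made-up illustrative values rather than study data:

```python
# Sketch: Pearson correlation between true and model-predicted values,
# as reported for RNFLT, TRT, and ONHV. Data below are hypothetical.
from math import sqrt


def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)


# Hypothetical RNFLT values in micrometers (not from the study).
true_rnflt = [110.0, 180.0, 250.0, 320.0, 140.0]
pred_rnflt = [120.0, 170.0, 230.0, 300.0, 160.0]
print(round(pearson_r(true_rnflt, pred_rnflt), 3))
```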

Conclusions : CNN models can identify Frisén grade and quantify OCT-derived RNFLT, TRT, and ONHV values from fundus photographs with moderate success. With supervised learning, it is possible to create reasonable models using relatively small datasets.

This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.
