ARVO Annual Meeting Abstract  |   June 2020
Automated Quality Assessment of Ultra-Widefield Fluorescein Angiography Images using Deep Learning
Author Affiliations & Notes
  • Henry Li
    Case Western Reserve University, Ohio, United States
    Cleveland Clinic, Ohio, United States
  • Joseph R Abraham
    Cleveland Clinic, Ohio, United States
  • Jenna Hach
    Cleveland Clinic, Ohio, United States
  • Sunil K Srivastava
    Cleveland Clinic, Ohio, United States
  • Jon Whitney
    ERT, Ohio, United States
  • Amit Vasanji
    ERT, Ohio, United States
  • Jamie Reese
    Cleveland Clinic, Ohio, United States
  • Justis Ehlers
    Cleveland Clinic, Ohio, United States
  • Footnotes
    Commercial Relationships   Henry Li, None; Joseph Abraham, None; Jenna Hach, None; Sunil Srivastava, Allergan (F), Bausch and Lomb (C), Gilead (F), Leica (P), Regeneron (F), Santen (C); Jon Whitney, ERT (E); Amit Vasanji, ERT (E); Jamie Reese, None; Justis Ehlers, Aerpio (F), Aerpio (C), Alcon (F), Alcon (C), Allergan (F), Allergan (C), Allergro (C), Genetech (F), Genetech/Roche (C), Leica (C), Leica (P), Novartis (F), Novartis (C), Regeneron (F), Regeneron (C), Santen (C), Thrombogenics/Oxurion (F), Thrombogenics/Oxurion (C), Zeiss (C)
  • Footnotes
    Support  RPB Unrestricted Grant to the Cole Eye Institute RPB1508DM; NIH K23-EY022947
Investigative Ophthalmology & Visual Science June 2020, Vol. 61, 1636.
      Henry Li, Joseph R Abraham, Jenna Hach, Sunil K Srivastava, Jon Whitney, Amit Vasanji, Jamie Reese, Justis Ehlers; Automated Quality Assessment of Ultra-Widefield Fluorescein Angiography Images using Deep Learning. Invest. Ophthalmol. Vis. Sci. 2020;61(7):1636.

Abstract

Purpose : Ultra-widefield fluorescein angiography (UWFA) is utilized to assess retinal vascular and choroidal abnormalities in retinal disease. During image acquisition, a series of angiographic images is obtained. These images are subject to high variability in quality due to multiple factors (e.g., patient-related and imaging-related). This variability can limit image utility and delay care. The purpose of this study was to evaluate the feasibility of a deep learning model for automated classification of UWFA image quality.

Methods : The dataset was composed of 5658 UWFA images obtained during routine retinal care. Ground-truth image quality was assessed by expert image review and classified into one of four categories (ungradable, poor, good, or best) based on key factors such as contrast, field of view, media opacity, and obscuration from external features. A randomized set of 3543 images was used to train the model. The initial testing set was composed of 615 images, and the validation set included 1500 images.
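
As a rough illustration of the training setup described above, the sketch below fine-tunes an ImageNet-pretrained convolutional network on the four quality grades using the 3543/615/1500 split. The ResNet-18 backbone, input resolution, folder layout, and hyperparameters are illustrative assumptions; the abstract does not specify the model architecture or training details.

# Minimal sketch of a four-class UWFA quality classifier. Hypothetical:
# the abstract does not name the architecture; the ResNet-18 backbone,
# input size, folder layout, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, models, transforms

NUM_CLASSES = 4  # 0 = ungradable, 1 = poor, 2 = good, 3 = best

preprocess = transforms.Compose([
    transforms.Resize((512, 512)),   # assumed input resolution
    transforms.ToTensor(),
])

# Assumes graded images are organized into one folder per quality grade.
full_set = datasets.ImageFolder("uwfa_graded/", transform=preprocess)

# 3543 training / 615 testing / 1500 validation images, as in the abstract.
train_set, test_set, val_set = random_split(full_set, [3543, 615, 1500])
train_loader = DataLoader(train_set, batch_size=16, shuffle=True)

# Transfer learning from an ImageNet-pretrained backbone (assumption).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):              # illustrative epoch count
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()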

Results : By expert review of the 5658 images, 153 (2.7%) were graded as best, 1514 (26.8%) as good, 1682 (29.7%) as poor, and 2309 (40.8%) as ungradable. In the testing set, the classifier showed an overall accuracy of 87.1% for discriminating between gradable (best, good, or poor) and ungradable images, with a sensitivity of 92.7% and a specificity of 82.1%. The receiver operating characteristic (ROC) curve for this two-class classification (gradable vs. ungradable) had an area under the curve (AUC) of 0.945.
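
The two-class metrics reported above (accuracy, sensitivity, specificity, and ROC AUC for gradable vs. ungradable images) can be computed as in the following sketch. The variable names, placeholder labels and scores, and the 0.5 decision threshold are assumptions for illustration only.

# Sketch of the two-class (gradable vs. ungradable) evaluation: sensitivity,
# specificity, accuracy, and ROC AUC. The labels, scores, and threshold
# below are placeholders, not data from the study.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# y_true: 1 = gradable (best/good/poor), 0 = ungradable
# y_score: model-predicted probability that an image is gradable
y_true = np.array([1, 0, 1, 1, 0, 1])
y_score = np.array([0.9, 0.2, 0.7, 0.4, 0.1, 0.8])

y_pred = (y_score >= 0.5).astype(int)           # assumed decision threshold
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

sensitivity = tp / (tp + fn)   # true-positive rate on gradable images
specificity = tn / (tn + fp)   # true-negative rate on ungradable images
accuracy = (tp + tn) / (tp + tn + fp + fn)
auc = roc_auc_score(y_true, y_score)

print(f"sens={sensitivity:.3f} spec={specificity:.3f} "
      f"acc={accuracy:.3f} AUC={auc:.3f}")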

Conclusions : This deep learning model demonstrated successful automated classification of UWFA image quality. This approach may greatly reduce the manual image-grading workload and provide near-instantaneous feedback on image quality during image acquisition.

This is a 2020 ARVO Annual Meeting abstract.

 

U: Ungradable, 1: Poor, 2: Good, 3: Best.

Representative images were selected to demonstrate the quality characteristics of each grade, as determined by our expert image reader. Column A shows images with varying fields of view, Column B shows the range of visualization of the optic disc/macula region, Column C shows the degree of optic disc centering, and Column D shows varying levels of image contrast.
