September 2016
Volume 57, Issue 12
Open Access
ARVO Annual Meeting Abstract
Automated diabetic retinopathy image assessment software: large-scale, real-world evaluation of diagnostic accuracy and cost-effectiveness compared to human graders
Author Affiliations & Notes
  • Catherine A Egan
    Medical Retina, Moorfields Eye Hospital, London, United Kingdom
    University College London, London, United Kingdom
  • Alicja Rudnicka
    St George's University, London, United Kingdom
  • Christopher Owen
    St George's University, London, United Kingdom
  • Caroline Rudisill
    Department of Social Policy, London School of Economics and Political Science, London, United Kingdom
  • Sebastian Salas-Vega
    Department of Social Policy, London School of Economics and Political Science, London, United Kingdom
  • Paul Taylor
    Institute of Health Informatics, University College London, London, United Kingdom
  • Gerald Liew
    Medical Retina, Moorfields Eye Hospital, London, United Kingdom
  • Aaron Lee
    Department of Ophthalmology, University of Washington Seattle, Seattle, Washington, United States
  • Clare Bailey
    Bristol Eye Hospital, Bristol, United Kingdom
  • John Anderson
    Homerton University Hospital, London, United Kingdom
  • Adnan Tufail
    Medical Retina, Moorfields Eye Hospital, London, United Kingdom
    University College London, London, United Kingdom
  • Footnotes
    Commercial Relationships   Catherine Egan, None; Alicja Rudnicka, None; Christopher Owen, None; Caroline Rudisill, None; Sebastian Salas-Vega, None; Paul Taylor, None; Gerald Liew, None; Aaron Lee, None; Clare Bailey, None; John Anderson, None; Adnan Tufail, None
  • Footnotes
    Support  This project was funded by the National Institute for Health Research HTA programme (project no. 11/21/02); Fight for Sight Hirsch grant award; and the Department of Health’s NIHR Biomedical Research Centre for Ophthalmology at Moorfields Eye Hospital and UCL Institute of Ophthalmology. The views expressed are those of the authors, not necessarily those of the Department of Health
Investigative Ophthalmology & Visual Science September 2016, Vol.57, No Pagination Specified. doi:
Abstract

Purpose : Diabetic retinopathy screening involves labour-intensive manual grading of retinal images. Automated Retinal Image Analysis (ARIA) software can determine whether diabetic retinal disease is present, offering an alternative to human graders. We aimed to evaluate the clinical effectiveness and cost-effectiveness of three CE-marked ARIA software systems on a large number of patient images acquired as part of routine diabetic retinopathy screening in a National Health Service (NHS) setting in the United Kingdom.

Methods : An observational measurement comparison study of 20,258 consecutive patients and a decision-analytic model were undertaken to determine the effectiveness and cost-effectiveness of three ARIA systems (Retmarker, iGradingM, and EyeArt) in replacing one or more steps of human grading in an NHS Diabetic Eye Screening Programme (DESP). Images were graded by human graders as well as by the ARIA systems before being sent for arbitration. Secondary analysis explored the influence of patients' ethnicity, age, sex, and camera on screening performance.
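The cost-effectiveness logic of using ARIA as a filter before Level 1 human grading can be illustrated with a minimal expected-cost calculation. All unit costs and the referral rate below are hypothetical placeholders, not figures from the study:

```python
# Illustrative cost comparison: ARIA as a filter ahead of Level 1 human
# grading, versus fully manual grading. Every number here is hypothetical.

def cost_per_patient(aria_cost: float, grader_cost: float,
                     referral_rate: float) -> float:
    """Expected per-patient cost when ARIA screens first and only
    ARIA-positive (or ungradeable) images go on to a human grader."""
    return aria_cost + referral_rate * grader_cost

manual = 3.00                                   # hypothetical manual grading cost
filtered = cost_per_patient(0.50, 3.00, 0.40)   # ARIA refers 40% onward to humans
print(f"manual £{manual:.2f} vs ARIA-filtered £{filtered:.2f} per patient")
```

Under these assumed values the filtered pathway costs £1.70 per patient; the ARIA filter saves money whenever the software's per-image cost is less than the grader cost multiplied by the proportion of images it triages away, which is the intuition behind the cost-saving result reported below.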

Results : The sensitivity point estimates (95% confidence intervals) of the ARIA systems were as follows: EyeArt, 94.7% (94.2-95.2) for any retinopathy, 93.8% (92.9-94.6) for referable retinopathy, and 99.6% (97.0-99.9) for R3 proliferative retinopathy; Retmarker, 73.0% (72.0-74.0) for any retinopathy, 85.0% (83.6-86.2) for referable retinopathy, and 97.9% (94.9-99.1) for R3 proliferative retinopathy. iGradingM classified all images as either having disease or being ungradeable, which precluded further analysis of that system. Both EyeArt and Retmarker were cost saving relative to manual grading, either as a replacement for Level 1 human grading or as a filter prior to Level 1 human grading.
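Confidence intervals of the kind quoted above are commonly derived with the Wilson score method for a binomial proportion. The sketch below shows that calculation for a hypothetical detection count (947 of 1,000 referable cases); the abstract does not report the underlying counts or the exact interval method used, so this is purely illustrative:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.959964):
    """Wilson score 95% confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Hypothetical example: 947 of 1,000 referable cases detected.
lo, hi = wilson_ci(947, 1000)
print(f"sensitivity 94.7%, 95% CI {lo:.1%}-{hi:.1%}")
```

The Wilson interval is preferred over the simple normal approximation for proportions near 0 or 1, such as the 99.6% sensitivity for R3 proliferative retinopathy, where the normal approximation can produce bounds above 100%.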

Conclusions : Retmarker and EyeArt achieved acceptable sensitivity for referable retinopathy when compared against a quality-assured, real-world human grader working in a high-volume clinical setting as the reference standard, and had sufficient specificity to make them cost-effective alternatives to a purely manual grading approach.

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.
