April 2014
Volume 55, Issue 13
ARVO Annual Meeting Abstract  |   April 2014
Rapid grading of fundus photos for diabetic retinopathy using crowdsourcing
Author Affiliations & Notes
  • Christopher J Brady
    Wills Eye Institute, Jefferson Univ School of Medicine, Philadelphia, PA
  • Andrea C Villanti
    The Schroeder Institute for Tobacco Research and Policy Studies, Legacy, Washington, DC
  • Jennifer L Pearson
    The Schroeder Institute for Tobacco Research and Policy Studies, Legacy, Washington, DC
  • Thomas R Kirchner
    The Schroeder Institute for Tobacco Research and Policy Studies, Legacy, Washington, DC
  • Omesh Gup
    Wills Eye Institute, Jefferson Univ School of Medicine, Philadelphia, PA
  • Chirag Shah
    Ophthalmic Consultants of Boston, Boston, MA
  • Footnotes
    Commercial Relationships Christopher Brady, None; Andrea Villanti, None; Jennifer Pearson, None; Thomas Kirchner, None; Omesh Gup, None; Chirag Shah, None
  • Footnotes
    Support None
Investigative Ophthalmology & Visual Science April 2014, Vol.55, 4826. doi:
      Christopher J Brady, Andrea C Villanti, Jennifer L Pearson, Thomas R Kirchner, Omesh Gup, Chirag Shah; Rapid grading of fundus photos for diabetic retinopathy using crowdsourcing. Invest. Ophthalmol. Vis. Sci. 2014;55(13):4826.

Abstract

Purpose: To develop and validate a novel method for fundus photo grading.

Methods: An interface for fundus photo classification, including 6 annotated training images, was developed for Amazon.com’s crowdsourcing platform, Amazon Mechanical Turk (AMT). Sixteen expert-graded images were posted to AMT for grading by anonymous workers (AWs), with 10 repetitions per photo, for an initial proof of concept (Phase I). AWs were paid 0.10 USD per image. Four sequential tasks were posted asking AWs to grade images as normal vs. abnormal, then normal vs. mild to moderate vs. severe, then normal vs. mild vs. moderate vs. severe. In Phase II, one image from each of the 4 grading categories was posted for 500 unique AW interpretations to determine the ideal number of graders. Fifty random draws at each panel size of 1-50 unique coders were used to compute the area under the curve (AUC) of the receiver operating characteristic (ROC) for the AW grade compared with the expert grading.
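The Phase II resampling procedure (repeated random draws of worker panels, with each image scored by the fraction of drawn workers calling it abnormal and that score compared against the expert grade via ROC AUC) can be sketched as below. The worker grades and per-image accuracies here are simulated placeholders, not the study's data, and the rank-based (Mann-Whitney) AUC formula stands in for whatever ROC software the authors actually used.

```python
import random

def auc(scores, labels):
    """Mann-Whitney AUC: probability a positive image outscores a negative one (ties count 0.5)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def draw_auc(worker_grades, expert_labels, k, rng):
    """Score each image by the fraction of k randomly drawn workers grading it abnormal."""
    scores = [sum(rng.sample(g, k)) / k for g in worker_grades]
    return auc(scores, expert_labels)

rng = random.Random(0)

# Hypothetical data: 4 images, each with 500 binary worker grades (1 = abnormal),
# simulated with an assumed per-image worker accuracy. Expert label 0 = normal.
expert = [0, 1, 1, 1]
accuracy = [0.9, 0.6, 0.85, 0.75]
grades = [[int(rng.random() < (a if y else 1 - a)) for _ in range(500)]
          for y, a in zip(expert, accuracy)]

# Mean AUC over fifty random draws at each panel size, as in Phase II.
for k in (1, 7, 10, 50):
    mean_auc = sum(draw_auc(grades, expert, k, rng) for _ in range(50)) / 50
    print(k, round(mean_auc, 2))
```

With real data, plotting mean AUC against panel size would show where the curve becomes asymptotic, which is how the abstract identifies 7-10 graders as sufficient for the normal vs. abnormal task.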

Results: Across 200 grading instances in the normal vs. abnormal arm of Phase I, AWs reached the correct diagnosis 81.5% of the time. Among abnormal images, consensus among workers ranged from 80% to 100%. The negative predictive value of worker consensus was 100%. The average time to grade each image was 25 seconds, including time to review the training images. In Phase II, the maximum AUC for normal vs. abnormal was reached after 7 graders (AUC = 0.98). AUC was asymptotic after 10 graders but did not approach 1 for mild (AUC = 0.57), moderate (AUC = 0.85), or severe (AUC = 0.73) retinopathy.

Conclusions: With minimal training, the AMT workforce can rapidly and correctly categorize fundus photos of diabetic patients as normal or abnormal, though further refinement of the methodology is needed to resolve the degree of retinopathy. Images were interpreted for a total cost of 1 USD per eye. This method may have a role in reducing the skilled-grader burden in massive public health screening programs in developing-world contexts. To provide real-time interpretation, AWs in the same time zone as such a screening would be ideal, raising the possibility of microenterprise within local economies. Crowdsourcing represents a novel and inexpensive means to screen for diabetic retinopathy, with possible public health implications.

Keywords: 499 diabetic retinopathy • 498 diabetes • 496 detection  