Abstract
Purpose:
To quantitatively compare clinical evaluation of fundus images for diabetic retinopathy (DR) on a tablet computer with evaluation on a standard desktop computer.
Methods:
Two retinal specialists independently graded a set of 1200 color fundus images for the presence and severity of DR. Each expert graded the images using both a tablet computer (iPad) and a desktop computer with a high-definition color display. DR grades were assigned according to the specialists' evaluation of whether, and how urgently, the patient should be referred from primary care to a retinal specialist given the severity of DR indicated by the image. The specialists viewed the images in random order and were masked to any previous evaluation of the images. DR grades assigned using the tablet were compared to desktop-based grades, with cross-platform, intra-observer kappa as the primary outcome measure. Kappa values were also calculated to determine intra-platform, intra-observer and inter-observer agreement. Additionally, the sensitivity, specificity, and area under the receiver operating characteristic (ROC) curve of the tablet-based grades were determined.
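For illustration only, the agreement analysis described above can be reproduced from paired grade vectors; the sketch below is not part of the study protocol and assumes hypothetical tablet_grades and desktop_grades arrays holding one grader's referral grades from the two platforms, computed with scikit-learn.

```python
# Minimal sketch (assumed data layout): one grader's referral grades for the same
# images, recorded once on the tablet and once on the desktop display.
from sklearn.metrics import cohen_kappa_score

tablet_grades = [0, 1, 2, 1, 0, 2, 1]    # hypothetical tablet-based grades
desktop_grades = [0, 1, 2, 2, 0, 2, 1]   # hypothetical desktop-based grades

# Cross-platform, intra-observer agreement (the primary outcome measure).
kappa = cohen_kappa_score(tablet_grades, desktop_grades)
print(f"cross-platform, intra-observer kappa = {kappa:.3f}")
```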
Results:
The intra-observer agreement between platforms was high for both specialists (kappa = 0.778 and kappa = 0.812). Intra-platform, intra-observer agreement was comparable (kappa = 0.800 and kappa = 0.784). Inter-observer agreement was similar on the two platforms (kappa = 0.544 and kappa = 0.625 for tablet and desktop, respectively). Finally, relative to the desktop-based grades, the tablet-based grades achieved a sensitivity of 0.848, a specificity of 0.987, and an AUC of 0.950.
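As a minimal sketch, and assuming the referral grades are binarized (refer vs. do not refer) with the desktop-based grades treated as the reference standard, accuracy metrics of this kind could be computed as follows; the variable names are illustrative and do not come from the study data.

```python
# Sketch of the accuracy metrics, assuming binarized refer/no-refer labels
# with desktop-based grades as the reference standard.
from sklearn.metrics import confusion_matrix, roc_auc_score

desktop_refer = [0, 1, 1, 0, 1, 0, 1]   # hypothetical reference labels
tablet_refer = [0, 1, 0, 0, 1, 0, 1]    # hypothetical tablet-based labels

tn, fp, fn, tp = confusion_matrix(desktop_refer, tablet_refer).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(desktop_refer, tablet_refer)  # AUC of the binary decisions
print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, AUC={auc:.3f}")
```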
Conclusions:
Tablet-based grading of color fundus images for DR was consistent with desktop-based grading. The results of this pilot study indicate that tablets are equivalent to standard desktop computers for the clinical evaluation of fundus images for DR and can be used reliably for this task.
Keywords: imaging/image analysis: clinical • diabetic retinopathy • retina