Michael Aaberg, Tyson Kim, Patrick Li, Leslie Niziol, Malavika Bhaskaranand, Sandeep Bhat, Chaithanya Ramachandra, Kaushal Solanki, Jose Davila, Frankie Myers, Clay Reber, David C Musch, Todd Margolis, Daniel Fletcher, Yannis Mantas Paulus; Deep neural network and human evaluation of referral-warranted diabetic retinopathy using smartphone-based retinal photographs. Invest. Ophthalmol. Vis. Sci. 2019;60(9):1444.
Diabetic retinopathy remains a leading cause of vision loss in working-age adults, due in part to low screening rates. Smartphone-based retinal photography has emerged as a portable tool capable of reaching broader patient populations and increasing screening rates. The aim of this study was to investigate the efficacy of a mobile platform that combines high-quality, smartphone-based retinal imaging with automated grading to determine the presence of referral-warranted diabetic retinopathy (RWDR).
Adult patients were recruited at the University of Michigan Kellogg Eye Center Retina Clinic. A smartphone-based camera (RetinaScope) was used to image the retinas of patients with diabetes and no significant media opacity. Images were analyzed with the Eyenuk EyeArt™ software, an autonomous, cloud-based, deep neural network system that generates referral recommendations based on the presence of moderate or worse diabetic retinopathy (DR) or markers of clinically significant macular edema (CSME). Images were then independently evaluated by two masked readers and similarly categorized as RWDR or non-RWDR. The sensitivity and specificity of the masked graders and of the automated interpretation were determined by comparison with the treating clinician’s dilated slit-lamp fundus examination.
A total of 119 eyes from 69 patients were included for analysis. By slit-lamp examination, RWDR was present in 86 eyes (72.3%). At the eye level, automated interpretation had a sensitivity of 77.9% and specificity of 69.7%; grader 1 had a sensitivity of 94.2% and specificity of 51.5%; grader 2 had a sensitivity of 89.3% and specificity of 63.6%. At the patient level, RWDR was present in 53 subjects (76.8%). Automated interpretation had a sensitivity of 86.8% and specificity of 73.3%; grader 1 had a sensitivity of 96.2% and specificity of 40.0%; grader 2 had a sensitivity of 92.3% and specificity of 46.7%.
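The sensitivity and specificity figures above follow from the standard confusion-matrix definitions. As a minimal sketch, the snippet below computes them from raw counts; the example counts (TP=67, FN=19, TN=23, FP=10) are inferred from the reported eye-level automated results given 86 RWDR-positive and 33 RWDR-negative eyes, and are illustrative rather than taken from the study's data tables.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Counts inferred from the eye-level automated interpretation
# (86 eyes with RWDR, 33 without; illustrative assumption only):
sens, spec = sensitivity_specificity(tp=67, fn=19, tn=23, fp=10)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
# → sensitivity 77.9%, specificity 69.7%
```

The same function applied to grader or patient-level counts reproduces the corresponding reported percentages.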
Smartphone-based retinal photography combined with autonomous, deep neural network software is an effective screening tool for RWDR; at the patient level, it achieved slightly higher specificity but lower sensitivity than trained human graders. Future studies enrolling larger, randomized, community-based cohorts of undifferentiated patients with diabetes are needed.
This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.