Sajib Kumar Saha, Matthew Watson, Cirous Dehghani, Katie Edwards, Nicola Pritchard, Anthony Russell, Rayaz A Malik, Nathan Efron, Shaun Frost, Yogesan Kanagasingam; Artificial Intelligence-based Detection of Diabetic Retinopathy using Retinal Imaging and Traditional Risk Factors. Invest. Ophthalmol. Vis. Sci. 2021;62(8):114.
Diabetic retinopathy (DR) is one of the major causes of blindness in the developed world. Early detection is crucial to reduce the risk of blindness in DR. Considering this, we have developed an artificial intelligence-based method to reliably detect DR using retinal imaging and traditional risk factors.
A deep convolutional neural network (CNN) was trained on the EyePACS dataset [https://www.kaggle.com/c/diabetic-retinopathy-detection/data] to perform ‘disease’ versus ‘no-disease’ grading of DR using color fundus photographs. A second, longitudinal dataset of retinal images and clinical data from 2–6 visits of 326 LANDMark study participants was utilised to identify the traditional risk factors most strongly associated with DR development and progression: age, HbA1c, systolic blood pressure, weight, body mass index, and waist circumference. A support vector machine (SVM) classifier was independently trained to classify ‘disease’ versus ‘no-disease’ using these traditional risk factors. Because native SVMs do not output probabilities, Platt scaling, a probability calibration method, was applied to obtain a probability from the SVM. The probabilities returned by the CNN and the SVM were then combined to detect DR. We randomly divided the LANDMark dataset into a training set (90%) and a test set (10%). Since the dataset contained longitudinal data from multiple visits, we ensured that no participant’s data were split across the training and test sets, avoiding pair-wise bias and inflated type-I error.
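The risk-factor pipeline described above can be sketched with scikit-learn, which implements both Platt scaling (sigmoid calibration of SVM decision values) and participant-grouped splitting. This is a minimal illustration on synthetic data, not the study's code; the feature values, group assignments, and RBF kernel are assumptions.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.calibration import CalibratedClassifierCV
from sklearn.model_selection import GroupShuffleSplit

# Synthetic stand-in for the LANDMark risk-factor table:
# columns ~ age, HbA1c, systolic BP, weight, BMI, waist circumference.
rng = np.random.default_rng(0)
X = rng.normal(size=(326, 6))
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=326) > 0).astype(int)

# One group id per participant, so repeat visits stay on one side of the split.
groups = rng.integers(0, 120, size=326)

# 90/10 split that never separates a participant's visits
splitter = GroupShuffleSplit(n_splits=1, test_size=0.1, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups))

# Platt scaling: fit a sigmoid on the SVM's decision values to get probabilities
svm = CalibratedClassifierCV(SVC(kernel="rbf"), method="sigmoid", cv=5)
svm.fit(X[train_idx], y[train_idx])
proba = svm.predict_proba(X[test_idx])[:, 1]  # calibrated P('disease')
```

`GroupShuffleSplit` enforces the constraint the abstract describes: every visit from a given participant lands entirely in either the training or the test set.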
The SVM achieved an accuracy of 94% in detecting DR from traditional risk factors. The CNN achieved an accuracy of 92% in detecting DR from color fundus photographs. When the two were combined, an accuracy of 96% was achieved, with sensitivity, specificity, and AUC of 96%, 95%, and 96%, respectively.
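The abstract does not specify how the CNN and SVM probabilities were fused. One minimal choice, shown purely as an assumed sketch, is a convex combination of the two calibrated probabilities followed by thresholding; the weight and threshold here are hypothetical.

```python
import numpy as np

def combine_probabilities(p_cnn, p_svm, w_cnn=0.5):
    """Fuse image-based and risk-factor-based disease probabilities.

    Simple convex combination; the actual fusion rule used in the
    study is not described in the abstract.
    """
    p_cnn = np.asarray(p_cnn, dtype=float)
    p_svm = np.asarray(p_svm, dtype=float)
    return w_cnn * p_cnn + (1.0 - w_cnn) * p_svm

p_final = combine_probabilities([0.9, 0.2], [0.8, 0.4])
labels = (p_final >= 0.5).astype(int)  # 1 = 'disease', 0 = 'no-disease'
```

Because both inputs are calibrated probabilities (the SVM via Platt scaling), a weighted average stays in [0, 1] and remains interpretable as a probability.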
A novel approach for the automated grading of DR is proposed. In contrast to state-of-the-art AI-based methods that use images alone to grade DR, the proposed method combines risk-factor-based assessment with image-based assessment, improving overall accuracy, sensitivity, and AUC. The proposed method is 4% more accurate and 2% more sensitive (without compromising specificity) than the image-only approach.
This is a 2021 ARVO Annual Meeting abstract.
Flow chart of the proposed system.