Augustinus Laude, Karthikeyan Ganesan, Rajendra Acharya, Chua K Chua, Eddie Y Ng, Tong Boon Tang; Computer-aided analysis of diabetic retinopathy using trace transform functionals and optimization of data training-testing sizes. Invest. Ophthalmol. Vis. Sci. 2014;55(13):4823.
Diabetic retinopathy (DR) is characterized by changes in retinal features on fundus photography. We explored the use of trace transform functionals to extract salient features for classifying normal and DR images. Our computer-aided diagnosis (CAD) system was evaluated using 300 images from each of two databases, (i) Tan Tock Seng Hospital (TTSH) and (ii) MESSIDOR (an open-source dataset), and by varying the proportion of training and testing data to determine the minimum percentage of training data required to obtain the highest accuracy.
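The training-testing proportion sweep described above can be sketched as follows. This is an illustrative toy version, not the actual pipeline: it uses synthetic one-dimensional "feature" values and a simple nearest-mean classifier, whereas the study used trace transform features and an SVM.

```python
import random

def nearest_mean_predict(x, mean_normal, mean_dr):
    # Assign the sample to whichever class mean it lies closer to.
    return 0 if abs(x - mean_normal) <= abs(x - mean_dr) else 1

def accuracy_for_split(data, labels, train_frac, seed=0):
    # Shuffle, split into train/test at the requested proportion,
    # fit class means on the training part, and score on the test part.
    rng = random.Random(seed)
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_train = int(train_frac * len(data))
    train, test = idx[:n_train], idx[n_train:]
    normal = [data[i] for i in train if labels[i] == 0]
    dr = [data[i] for i in train if labels[i] == 1]
    m0 = sum(normal) / len(normal)
    m1 = sum(dr) / len(dr)
    correct = sum(nearest_mean_predict(data[i], m0, m1) == labels[i]
                  for i in test)
    return correct / len(test)

# Synthetic, well-separated classes so the sweep is illustrative.
rng = random.Random(42)
data = ([rng.gauss(0.0, 1.0) for _ in range(150)]
        + [rng.gauss(4.0, 1.0) for _ in range(150)])
labels = [0] * 150 + [1] * 150

for frac in (0.3, 0.5, 0.65, 0.75):
    print(f"train={frac:.0%}  acc={accuracy_for_split(data, labels, frac):.3f}")
```

With well-separated classes, accuracy plateaus well before the largest training fraction, which is the effect the study quantifies on real retinal data.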
We extracted 712 trace transform features from the 600 images after enhancing their contrast. A forward feature selection technique with a Mahalanobis distance measure was used to rank the features. The ranked features were fed to three classifiers, Nearest Mean (NM), Fisher's Linear Discriminant (FLD), and Support Vector Machine (SVM), and the best classifier was selected using ten-fold cross-validation. Each classifier was tested with various combinations of training-testing data sizes. We also explored ways to improve the classification results by rejecting 5% to 30% of outlier data for every training and testing data size.
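The ranking step can be sketched as greedy forward selection driven by a Mahalanobis-style class-separation measure. To stay dependency-free, this sketch assumes a diagonal pooled covariance (a simplification of the full Mahalanobis distance), and a toy three-feature dataset stands in for the 712 trace transform features:

```python
import random

def class_stats(X, y, feat):
    # Per-class mean and pooled (diagonal) variance for one feature.
    a = [row[feat] for row, lbl in zip(X, y) if lbl == 0]
    b = [row[feat] for row, lbl in zip(X, y) if lbl == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((v - ma) ** 2 for v in a) / len(a)
    vb = sum((v - mb) ** 2 for v in b) / len(b)
    return ma, mb, (va + vb) / 2 + 1e-12

def mahalanobis_sq(X, y, subset):
    # Squared Mahalanobis distance between class means over `subset`,
    # under a diagonal pooled-covariance assumption.
    total = 0.0
    for f in subset:
        ma, mb, pooled_var = class_stats(X, y, f)
        total += (ma - mb) ** 2 / pooled_var
    return total

def forward_select(X, y, n_keep):
    # Greedily add the feature that most increases class separation.
    remaining, ranked = set(range(len(X[0]))), []
    while remaining and len(ranked) < n_keep:
        best = max(remaining, key=lambda f: mahalanobis_sq(X, y, ranked + [f]))
        ranked.append(best)
        remaining.discard(best)
    return ranked

# Toy data: only feature index 2 actually separates the classes.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1), rng.gauss(3 * c, 1)]
     for c in (0, 1) for _ in range(100)]
y = [0] * 100 + [1] * 100
print(forward_select(X, y, 2))  # the informative feature is ranked first
```

The diagonal-covariance assumption makes the first selection equivalent to univariate ranking; the full method would invert the pooled covariance matrix of the selected subset.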
Using the SVM classifier, we obtained the maximum accuracy, with a mean sensitivity of 99.18% and specificity of 99.32%, at a training set size of 75% for the TTSH dataset, and a mean sensitivity of 97.02% and specificity of 97.70% at a training set size of 65% for the MESSIDOR dataset.
We have shown that features derived from trace transform functionals are effective in classifying normal and DR images, with SVM accuracies of 99.1% and 97.6% on the TTSH and MESSIDOR datasets, respectively. We also found that, for this algorithm, a training set size of 50% gives near-optimal accuracy, although an average training set size of 70% provided the best classification accuracies. Furthermore, with only 30% of the available data used for training, in combination with a reject classifier, we obtained nearly the same accuracy as with a 70% training set. The use of ten-fold cross-validation meant that the classifiers were evaluated on unknown images not included in the training set and are therefore robust enough for analyzing new data.
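The reject classifier mentioned above can be sketched as discarding the least-confident fraction of test samples before scoring. The margin-based confidence proxy and toy nearest-mean classifier here are assumptions for illustration, not the study's SVM:

```python
import random

def accuracy_with_reject(samples, reject_frac):
    # samples: list of (confidence, predicted_label, actual_label).
    # Keep the most confident (1 - reject_frac) fraction, score the rest.
    kept = sorted(samples, key=lambda t: t[0], reverse=True)
    kept = kept[: int(len(kept) * (1 - reject_frac))]
    correct = sum(1 for _, pred, actual in kept if pred == actual)
    return correct / len(kept)

# Overlapping synthetic classes so some samples fall near the boundary.
rng = random.Random(7)
m0, m1 = 0.0, 3.0
samples = []
for actual, mean in ((0, m0), (1, m1)):
    for _ in range(200):
        x = rng.gauss(mean, 1.5)
        pred = 0 if abs(x - m0) < abs(x - m1) else 1
        # Confidence proxy: how far the sample sits from the decision boundary.
        confidence = abs(abs(x - m0) - abs(x - m1))
        samples.append((confidence, pred, actual))

for rf in (0.0, 0.1, 0.3):
    print(f"reject={rf:.0%}  acc={accuracy_with_reject(samples, rf):.3f}")
```

Rejecting the low-margin samples concentrates evaluation on confident predictions, which is why a small training set plus a reject option can approach the accuracy of a much larger training set.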