Abstract
Purpose: To automatically classify fundus images as referable or non-referable diabetic maculopathy. Methods: An ophthalmologist graded an image set of 1000 randomly selected fundus images from a diabetic screening population, providing a benchmark for the algorithm's evaluation. To define the size and position of the macula within a fundus image, an optic nerve head [1] and fovea detection algorithm is used to extract the optic nerve head diameter and the fovea centre. Within the macula, candidate red and white lesions are identified using peak detection and contrast evaluation. From region-grown peak points, features relating to shape, size, and colour are extracted and fed into a multi-layer neural network. Two neural networks have been developed, one each for white (exudate) and red (microaneurysm, haemorrhage) lesion classification. Network features were selected using sensitivity analysis. Results: The algorithm's performance is evaluated against an ophthalmologist's screening classification, in accordance with his own screening criteria for referable maculopathy (one exudate or three microaneurysms in the macula). The presented algorithm correctly identifies referable diabetic maculopathy with 92% sensitivity and 71% specificity. Conclusions: It has been demonstrated that the presented algorithm is comparable with the ophthalmologist's classification in terms of sensitivity, but inferior in terms of specificity, misclassifying almost one fifth of the images as referable maculopathy.
Keywords: diabetic retinopathy • macula/fovea
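The following is a minimal sketch, not the authors' implementation, of the referral decision rule stated in the abstract: an image is flagged as referable maculopathy when the macula contains at least one exudate or at least three microaneurysms. The lesion counts are assumed to come from the two neural-network classifiers; the function name and parameters are illustrative.

```python
def is_referable_maculopathy(n_white_lesions: int, n_red_lesions: int) -> bool:
    """Apply the screening criterion: one exudate or three microaneurysms
    within the macula triggers referral. Counts are assumed to be produced
    by the white- and red-lesion neural networks, respectively."""
    return n_white_lesions >= 1 or n_red_lesions >= 3


if __name__ == "__main__":
    # Hypothetical per-image lesion counts.
    print(is_referable_maculopathy(n_white_lesions=0, n_red_lesions=2))  # False: non-referable
    print(is_referable_maculopathy(n_white_lesions=1, n_red_lesions=0))  # True: referable
```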