H. Narasimha-Iyer, A. Can, B. Roysam, C.V. Stewart, H.L. Tanenbaum, A. Majerovics, H. Singh; Semantic Change Understanding of Vascular and Non-Vascular Changes From Multi-Temporal Color Retinal Fundus Images. Invest. Ophthalmol. Vis. Sci. 2005;46(13):4287.
Purpose: Digital photography of the retina is widely used for diagnosis, patient care, and management of eye disorders. The present research is motivated by the need for automated, objective, quantitative approaches that detect and analyze vascular and non-vascular changes in multi-temporal color retinal fundus images and produce a semantic description of those changes.

Methods: Retinal features, including the vasculature, vessel branching/crossover locations, optic disk, and fovea, are extracted automatically. The images are registered to sub-pixel accuracy using a 12-dimensional mapping that accounts for the unknown retinal curvature and camera parameters. The color images are corrected for non-uniform illumination using a robust homomorphic surface-fitting algorithm. The changes are then segmented using an algorithm that is robust to relevant artifacts such as dust particles in the optical path. Changes in the non-vascular regions are classified into five clinically significant categories using a Bayesian algorithm constrained by Markov random fields. A segment-wise model-selection algorithm describes the different kinds of vascular changes. The output of the system is a semantic description of the changes and a flicker animation overlaid with change-analysis results for easy assessment by the user.

Results: A multi-observer validation was performed on 43 image pairs from 22 eyes with non-proliferative and proliferative diabetic retinopathy. For non-vascular regions, the algorithm achieved a 96.83% change-detection rate, a 3.17% miss rate, and a 17.65% false-alarm rate. It correctly classified 97.39% of the detected changes. For vascular changes, the algorithm produced the correct semantic description 92.3% of the time.

Conclusions: The primary contribution of this work is a fully automated semantic change-understanding system for retinal images.
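The abstract does not spell out the form of the 12-dimensional registration mapping; a common choice for retinal image registration, assumed here purely for illustration, is a quadratic model with six polynomial coefficients per output coordinate (over the basis 1, x, y, x², xy, y²), which can approximate the projection of a curved retinal surface. A minimal sketch of applying such a mapping to point coordinates:

```python
import numpy as np

def quadratic_transform(points, theta):
    """Apply a hypothetical 12-parameter quadratic mapping to 2-D points.

    points : (N, 2) array of (x, y) coordinates.
    theta  : 12 coefficients, 6 per output coordinate, over the
             polynomial basis [1, x, y, x^2, x*y, y^2].
    """
    theta = np.asarray(theta, dtype=float).reshape(2, 6)
    x, y = points[:, 0], points[:, 1]
    # Build the quadratic basis for every point: shape (N, 6).
    basis = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2], axis=1)
    # Each transformed coordinate is a linear combination of the basis.
    return basis @ theta.T

# Identity mapping: x' = x, y' = y (all quadratic terms zero).
identity = [0, 1, 0, 0, 0, 0,
            0, 0, 1, 0, 0, 0]
pts = np.array([[10.0, 20.0], [5.0, 7.0]])
out = quadratic_transform(pts, identity)
```

In practice the 12 coefficients would be estimated from matched landmark pairs (e.g. vessel branching/crossover locations) by least squares, and sub-pixel accuracy follows from the continuous-valued coordinates the fitted polynomial produces.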
Automated change analysis can enable novel applications, including screening by doctors of optometry and rapid referral to specialists, especially in remote, under-serviced areas. Although we have experimentally demonstrated the effectiveness of the method on manifestations of diabetic retinopathy, the techniques are extensible to changes associated with several other retinal disorders.