Purpose
Automated retinal vessel analysis from fundus images, including estimation of the central retinal arteriolar and venular equivalents, requires automated labelling of arteries and veins. We previously developed a graph-based framework to separate overlapping arterial-venous (A/V) trees (Hu et al., MICCAI 2013). Here we present a new version that determines arteries and veins in both overlapping and non-overlapping vessels by incorporating a pixel classification algorithm.
Methods
An expert annotated vessel pixels in the public DRIVE dataset (40 images from 40 subjects, equally divided into a training and a test set) as artery or vein (Fig. 1). Our approach first uses the topology of the vasculature to separate overlapping vessels into A/V trees with a graph-based algorithm. A support-vector-machine classifier, trained on the training set using 19 local intensity features, is then used to classify the remaining independent (non-overlapping) vessels as artery or vein. The approach is validated on the test set, with both manual and automatic vesselness maps as inputs. The evaluation uses the coverage rate (the ratio of classified vessel pixels to all vessel pixels defined in the A/V tree reference standard) and the accuracy (the ratio of correctly classified vessel pixels to all classified vessel pixels).
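As a minimal sketch of the two metrics (assuming per-pixel label maps with background/artery/vein codes; all names are illustrative, not the actual implementation), the computation could look as follows:

    import numpy as np

    def coverage_and_accuracy(ref_labels, pred_labels):
        # ref_labels:  per-pixel A/V tree reference standard
        #              (0 = background, 1 = artery, 2 = vein)
        # pred_labels: per-pixel output of the method
        #              (0 = unclassified/background, 1 = artery, 2 = vein)
        ref_vessel = ref_labels > 0                   # all vessel pixels in the reference standard
        classified = (pred_labels > 0) & ref_vessel   # vessel pixels the method assigned a label

        # coverage rate: classified vessel pixels / all reference vessel pixels
        coverage = classified.sum() / ref_vessel.sum()

        # accuracy: correctly classified vessel pixels / all classified vessel pixels
        correct = (pred_labels == ref_labels) & classified
        accuracy = correct.sum() / classified.sum()

        return coverage, accuracy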
Results
An example result is shown in Fig. 2. The mean accuracy/coverage rate with 95% confidence interval was 88.2% (84.3, 92.1) / 88.5% (86.8, 90.2) for the manual segmentation, and 82.5% (77.4, 87.6) / 85.2% (83.7, 86.7) for the automatic segmentation. With our previously developed approach, the mean accuracy/coverage rate with 95% CI was 89.1% (84.9, 93.3) / 82.0% (80.0, 84.1) for the manual segmentation, and 83.1% (77.6, 88.5) / 78.6% (77.0, 80.2) for the automatic segmentation. Thus the new approach significantly improves the coverage rate while maintaining similar accuracy for both inputs.
Conclusions
We present a method that automatically constructs A/V trees in retinal images given a vessel segmentation. Evaluation on a publicly available dataset and comparison with our previous method demonstrate its improved performance.