February 2016 | Volume 57, Issue 2 | Open Access | Retina
Human Vision–Motivated Algorithm Allows Consistent Retinal Vessel Classification Based on Local Color Contrast for Advancing General Diagnostic Exams
Author Affiliations & Notes
  • Iliya V. Ivanov
    Vision Rehabilitation Research Unit Centre for Ophthalmology, University Eye-Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
    Division of Experimental Ophthalmology, University of Tübingen, Centre for Ophthalmology, Institute for Ophthalmic Research, Tübingen, Germany
    Zeiss Vision Science Lab, Institute for Ophthalmic Research, Centre for Ophthalmology, University of Tübingen, Tübingen, Germany
  • Martin A. Leitritz
    University Eye-Hospital, Centre for Ophthalmology, Eberhard Karls University of Tübingen, Tübingen, Germany
  • Lars A. Norrenberg
    Klinikum am Steinenberg, Department of Obstetrics and Gynecology, District Hospital Reutlingen, Reutlingen, Germany
  • Michael Völker
    University Eye-Hospital, Centre for Ophthalmology, Eberhard Karls University of Tübingen, Tübingen, Germany
  • Marek Dynowski
    Zentrale Systeme, Zentrum für Datenverarbeitung, University of Tübingen, Tübingen, Germany
  • Marius Ueffing
    Division of Experimental Ophthalmology, University of Tübingen, Centre for Ophthalmology, Institute for Ophthalmic Research, Tübingen, Germany
  • Johannes Dietter
    Division of Experimental Ophthalmology, University of Tübingen, Centre for Ophthalmology, Institute for Ophthalmic Research, Tübingen, Germany
  • Correspondence: Iliya V. Ivanov, Zeiss Vision Science Lab, Institute for Ophthalmic Research, Centre for Ophthalmology, University of Tübingen, Roentgenweg 11, 72076 Tübingen, Germany; iliya.ivanov@uni-tuebingen.de. 
Investigative Ophthalmology & Visual Science February 2016, Vol.57, 731-738. doi:10.1167/iovs.15-17831
Abstract

Purpose: Abnormalities of blood vessel anatomy, morphology, and ratio can serve as important diagnostic markers for retinal diseases such as AMD or diabetic retinopathy. Large cohort studies demand automated and quantitative image analysis of vascular abnormalities. Therefore, we developed an analytical software tool to enable automated standardized classification of blood vessels supporting clinical reading.

Methods: A dataset of 61 images was collected from a total of 33 women and 8 men with a median age of 38 years. The pupils were not dilated, and images were taken after dark adaptation. In contrast to current methods, in which classification is based on vessel profile intensity averages, and similar to human vision, local color contrast was chosen as the discriminator, allowing artery–vein discrimination and arterial–venous ratio (AVR) calculation without vessel tracking.

Results: We achieved the best classification for our dataset, 83% ± 1 (standard error of the mean), with weighted lightness information from a combination of the red, green, and blue channels. Tested on an independent dataset, our method reached 89% correct classification, which, when benchmarked against conventional ophthalmologic classification, shows significantly improved classification scores.

Conclusions: Our study demonstrates that vessel classification based on local color contrast can cope with inter- or intraimage lightness variability and allows consistent AVR calculation. We offer an open-source implementation of this method upon request, which can be integrated into existing tool sets and applied to general diagnostic exams.

Changes in the retinal vasculature hallmark several retinal diseases. Moreover, abnormalities of retinal vessel anatomy, morphology, and ultrastructure can point to serious systemic health risks and diseases,1–3 such as diabetes, stroke, or atherosclerosis. Preventive measures on an individual as well as on a population level require standardized clinical examination of patients. Standardized methods are prerequisites for the development of novel diagnostic tools and therapeutic strategies. With the advent of large-scale population screening for common health risks, the automatic detection of blood vessels in retinal images has gained importance.4 Retinal vessel diameters and the arterial–venous ratio (AVR), that is, the relation between arterial and venous vessel width, are extensively used to identify an elevated risk of disease.5–11 To allow a standardized clinical assessment, blood vessel detection and segmentation schemes have been developed, as well as algorithms for precise classification of vessels into veins and arteries to support diagnosis.5–9,11 Color fundus images have been used as an image source to differentiate retinal arteries and veins. Apart from some semiautomated classification techniques, which involve manual vessel detection and tracking,12–14 most recent classification methods have been automatic and based on color feature analysis.5–9,11 In pixel classification schemes, pixel patterns are annotated to an artery or vein class based on color analysis of image features in the proximity of the pixel under consideration.5–7,10 After pixel classification, vessel segments are assigned to either the artery or the vein class based on the prevailing number of pixels classified within them.5–7,10 A tracking procedure joins vessel segments that belong to the same vessel, which is then classified as either vein or artery. Finally, the AVR is calculated within a region of interest (ROI) in the vicinity of the optic disc.5–7 
An established principle for vessel classification in fundus images has been to differentiate vessels via patterns in color intensities, since arteries appear lighter than veins.5–7,10 However, both kinds of vessels may vary significantly in color or lightness due to uneven image lightness and lack of color constancy, which makes classification a difficult task.5–7,10 To circumvent this problem, techniques such as image background equalization,6,7 image division and rotation,6,7 vessel tracking from the origin,6,7,10 and structural mapping8 have been put forward. It has also been considered which monochromatic channel yields optimal contrast and resolution.5–7,10 Vazquez and colleagues6 investigated different color spaces and channels. Comparing red, green, blue (RGB) with saturation (HSL) and lightness (gray level), they reported superior results from the green channel, with approximately 86% correctly classified vessels. To circumvent the uneven illumination, they divided the image into four quadrants, rotated in steps of 20°, and subsequently classified each vessel segment multiple times. In a subsequent study,5 by the use of a minimal-path vessel tracking technique, they improved their classification success ratio by 2% and concluded that vessel tracking and image division in combination with rotation techniques efficiently solved the uneven image illumination problem. Similarly, Joshi et al.,8 integrating the color properties of the green and blue channels in combination with vessel tracking and crossing information for pixel clustering, achieved comparable results. In an earlier study by the same group,10 a supervised learning algorithm used vessel tracking to solve the image illumination problem. They used higher-order image derivatives to classify the pixels within vessels and achieved 88% success when classifying arteries and veins. 
Their results also showed that green channel intensities without any normalization are an insufficient basis for a supervised learning classification method because of local intensity variations within and between images. 
This study is motivated by a clinical application. Toward this goal, we developed a method to classify vessels into arteries and veins and, consequently, to estimate the AVR fully automatically. The estimated AVR can be applied to detect retinal microvascular abnormalities, such as generalized arteriolar narrowing. The AVR is already used clinically as a diabetic retinopathy marker,13 and here we demonstrate that our machine estimations are consistent with estimations made by ophthalmology experts. 
The clinical application of the work presented here is clearly aimed at advancing general diagnostic exams by assisting, rather than substituting for, the ophthalmic specialist in the tedious technical job of calculating the AVR. More importantly, our algorithm will also be beneficial in large epidemiologic studies to detect statistical correlations between disease phenotypes and the AVR. We also address principal AVR estimation limitations caused by within- and between-image intensity variations, which hindered performance in previous studies.5–7,10 To enable a reliable AVR calculation across different image illuminations, we take advantage of information processing principles in human vision. Color constancy is the process in the visual system that makes objects appear the same color under changes in illumination.15 Local color contrast has been found to play the key role in achieving constancy. A more detailed explanation of this complex visual processing is given in the Methods section. Our machine method is implemented closely following the local contrast processing required to achieve color constancy in human vision. The resulting vessel classification algorithm is simple to implement and provided as open-source technology at no cost. We believe it may further provide the groundwork for analyzing a number of ophthalmic diseases that involve changes to blood vessels. 
Methods
The study was approved by the University of Tübingen Medical Faculty ethics committee, and informed written consent was obtained from all participants. The research adhered to the tenets of the Declaration of Helsinki. The following two datasets were used to evaluate the performance of the new algorithm. First, specifically for the needs of the study, a dataset of 61 images was acquired with a nonmydriatic automated camera system (DRS; CenterVue S.p.a., Padua, Italy). Image resolution was 2592 × 1944 pixels, which represents a retinal area of approximately 45° × 40°. Images of a normal cohort of participants were taken after informed consent. To create our own image dataset, persons escorting pregnant women at the District Hospital Reutlingen, Germany, were enrolled for fundus imaging. Additionally, employees of the clinic were enrolled. Participants were enrolled only if they had no known pathologies of the retinal vessels, the retina, or the optic nerve head. Only images with good quality (sharpness, correct ROI, illumination) were included in the dataset. In total, images from 33 women and 8 men with a median age of 38 years (minimum 18, maximum 64) were collected. The pupils were not dilated, and images were taken after a dark adaptation time of 2 minutes. Second, to confirm that our classification method is robust against variations in image intensity, we tested our algorithm on the publicly available independent VICAVR dataset, a set of retinal images used for the computation of the AVR.5,6 
Vessel Segmentation
For vessel segmentation we used the technique described by Bankhead and colleagues,16 in which vessel segmentation is achieved by thresholding the coefficients produced by the wavelet transform. The centerline for each vessel segment was defined using a morphologic thinning operation and least squares spline fitting. Finally, pixel profiles at each centerline were created perpendicularly across the vessel using linear interpolation. Vessel edges (vessel diameter) were detected using the zero crossings of the second derivative of the estimated profiles. We used the Matlab implementation made publicly available by Bankhead16 to ensure robust segmentation of retinal vessels and accurate calculation of their widths before applying our vessel classification method. 
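The edge-detection step above can be sketched as follows. This is an illustrative reconstruction in Python, not the authors' Matlab code: a 1-D intensity profile taken perpendicular to the centerline is differentiated twice, and the zero crossings of the second derivative nearest the profile center are taken as the vessel edges (the function name and index handling are our own).

```python
import numpy as np

def profile_edges(profile):
    """Locate vessel edges as the zero crossings of the second derivative
    of a 1-D intensity profile taken perpendicular to the vessel centerline.
    Illustrative sketch; returns (left, right) profile indices or None."""
    d2 = np.diff(np.asarray(profile, float), n=2)  # d2[i] ~ f''(i + 1)
    # positions (in profile coordinates) where the second derivative flips sign
    crossings = np.where(np.diff(np.sign(d2)) != 0)[0] + 2
    centre = len(profile) // 2
    left = crossings[crossings < centre]
    right = crossings[crossings >= centre]
    if len(left) == 0 or len(right) == 0:
        return None  # no edge pair found on both sides of the centerline
    # take the crossings nearest the profile centre on either side
    return int(left[-1]), int(right[0])
```

For a dark Gaussian-shaped vessel, the detected edges sit at the inflection points of the profile, which is where the intensity transition into the background is steepest.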
Vessel Classification
In contrast to previous studies, which mainly took average image intensities to differentiate vessels, the human retina has been shown to not simply record light intensities. Because of the center-surround receptive fields of retinal ganglion cells, retinal responses depend on the surrounding context: Responses depend on the difference between the light intensity in the center and that in the immediate surround. Thus local contrast is calculated by the visual system; for instance, when the task is to determine the gray shade of the checks on a floor, just estimating the light intensity of a surface is not enough. A striking example of the effect of local color contrast on visual perception is the checkershadow illusion published by Adelson.17 In it, two checks with identical gray color are put into different contexts, one in a shadow and the other not, and they are perceived as having different colors. Local color contrast has also been found to contribute to color constancy, a process in the human visual system that makes objects appear the same color under changes in illumination.15 Therefore, quantification and integration of local color contrast, as the ratio between the mean intensity from a given vessel diameter and its flanks, was chosen as the main discriminator (see Fig. 1) in our study. 
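The benefit of this ratio can be shown with a small numerical sketch (a toy example of our own, not the study's data): a multiplicative change in illumination scales the vessel and its flanks by the same factor, so the local contrast ratio is unchanged.

```python
import numpy as np

def local_contrast(profile, lo, hi, flank_px=3):
    """Mean intensity inside the vessel (profile[lo:hi]) divided by the
    mean intensity of the two adjacent flanks (hypothetical helper)."""
    inside = profile[lo:hi].mean()
    flanks = np.r_[profile[lo - flank_px:lo], profile[hi:hi + flank_px]].mean()
    return inside / flanks

# the same vessel profile under full and 40%-dimmed illumination
bright = np.array([180, 178, 182, 120, 115, 118, 181, 179, 183], float)
dim = 0.6 * bright
r_bright = local_contrast(bright, 3, 6)
r_dim = local_contrast(dim, 3, 6)
# the multiplicative illumination factor cancels in the ratio
assert np.isclose(r_bright, r_dim)
```

A classifier based on the raw vessel intensity would see 117.7 versus 70.6 for the two illuminations, whereas the contrast ratio is identical in both cases.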
Figure 1
 
Figure is for illustration purposes only. The blue solid line represents the estimated vessel profile (data fit) (l) at each centerline pixel, while the red line and circles represent the actual intensities (raw pixel intensity data) at each pixel as a function of the distance from the vessel centerline. Local color contrast for each vessel profile l is calculated as the ratio between the mean intensities (I) from a given vessel diameter (d) and its flanks (f1 and f2). For accurate vessel profile (l) estimations of the real intensity information, we require at least three informative pixels in each flank f1, f2 and in the vessel diameter d; thus only profiles larger than 10 pixels are considered for color feature estimations.
The vessels resulting from segmentation were classified within a concentric ROI, whose width equals the optic disc radius (see Fig. 2) as specified previously.9,18,19 Our own routine was implemented for detecting the optic disc and the segmentation of the optic disc border. 
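Membership in such a concentric ROI reduces to a distance test against the optic disc center. A minimal sketch with our own parameterisation of the ROI width w and offset d from Figure 2 (names hypothetical):

```python
import math

def in_roi(x, y, od_cx, od_cy, r, w, d):
    """True if pixel (x, y) lies inside the concentric ROI that starts
    at distance d beyond an optic disc of radius r centred at
    (od_cx, od_cy) and has width w (cf. Fig. 2)."""
    dist = math.hypot(x - od_cx, y - od_cy)
    return (r + d) <= dist <= (r + d + w)
```

Varying w and d as in Figure 2 (w = r/3, r/2, r; d = r/3, r/2, r) then only changes the two bounds of the distance test.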
Figure 2
 
Vessels are segmented and classified by our medical expert within the largest concentric region of interest (ROI) enclosed by the green circumferences. Blue denotes veins, red denotes arteries. Vessel fragments colored green were not classified by the expert. Our algorithmic classification is restricted to the narrower ROI (within the dotted gray circumferences) centered at the optic disc (ODc). The size of the ROI is determined by the radius of the optic disc (r). We tested the classification rate of our algorithm at several ROI widths w (w = r/3; w = r/2; w = r) and distances d from the optic disc (d = r; d = r/3; d = r/2).
Classification based on average vessel color information, in which pixel intensities are taken as means from the vessel diameters only and not contrasted to the local background, proved difficult since the color may vary significantly within and between images.5,6 
To circumvent this problem, the vessel color features extracted from the RGB channels were normalized to the mean of the local image background adjacent to the vessel diameter (d), depicted in Figure 1 by the shaded areas (flanks f1 and f2). For each estimated centerline profile (see Fig. 1) we calculated the ratio between the mean intensities (I) falling within the vessel edges (d) and the mean intensities from the vessel flanks (f1, f2):

 f_ij = Ī_d,ij / [(Ī_f1,ij + Ī_f2,ij) / 2],  (1)

where f_ij is a color feature for vessel segment i at centerline pixel j. Since vessel centerline profiles are estimations of the real intensity information, for reliable local color contrast estimation we require at least three informative pixels in each flank f and in the center d of the estimated profile. Thus, for informative color features, only profiles larger than 10 pixels are considered. We then classify the vessel color features from each vessel segment i using the k-means algorithm to find two clusters, with higher intensities for arteries and lower for veins. The k-means method first classifies each feature vector into artery or vein on features calculated from the red channel of the RGB image. The results of this first classification are stored, and then a new set of color features is extracted from the weighted intensities of the red, green, and blue channels,

 I_d,ij = w_R·I_R + w_G·I_G + w_B·I_B,  (2)

where I_d,ij is the intensity at the jth centerline pixel of vessel segment i and I_R, I_G, and I_B are the intensities from the red, green, and blue image channels, respectively. Thus each vessel segment is classified a second time, with color features calculated according to Equation 1 and intensities as specified in Equation 2. 
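The two-cluster step can be sketched as follows: Equation-1-style contrast features are split by a minimal 1-D k-means with k = 2, and the brighter cluster is labeled artery. This is our own compact re-implementation for illustration, not the authors' code; a library routine such as Matlab's kmeans would serve equally.

```python
import numpy as np

def kmeans_two(features, iters=50):
    """Minimal 1-D k-means with k = 2 over local colour contrast
    features; returns 1 for the brighter cluster (arteries) and 0
    for the darker one (veins). Illustrative sketch."""
    f = np.asarray(features, float)
    c = np.array([f.min(), f.max()])  # initial centroids at the extremes
    labels = np.zeros(len(f), int)
    for _ in range(iters):
        labels = np.abs(f[:, None] - c[None, :]).argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                c[k] = f[labels == k].mean()
    artery = int(c.argmax())          # the higher-contrast cluster
    return (labels == artery).astype(int)

# toy Equation-1 features: veins contrast more deeply against their flanks
feats = [0.55, 0.60, 0.58, 0.82, 0.85, 0.80]
print(kmeans_two(feats))  # -> [0 0 0 1 1 1]
```

Running the clustering once on red-channel features and once on the weighted-channel features of Equation 2 gives the two label sets that are later combined by voting.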
Finally, the classification results are combined to obtain the class of the vessel segment by a voting strategy, which decides the class of the vessel segment based on the prevailing number of color features in it.6 In other words, a vessel segment is considered a vein when its probability to be a vein is greater than 0.5. For instance, a vessel segment with a higher number of color features classified as belonging to the vein class will be considered a vein and vice versa. When the number of vein and artery features within a vessel segment is the same, it is not classified.  
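The voting rule reduces to a majority count over the per-profile labels of a segment, leaving ties unclassified; a minimal sketch (our own naming):

```python
def vote(labels):
    """Majority vote over the per-profile artery/vein labels within one
    vessel segment; a tie leaves the segment unclassified (None)."""
    veins = sum(1 for l in labels if l == "vein")
    arteries = len(labels) - veins
    if veins > arteries:
        return "vein"
    if arteries > veins:
        return "artery"
    return None  # equal counts: segment is not classified
```

A segment with more than half of its color features labeled vein is thus a vein, matching the probability-greater-than-0.5 formulation above.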
Image Quality Effect on Classification
Since image quality has been shown to greatly affect computer-based algorithms that extract the structure of fundus images,20 we investigated the effect of image quality and properties on classification: We divided our dataset into excellent- and average-quality images, according to criteria proposed by Paulus and colleagues,20 and assessed classification performance separately for each of the subsets. We also investigated the effect on classification of standard image quality improvement techniques, such as white balance, histogram stretching and matching, and adaptive channel matching, where histogram matching is used to modify the histogram of the red channel by using the histogram of the green channel of the same retinal image. This last manipulation was used to investigate whether possible oversaturation (image areas in which pixels have the highest possible value of 255) of the red channel in fundus photographs can impact classification. 
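The histogram-matching probe (remapping the red channel using the green channel's histogram) can be sketched with the standard quantile-mapping construction; this is a generic illustration, not the exact routine used in the study:

```python
import numpy as np

def match_histogram(source, template):
    """Remap `source` (e.g. an oversaturated red channel) so that its
    intensity histogram matches `template` (e.g. the green channel of
    the same retinal image), via CDF/quantile mapping."""
    s_vals, s_idx, s_counts = np.unique(source.ravel(),
                                        return_inverse=True,
                                        return_counts=True)
    t_vals, t_counts = np.unique(template.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_counts) / source.size
    t_cdf = np.cumsum(t_counts) / template.size
    # for each source quantile, look up the template value at that quantile
    mapped = np.interp(s_cdf, t_cdf, t_vals)
    return mapped[s_idx].reshape(source.shape)
```

Saturated pixels (value 255) all map to the top of the template's intensity range, which is what makes the manipulation a useful probe for red-channel oversaturation.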
AVR Calculation
We calculated the AVR in the ROI centered at the optic disc according to Knudtson and colleagues.21 The AVR is estimated as AVR = CRAE/CRVE, where CRAE is the central retinal artery equivalent and CRVE is the central retinal vein equivalent. We did so by implementing an algorithm given by Niemeijer and colleagues.9 This procedure takes the widest six veins and arteries into account; if the system detects fewer vessels, the algorithm works with smaller numbers as well. The vessels are pre-examined in order to avoid the inclusion of vessels with errors in the estimation of the width. Vessels were excluded if they were too small (centerline shorter than 6 pixels), if they were not at least twice as long as they were broad, or if there was too much diameter variation within a vessel segment (for a centerline shorter than 100 pixels, the standard deviation of the vessel segment diameters had to be smaller than 2.5; for a centerline of 100 pixels or longer, smaller than 5). This is necessary since image artifacts such as dust particles, with which we had to cope in our dataset, can be interpreted as vessels. After applying these filters, 70% of the vessels remained for AVR estimation. Twenty-seven percent of the vessels were excluded because they were not longer than twice their width. The vessels that were filtered out were also not included in the estimation of the statistical results of the vessel classification, that is, the number of correctly and falsely classified vessels. 
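Knudtson's revised formulas combine the widest with the narrowest of the six largest widths pairwise, with branch coefficients of 0.88 for arterioles and 0.95 for venules, iterating until one value remains; the AVR is then CRAE/CRVE. The sketch below is our reading of that procedure, not the authors' implementation:

```python
import math

def central_equivalent(widths, k):
    """Knudtson's revised central vessel equivalent: iteratively pair the
    widest with the narrowest width (k = 0.88 for arteries -> CRAE,
    k = 0.95 for veins -> CRVE) until a single value remains."""
    w = sorted(widths, reverse=True)[:6]   # at most the six widest vessels
    while len(w) > 1:
        nxt = []
        if len(w) % 2:                     # odd count: the median width carries over
            nxt.append(w.pop(len(w) // 2))
        while w:
            widest, narrowest = w.pop(0), w.pop(-1)
            nxt.append(k * math.hypot(widest, narrowest))
        w = sorted(nxt, reverse=True)
    return w[0]

def avr(artery_widths, vein_widths):
    """Arterial-venous ratio AVR = CRAE / CRVE."""
    return (central_equivalent(artery_widths, 0.88) /
            central_equivalent(vein_widths, 0.95))
```

With identical artery and vein width lists, the width terms cancel and the AVR reduces to a ratio of the branch coefficients, e.g. 0.88/0.95 for a single pairing round.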
Results
We tested the classification algorithm on our dataset of 61 images. We compared our accuracy rate of correctly classified vessels (number of correctly classified vessels divided by number of all expert-classified vessels) with a manual method, in which a trained expert graded the images. On average, the accuracy achieved was 83% ± 1 standard error of the mean (SEM) for the 61 images (the full dataset) that were successfully segmented by our algorithm. On individual images, the worst performance was 60% (Fig. 3A), still higher than chance, while approximately one-third of the images reached 90% and higher (Fig. 3B). We also tested the algorithm's performance while using the image division and rotation technique described previously6,7 and could not find a statistically significant improvement (paired t-test, P = 0.67) in vessel classification performance. The classification rate also was not influenced by testing several ROI widths (see Fig. 2). 
Figure 3
 
Vessel classification tested on individual images. Blue denotes veins, red denotes arteries. Green bars represent vessel fragments unclassified by the expert/algorithm. On the left, images are classified by our medical expert, while those on the right are machine classified. (A) An example of our worst result, with 60% correctly classified vessels (number of correctly classified vessels divided by number of all expert-classified vessels), still higher than chance. (B) A typical example of our best classification performance, with 93% successfully classified vessels. On more than 30% of our dataset images, classification performance reached 90% and higher.
As our local color contrast is a function of vessel flank f and centerline d intensities (see Equation 1, Fig. 1), we investigated whether varying the number of pixels included in the flanks would influence classification performance. To do so, we reduced the width of the vessel environment by 5 to 20 pixels in steps of 5. A reduction of the vessel environment by 5 pixels does not change the classification rate. Further reduction of the vessel environment width lowers the classification rate from 83% to 78.5% when the width is reduced by 20 pixels (Fig. 4). When applying our classification method to the VICAVR dataset, 55 of the 58 available images were successfully segmented and classified. We compared our classification with the reference images graded by one of their experts and achieved 89% correctly classified vessels. To evaluate whether our method performed better than chance, we used a receiver operating characteristic (ROC) analysis based on the classifications of vessels on our own dataset of images. The resulting curve is shown in Figure 5, and the calculated area under the curve is 0.88, which indicates that our algorithm is able to reliably discriminate between veins and arteries. On average, each image was classified in 49 ± 11 s (standard deviation) on a Core i7 3.2-GHz machine running Linux (Ubuntu 13.10, kernel 3.11.0-20-generic). 
Figure 4
 
Percentage of correctly classified vessels as a function of the width of the vessel environment. The x-axis is the number of pixels by which the width is reduced, compared with the vessel environment width actually used.
Figure 5
 
Receiver operating characteristic curve of the proposed algorithm, computed using different threshold probability values for assigning centerline pixels to either vein or artery. The area under the curve is 0.88.
As to the impact of image quality on classification, there was no significant (P > 0.05) difference between the excellent- and average-quality image datasets. On average, the image manipulations applied, such as automatic image white balance, histogram stretching and matching, and adaptive channel matching, had no significant effect (1-way ANOVA, F = 0.44, P = 0.8178) on classification. On individual images, however, performance varied with the different image processing applied, and thus maximum classification performance increased to 87% when only the maximum from each classification was taken into account. Whether individual image performance could be related to some image statistics is a question worth investigating and will be addressed in future research. 
We further calculated the AVR for our dataset of 60 images (see Methods), which were manually graded by two ophthalmology experts (blinded). The AVR was calculated for all vessel segments found by our method that had corresponding vessels graded by the experts using the IMEDOS software (IMEDOS Systems UG, Jena, Germany) according to their standards. The results from the two experts were further calibrated by a semiautomated computer-assisted program (Singapore "I" Vessel Assessment [SIVA], software version 3.0). The average error rate obtained was 0.02, which demonstrates good agreement between machine and expert calculations. Figure 6 shows a Bland-Altman plot that graphically demonstrates the distribution of errors and the agreement between our method and the manual one. The points are scattered about the mean and do not show any systematic trend in the data that would indicate a significant deviation of one method (machine) from the other (manual expert AVR estimation), such as a proportional error or a significant absolute difference. This analysis indicates that both the machine and the manual method are able to consistently estimate the AVR. 
Figure 6
 
Bland-Altman plot of the agreement between our machine algorithm and the reference expert-graded standard. The red lines represent 95% limits of the agreement. The dotted black line represents the mean difference (0.02) between AVR calculations.
Discussion
The demand for automated, quantitative image analysis of the retina has emerged with the advent of large cohort studies on retinal diseases such as AMD or diabetic retinopathy. As these diseases manifest themselves through changes in the retinal vasculature, abnormalities of vessel anatomy, morphology, and ratio can serve as important diagnostic markers. However, variations in local light and color intensities due to inhomogeneous illumination, both inter- and intraimage lightness variability, present serious problems for retinal image analysis. This problem has been recognized as the most challenging step in the process of achieving correct vessel classification into arteries and veins.5,6 In this study we established a fast and simple-to-implement method for vessel classification into arteries and veins, with the main goal of robustness against inhomogeneous image illumination. This was achieved by employing strategies used by the human visual system to extract features from local color contrast. Our method uses efficient local contrast-based features followed by k-means clustering to technically advance current classification algorithms by reducing execution time and complexity. It can also easily be integrated into existing, publicly available vessel segmentation packages such as ARIA16 to provide for the calculation of important eye disease diagnostic markers, which would ultimately improve therapeutic intervention in retinal diseases. 
In more detail, our algorithm is based on features constructed from vessel profiles, that is, the intensities across the centerline of a vessel. Similar to some previous researchers, we analyzed vessel profiles in concentric regions around the optic disc. Vessel profiles are usually employed when designing features for vessel classification.5–7,10 The crucial and effective step in the new method presented here is to extend these profiles into the vicinity, past the vessel border. In this way the immediate environment of a vessel is explored, similar to the type of local contrast analysis that takes place in the human visual system. At the same time, all color channels are weighted and the strongest signal concerning our feature is taken into account. The k-means clustering algorithm was utilized for the classification based on the local color feature, which simplified the analysis. 
We chose local color feature analysis because typical vessel properties are known to vary globally.11 For example, toward the outer image regions both arteries and veins are often very dark, and the background illumination is uneven.5–7 This ultimately leads to vessel misclassification. Similarly, the central reflex, which is larger in arteries and smaller in veins, vanishes toward the periphery. As a consequence, vessel profiles at a greater distance from the optic disc may not be informative for classification. 
Testing our simple classification algorithm on our own and the VICAVR datasets (87% and 89% correctly classified vessels, respectively) yielded results in the range of the current most advanced algorithms, which require full vessel tree extraction and classification.6–8 Although the sensitivities of those algorithms and of ours are in the same range, the former require complex implementations that are not available to the scientific community. The sensitivity of our algorithm was assessed as in other state-of-the-art vessel classification methodologies5–9,11 and further corroborated by ROC curve analysis, which yielded an area under the curve of 0.88. More specifically, Vazquez et al.6 reported an area under the ROC curve of 0.93, and 0.89 with their improved method on a larger dataset.5 Niemeijer et al.9 achieved an area under the ROC curve of 0.88. It should be mentioned, however, that while our ROC curve analysis and that of Vazquez et al.5,6 are based on vessels, Niemeijer et al.9 assessed single-vessel pixels for their ROC curve analysis. These results also demonstrate the robustness of using local contrast features to solve the illumination problem for vessel classification, since the algorithm performed without modification on images with uneven illumination, both inter- and intraimage lightness variability, acquired by two different camera systems. Furthermore, the image division and rotation technique for reducing uneven lightness described previously6 did not yield a significant improvement, which also indicates the robustness of our newly devised features. In contrast to previous works, where the analysis was based on information extracted from the green channel,5–9 our algorithm performed worse when the classification analysis was based on color information from a single channel: we achieved a 72% success rate from the green channel versus 77% from the red channel. 
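The ROC analysis referred to above can, in principle, be reproduced from per-vessel scores. A minimal sketch (our own helper, not the paper's code) computes the area under the ROC curve via the rank, or Mann-Whitney, formulation, which is equivalent to sweeping all classification thresholds:

```python
# Illustrative sketch: area under the ROC curve from per-vessel scores.
import numpy as np

def roc_auc(scores, labels):
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    labels: 1 = artery (positive), 0 = vein (negative)."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # ties between a positive and a negative score count as half correct
    correct = (pos[:, None] > neg[None, :]).sum() \
        + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return correct / (len(pos) * len(neg))
```

With perfectly separated scores this yields 1.0; an AUC of 0.88, as in our analysis, means 88% of artery/vein pairs are ranked correctly by the score.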
Our implementation of the algorithm described by Vazquez and colleagues,6 which classifies vessels based on information extracted from the green channel, reached at most a 76% success rate on our dataset. Including feature classification data from a second channel (Equation 2) enhanced classification performance by approximately 10%, which is in the range of the improvement achieved by including meta-information about vessel crossings, as reported by Kondermann and colleagues.11 
We used the Matlab-implemented vessel segmentation technique (ARIA) by Bankhead and colleagues,17 a fully automated algorithm to extract and analyze retinal vessels. Their state-of-the-art algorithm allows fast calculation of parameters along entire vessels rather than at specific points of interest, as in manual or interactive computer-assisted segmentation. The quality of vessel segmentation has been shown to play an important role in automatic vessel classification algorithms.11,22,23 Kondermann and colleagues11 demonstrated an impressive 95% of correctly classified pixels when applying their neural network classification algorithm to manually segmented images; however, their performance dropped by approximately 10% when automatically segmented images were used. While automated segmentation methods seem to hinder classification performance,11,22,23 they have been shown12 to provide better vessel analysis and diameter estimation, which is our main goal. 
We implemented the algorithm given by Niemeijer et al.9 for AVR estimation and applied it, in conjunction with our newly developed vessel classification algorithm, to a dataset for which expert-graded AVR estimation was available. The agreement between our method and the expert grading was confirmed by the resulting Bland-Altman plot, which clearly indicates that our new vessel classification method yields results suitable for further quantitative processing. 
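For readers who wish to reproduce the agreement analysis, the Bland-Altman statistics reduce to a bias and its 95% limits of agreement. A minimal sketch follows (our own helper; the paired values used in testing it are invented, not study data):

```python
# Illustrative sketch: Bland-Altman agreement statistics between
# algorithmic and expert-graded AVR values, paired per image.
import numpy as np

def bland_altman(a, b):
    """Return the mean difference (bias) and the 95% limits of
    agreement (bias +/- 1.96 * SD of the paired differences)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

In the plot of Figure 6, the dotted line corresponds to the returned bias and the red lines to the returned limits of agreement.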
As to limitations of this study and future work, our current results raise the question of whether individual image performance could be related to image statistics; this will be addressed in future research. Another interesting application of our open source technology would be discriminating plus disease in retinopathy of prematurity (ROP). However, this lies outside the scope of the present study, which concentrates on AVR calculation for detecting retinal microvascular abnormalities. The main reason is that detection of plus disease in ROP and of diabetic retinopathy relies on different vessel parameters: the former uses vessel width and tortuosity, whereas the latter uses vessel width ratios. Precise calculation of vessel width and tortuosity demands processing optimized for image segmentation; for AVR calculation, by contrast, the width ratios are what matter, processing is optimized for correct classification, and segmentation accuracy is less critical. Therefore, our algorithm in its current state would not be well suited to discriminating plus disease in ROP. What makes the use of ratios so valuable is that errors in vessel width estimation cancel out in the ratio. This is not the case when the width itself is taken as a disease indicator: vessel widths are usually estimated as the minimum line distance between the vessel edges in a binary vessel image created during segmentation, and segmentation is usually achieved by thresholding the original image. This thresholding is a subjective process, which limits the objectivity of the width measurement; the problem is reviewed in detail by Aslam et al.24 Again, it is overcome when width ratios are calculated, as in the AVR case. 
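The cancellation argument above can be checked numerically. In this hypothetical example (all numbers invented), a systematic multiplicative under-estimation of vessel width, such as a thresholding bias might introduce, distorts the individual widths but leaves the artery-to-vein ratio untouched:

```python
# Hypothetical widths (micrometres) and a hypothetical systematic bias.
artery_true, vein_true = 90.0, 120.0
bias = 0.85  # every width is under-estimated by the same factor

artery_meas = bias * artery_true
vein_meas = bias * vein_true

avr_true = artery_true / vein_true   # 0.75
avr_meas = artery_meas / vein_meas   # the bias cancels: also 0.75
```

An additive bias, by contrast, would not cancel, which is one reason segmentation quality still matters for absolute width measurements even when it matters less for the AVR.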
Additionally, it has been suggested that AVR estimation in fundus images should be based only on the inner vessel diameters, excluding the light reflected from the vessel walls.25 However, to the best of our knowledge, no systematic study to date has investigated how light reflected or scattered from vessel walls affects vessel width estimation under imperfect illumination conditions (such as the inhomogeneous image background in fundus imaging). Such reflections pose a serious confound for vessel width estimation, especially in small vessels. 
Our algorithm is based on discrimination of peripapillary veins and arteries, which is not suited to following up the development of diabetic retinopathy or AMD. Furthermore, the accuracy achieved may not be reliable enough to support a complete medical diagnosis. Rather, our method provides an automated system that calculates important eye disease diagnostic markers, complementary to existing ophthalmology practice. Another important application of our algorithm will be in large epidemiologic studies, where classification rates below 100% are acceptable, to detect statistical correlations between disease phenotypes and the AVR. 
In summary, we propose a human visual system–motivated vessel classification algorithm that is simple to implement and allows consistent AVR calculation across independent large datasets, regardless of image intensity variation. The method is free, open source, and available upon request from the authors. It is simple to integrate into existing vessel segmentation techniques and, provided an automatic optic disc detection implementation is available, can support a completely automated system for calculating important eye disease diagnostic markers, which could improve therapeutic intervention in retinal diseases. 
Acknowledgments
This work was performed on the computational resource bwGRiD Cluster Tübingen funded by the Ministry of Science, Research and the Arts Baden-Württemberg and the Universities of the State of Baden-Württemberg, Germany, within the framework program bwHPC. MU received funding from the Tistou und Charlotte Kerstan Foundation. 
Disclosure: I.V. Ivanov, None; M.A. Leitritz, None; L.A. Norrenberg, None; M. Völker, None; M. Dynowski, None; M. Ueffing, None; J. Dietter, None 
References
1. Wong TY, Klein R, Klein BEK, Tielsch JM, Hubbard L, Nieto FJ. Retinal microvascular abnormalities and their relationship with hypertension, cardiovascular disease, and mortality. Surv Ophthalmol. 2001; 46: 59–80.
2. Wong T, Mitchell P. The eye in hypertension. Lancet. 2007; 369: 425–435.
3. Wong TY, Klein R, Couper DJ, et al. Retinal microvascular abnormalities and incident stroke: the Atherosclerosis Risk in Communities Study. Lancet. 2001; 358: 1134–1140.
4. Abràmoff MD, Garvin MK, Sonka M. Retinal imaging and image analysis. IEEE Rev Biomed Eng. 2010; 3: 169–208.
5. Vazquez SG, Cancela B, Barreira N, et al. Improving retinal artery and vein classification by means of a minimal path approach. Mach Vis Appl. 2013; 24: 919–930.
6. Vazquez SG, Barreira N, Penedo MG, Ortega M, Pose-Reino A. Improvements in retinal vessel clustering techniques: towards the automatic computation of the arterio venous ratio. Computing. 2010; 90: 197–217.
7. Grisan E, Ruggeri A. A divide et impera strategy for automatic classification of retinal vessels into arteries and veins. Proceedings of the 25th Annual International Conference of the IEEE Engineering in Medicine and Biology Society. 2003; 1: 890–893.
8. Joshi VS, Garvin MK, Reinhardt JM, Abramoff MD. Automated artery-venous classification of retinal blood vessels based on structural mapping method. Proc SPIE Medical Imaging: Computer-Aided Diagnosis. 2012; 8315: 83150I.
9. Niemeijer M, Xu X, Dumitrescu AV, et al. Automated measurement of the arteriolar-to-venular width ratio in digital color fundus photographs. IEEE Trans Med Imaging. 2011; 30: 1941.
10. Niemeijer M, van Ginneken B, Abràmoff MD. Automatic classification of retinal vessels into arteries and veins. Proc SPIE 7260, Medical Imaging: Computer-Aided Diagnosis. 2009; 72601F.
11. Kondermann C, Kondermann D, Yan M. Blood vessel classification into arteries and veins in retinal images. Proc SPIE 6512, Medical Imaging: Image Processing. 2007; 651247.
12. Aguilar W, Martínez-Pérez ME, Frauel Y, Escolano F, Lozano MA, Espinosa-Romero A. Graph-based methods for retinal mosaicing and vascular characterization. In: GbRPR: Lecture Notes in Computer Science. Berlin: Springer; 2007; 4538: 25–36.
13. Chrástek R, Wolf M, Donath K, Niemann H, Michelson G. Automated calculation of retinal arteriovenous ratio for detection and monitoring of cerebrovascular disease based on assessment of morphological changes of retinal vascular system. Proceedings of the IAPR Workshop on Machine Vision Applications. Nara, Japan: IAPR; 2002: 240–243.
14. Rothaus K, Jiang X, Rhiem P. Separation of the retinal vascular graph in arteries and veins based upon structural knowledge. Image Vis Comput. 2009; 27: 864–875.
15. Adelson EH. Checkershadow illusion. Available at: http://persci.mit.edu/gallery/checkershadow. Accessed March 27, 2015.
16. Hurlbert A, Wolf K. Color contrast: a contributory mechanism to color constancy. Prog Brain Res. 2004; 144: 147–160.
17. Bankhead P, Scholfield CN, McGeown JG, Curtis TM. Fast retinal vessel detection and measurement using wavelets and edge location refinement. PLoS One. 2012; 7: e32435.
18. Hubbard LD, Brothers RJ, King WN, et al. Methods for evaluation of retinal microvascular abnormalities associated with hypertension/sclerosis in the Atherosclerosis Risk in Communities Study. Ophthalmology. 1999; 106: 2269–2280.
19. Chrástek R, Wolf M, Donath K, Niemann H, Michelson G. Automated calculation of retinal arteriovenous ratio for detection and monitoring of cerebrovascular disease based on assessment of morphological changes of retinal vascular system. Mach Vis Appl. 2002: 240–243.
20. Paulus J, Meier J, Bock R, Hornegger J, Michelson G. Automated quality assessment of retinal fundus photos. Int J Comput Assist Radiol Surg. 2010; 5: 557–564.
21. Knudtson MD, Lee KE, Hubbard LD, Wong TY, Klein R, Klein BEK. Revised formulas for summarizing retinal vessel diameters. Curr Eye Res. 2003; 27: 143–149.
22. Niemeijer M, Staal J, van Ginneken B, Loog M, Abramoff M. Comparative study of retinal vessel segmentation methods on a new publicly available database. Proc SPIE. 2004; 5370: 648–656.
23. Joshi V, Garvin M, Reinhardt J, Abramoff M. Automated method for the identification and analysis of vascular tree structures in retinal vessel network. Proc SPIE Medical Imaging: Computer-Aided Diagnosis. 2011; 7963: 79630I.
24. Aslam T, Fleck B, Patton N, Trucco M, Azegrouz H. Digital image analysis of plus disease in retinopathy of prematurity. Acta Ophthalmol. 2009; 87: 368–377.
25. Fischer MD, Huber G, Feng Y, et al. In vivo assessment of retinal vascular wall dimensions. Invest Ophthalmol Vis Sci. 2010; 51: 5254–5259.
Figure 1
 
This figure is for illustration purposes only. The blue solid line represents the estimated vessel profile (data fit, l) at each centerline pixel, while the red line and circles represent the actual intensities (raw pixel intensity data) at each pixel as a function of distance from the vessel centerline. The local color contrast for each vessel profile l is calculated as the ratio between the mean intensities (I) of the vessel diameter region (d) and its flanks (f1 and f2). For accurate vessel profile (l) estimation from the real intensity information, we require at least three informative pixels in each flank (f1, f2) and in the vessel diameter (d); thus only profiles longer than 10 pixels are considered for color feature estimation.
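The profile-validity rule in this caption can be expressed compactly. The sketch below is our illustrative reading of it (the function name and profile layout are our own): a profile yields a color contrast value only when the flanks f1, f2 and the diameter region d each contribute at least three pixels, that is, when the profile is longer than 10 pixels.

```python
# Sketch of the Figure 1 validity rule: f1, d and f2 each need >= 3 pixels,
# so only profiles longer than 10 pixels produce a contrast value.
import numpy as np

def profile_contrast(profile, d_len):
    profile = np.asarray(profile, float)
    f_len = (len(profile) - d_len) // 2      # pixels per flank
    if len(profile) <= 10 or d_len < 3 or f_len < 3:
        return None                          # profile too short: skip it
    vessel = profile[f_len:f_len + d_len]
    flanks = np.concatenate([profile[:f_len], profile[f_len + d_len:]])
    return vessel.mean() / flanks.mean()
```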
Figure 2
 
Vessels are segmented and classified by our medical expert within the largest concentric region of interest (ROI) enclosed by the green circumferences. Blue denotes veins, red denotes arteries. Vessel fragments colored green were not classified by the expert. Our algorithmic classification is restricted to the narrower ROI (within the dotted gray circumferences) centered at the optic disc (ODc). The size of the ROI is determined by the radius of the optic disc (r). We tested the classification rate of our algorithm at several ROI widths w (w = r/3; w = r/2; w = r) and distances d from the optic disc (d = r; d = r/3; d = r/2).
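The annular ROI described in this caption can be sketched as a simple distance test. This is our illustrative reading of the geometry, assuming d is measured outward from the optic disc boundary (the caption leaves the reference point implicit); names and defaults are ours:

```python
# Sketch of the Figure 2 ROI: keep centerline pixels whose distance from
# the optic-disc centre ODc lies inside an annulus of width w at offset d.
import numpy as np

def in_roi(points, odc, r, w_frac=0.5, d_frac=0.5):
    """points: (N, 2) pixel coordinates; odc: optic-disc centre (x, y);
    r: optic-disc radius. Keeps points with distance in [r + d, r + d + w],
    where w = w_frac * r and d = d_frac * r (e.g. w = r/2, d = r/2)."""
    dist = np.linalg.norm(np.asarray(points, float) - np.asarray(odc, float), axis=1)
    inner = r + d_frac * r
    return (dist >= inner) & (dist <= inner + w_frac * r)
```

Sweeping w_frac over 1/3, 1/2, and 1 (and likewise d_frac) reproduces the parameter grid tested in the caption.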
Figure 3
 
Vessel classification tested on individual images. Blue denotes veins, red denotes arteries. Green bars represent vessel fragments unclassified by the expert/algorithm. Images on the left are classified by our medical expert, while those on the right are machine classified. (A) An example of our worst result, with 60% correctly classified vessels (number of correctly classified vessels divided by number of all expert-classified vessels), still higher than chance. (B) A typical example of our best classification performance, with 93% successfully classified vessels. On more than 30% of our dataset images, classification performance reached 90% or higher.
Figure 4
 
Percentage of correctly classified vessels as a function of the width of the vessel environment. The x-axis shows the number of pixels by which the width is reduced, compared with the vessel-environment width actually used.
Figure 5
 
Receiver operating characteristic curve of the proposed algorithm, computed using different threshold probability values for assigning centerline pixels to either vein or artery. The area under the curve is 0.88.
Figure 6
 
Bland-Altman plot of the agreement between our machine algorithm and the reference expert-graded standard. The red lines represent 95% limits of the agreement. The dotted black line represents the mean difference (0.02) between AVR calculations.