Computational Quantification of Complex Fundus Phenotypes in Age-Related Macular Degeneration and Stargardt Disease
Author Affiliations & Notes
  • Gwenole Quellec
    From the Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa;
    the Department of Biomedical Engineering, University of Iowa, Iowa City, Iowa;
  • Stephen R. Russell
    From the Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa;
    the Institute for Vision Research, University of Iowa, Iowa City, Iowa;
  • Todd E. Scheetz
    From the Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa;
    the Department of Biomedical Engineering, and
    the Institute for Vision Research, University of Iowa, Iowa City, Iowa;
  • Edwin M. Stone
    From the Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa;
    the Institute for Vision Research, University of Iowa, Iowa City, Iowa;
    the Howard Hughes Medical Institute, University of Iowa, Iowa City, Iowa; and
  • Michael D. Abràmoff
    From the Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, Iowa City, Iowa;
    the Departments of Biomedical Engineering and
    Electrical and Computer Engineering, and
    the Institute for Vision Research, University of Iowa, Iowa City, Iowa;
    the Department of Veterans Affairs, Iowa City VA Medical Center, Iowa City, Iowa.
  • Corresponding author: Michael D. Abràmoff, Department of Ophthalmology and Visual Sciences, University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52242; [email protected]
Investigative Ophthalmology & Visual Science May 2011, Vol.52, 2976-2981. doi:https://doi.org/10.1167/iovs.10-6232
Abstract

Purpose: To describe an automated method of quantification of specific fundus phenotypes and to evaluate its performance in differentiating drusen, the hallmark lesions of age-related macular degeneration (AMD), from similar-looking bright lesions, the pisciform deposits or flecks typical of Stargardt disease (SD).

Methods: Fundus macular images of 30 eyes of 30 subjects were studied. Fifteen subjects had a clinical diagnosis of AMD with at least 10 intermediate and/or 1 large drusen, and the other 15 had SD. As a test of bright-lesion separation, the AMD and SD subjects were chosen from the heterogeneous phenotypes of each disorder to be as visually similar as possible. Drusen and fleck properties were quantified from the color images with an automated method, and a shape classifier was used to classify each image as characteristic of either AMD or SD. Image identification performance was quantified with the area under the receiver operating characteristic curve (AUC).

Results: All SD subjects demonstrated at least one disease-associated variant of the ABCA4 gene. The method achieved an AUC of 0.936 for differentiating AMD from SD.

Conclusions: Automated quantification of fundus phenotypes was achieved, and the results show that the method can differentiate AMD from SD, two distinctly different genetically associated disorders, by quantifying the properties of the bright lesions (drusen and flecks) in their fundus images, even when the images were selected to be visually similar. Quantification of fundus phenotypes may allow recognition of new phenotypes and correlation with new genotypes, and may measure disease-specific biomarkers to improve the management of patients with AMD or SD.

Age-related macular degeneration (AMD) and Stargardt disease (SD) are the leading causes of visual loss in the United States from adult-onset and juvenile macular degeneration, respectively. Subsets of AMD and SD subjects share clinical symptoms and bear similar fundus manifestations, due to lightly colored, subretinal pigment epithelial (sub-RPE) deposits, although the disorders otherwise differ in age of onset, mechanism, and tempo of visual loss. 1,2 Drusen, the bright lesions typical of AMD, and the pisciform deposits or flecks typical of SD, are similar in color and size but differ in shape: drusen are more rounded, whereas the flecks in SD often have a more elongated, fishtail (or pisciform) shape. 
At least eight genes and genetic loci have been reproducibly shown to contribute to the development of the AMD phenotype, although only two of these account for a large population-attributable risk in the United States: complement factor H (CFH) and ARMS2/LOC387715. 3–21 SD is a juvenile macular degeneration, recessive in 90% of cases, caused by mutations in the ABCA4 gene. 
Quantifying the characteristic AMD and SD fundus phenotypes is of great interest, because it may allow a better understanding of the relation between the genes responsible for these diseases and the variable gene expression, as demonstrated in the phenotype. It can be used to enrich groups for specific endophenotypes, thereby allowing increased power for gene discovery studies. Differentiating between AMD and SD exclusively by examination of the retina is challenging, especially if only drusen or flecks are present. 
Several methods of detecting drusen and other bright lesions in fundus images have been proposed, using mathematical morphology, 22 histogram-based adaptive local thresholding, 23 background removal and histogram-based thresholding, 24 or pixel classification. 25 Prior attempts at drusen segmentation and identification relied on thresholding techniques or on methods that reduce choroidal and illumination variation. 
We have recently developed a new method to detect flecks and differentiate them from drusen. It uses unsupervised model matching instead of supervised classification, where the models are defined mathematically. The advantage of using unsupervised model matching for this application is that no training set is required, given the small number of available SD images from molecularly confirmed ABCA4-bearing patients. 26 Thus, all available fundus images can be used to test the performance of the system. We use a single phenotype metric, the flatness value, to differentiate drusen from flecks. 
The purpose of the present study was to describe this automated method of quantification of specific fundus phenotypes and to evaluate its performance in differentiating patients with drusen, the hallmark lesions of age-related macular degeneration (AMD), from patients with the similar-looking pisciform deposits or flecks of SD, while excluding patients whose images showed other funduscopic signs of AMD (geographic atrophy and choroidal neovascularization scars) or of SD (bull's eye maculopathy). 
Methods
Subjects, Genetic Status, and Genetic Testing
In this retrospective study, 30 subjects were included: 15 from a large Iowa cohort of AMD and 15 from a clinically and genetically characterized SD cohort. The study protocol adhered to the tenets of the Declaration of Helsinki. The Institutional Review Board of the University of Iowa approved the study protocol, and, because only deidentified images were used, a waiver of written informed consent was granted. AMD subjects were evaluated if they had sufficient quality fundus images, had a clinical diagnosis of nonexudative AMD with at least one fundus image containing multiple intermediate and/or large drusen (AREDS simple scale grade 3), and were at least 50 years of age. SD subjects were evaluated for inclusion if they had reduced visual acuity before age 35 associated with multiple fundus flecks. In equivocal clinical cases, dyschromatopsia and/or mildly reduced ERGs were used as SD criteria. A single retinal specialist (SRR) selected the subject images, choosing from a pool of more than 100 AMD and 100 SD subjects with heterogeneous phenotypes, to identify a pool of AMD and SD subject images that were as visually similar as possible, and choosing the highest quality single fundus image from each subject (see Figs. 3, 4). This selection was verified by two additional retinal specialists (MDA, EMS). 
All SD subjects had molecular genetic evaluation of the ABCA4 gene, as previously described, 27 and all 15 SD subjects had at least one disease-causing ABCA4 allele identified, whereas in 7, both alleles were found. 
Fundus Photography
Images were acquired with a 30° camera (FF3; Carl Zeiss Meditec, Inc., Dublin, CA) and recorded on ASA 100 film (Kodachrome; Eastman Kodak, Rochester, NY). The film-based images were digitized at 3456 × 2304 pixels and 12 bits per channel with a novel, high-throughput slide-scanning device. The digitized images were stored as raw pixel values (.CR2) and converted to uncompressed TIFF format for processing; only the green channel was used for image analysis in this study. 
Modeling the Shape of Drusen and Flecks
Drusen are round, white-to-yellow lesions that are deposited between the basement membrane of the RPE and Bruch's membrane and can be modeled by ellipsoid shapes of low eccentricity, representing the contours and potential drusen fusion. A drusen model, with ellipsoid SDs α1 and α2 along the minor and major axes, respectively, is given by equation 1, where I(u,v) is the intensity of the image at pixel location (u,v), β is a shape parameter modeling the sharpness of the drusen boundary, and δ models drusen contrast with the fundus background. To model elliptical shapes with angular orientation θ, (u,v) can be replaced in equation 1 by the rotationally transformed coordinates (u′,v′) = R_θ(u,v). 
Flecks are pisciform, yellowish deposits in the sub-RPE potential space. Typically, in SD, flecks vary in shape, including ellipsoid shapes, but at least a few are always more elongated than drusen. A fleck model, with SDs α1 and α2 along the minor and major axes, respectively, is given by equation 2. Typical examples of images containing either AMD-associated drusen or SD-associated flecks are shown in Figure 1, and examples of elliptical and pisciform lesion models are shown in Figure 2A. 
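Because the analytic forms of equations 1 and 2 are not reproduced in this text, the following Python sketch only illustrates how a bright-lesion template could be parameterized with the quantities defined above (α1, α2, θ, β, δ). The generalized-Gaussian form, the grid size, and the function name are illustrative assumptions, not the published model or the authors' C++ implementation.

```python
# Illustrative sketch only: a generalized-Gaussian ellipsoid lesion template
# parameterized as described above. The exact functional forms of equations
# 1 and 2 are assumptions here.
import numpy as np

def ellipsoid_template(size, alpha1, alpha2, theta, delta=1.0, beta=2.0):
    """Bright-lesion model I(u, v) on a size x size grid.

    alpha1, alpha2: SDs along the minor and major axes (pixels)
    theta: orientation of the major axis (radians)
    delta: contrast against the fundus background
    beta: sharpness of the lesion boundary
    """
    u, v = np.meshgrid(np.arange(size) - size // 2,
                       np.arange(size) - size // 2, indexing="ij")
    # (u', v') = R_theta (u, v): rotate coordinates by theta
    up = u * np.cos(theta) + v * np.sin(theta)
    vp = -u * np.sin(theta) + v * np.cos(theta)
    r2 = (up / alpha1) ** 2 + (vp / alpha2) ** 2
    return delta * np.exp(-0.5 * r2 ** beta)

# A drusen-like instance (low eccentricity) and a more elongated, fleck-like instance.
druse = ellipsoid_template(27, alpha1=4.0, alpha2=5.0, theta=0.0)
fleck = ellipsoid_template(27, alpha1=2.5, alpha2=8.0, theta=np.pi / 6)
```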
Figure 1.
 
Examples of representative fundus images with drusen (left) and flecks (right). Top: original color image. Middle: contrast-enhanced green channel with drusen (left) or flecks (right). Bottom: matching lesion templates displayed at each location where a lesion was detected, only if within 375 pixels from the fovea. If an elliptic or pisciform lesion with standard deviations α1 and α2, orientation θ, amplitude δ, and boundary shape parameter β was detected in the middle images at location (u,v), then an elliptic or a pisciform shape with such parameters was drawn in the bottom image at that pixel location (u,v). The intensity of the lesions represents the distribution of the intensity over the lesion template. The intensity has no relationship to the confidence in the detection. As can be seen, the algorithm undersegments the lesions, showing its high specificity; few false-positive lesions were detected.
Figure 2.
 
(A) Two lesion models, for ellipsoid and pisciform lesions, and their representations in transformed space, indicated by H for the horizontal Haar filter and V for the vertical Haar filter with which they were convolved. In transformed images, black pixels represent negative, and white pixels represent positive coefficients. (B) The average and next eight Eigen components of a compact filter set, generated from the example ellipsoid lesion model in (A). The top (horizontal Haar filter) and bottom (vertical Haar filter) rows show the results after convolution with those respective filters, resulting in the Eigen filters or templates used in the study.
Overview of Bright-Lesion Detection and Quantification
We first detected the bright lesions (flecks and drusen) by trying to match image regions to the above rotation- and scale-invariant models, 28 a process called template matching. The shape of each detected bright lesion was then quantified as its flatness value, and a histogram of all flatness values in the image was created, encoding the shape of all bright lesions in the image. Differentiating AMD patients from SD patients then reduced to finding a discriminant direction in this histogram space through linear discriminant analysis. 
Lesion Model Matching
The goal of lesion model or template matching is to find the parameters (α1, α2, θ, β, δ) of a template that minimize the pixel-to-pixel distance, or difference, between a region in an image centered on a pixel (u,v) and the lesion model. The distance is defined as the averaged sum of squared pixel-by-pixel differences. If the distance is below a set threshold, then a lesion model with those parameters is present at the pixel location (u,v). 
The template-matching process was applied to each pixel location in the image, within a region adjusted for the scale of the lesion model. The scale was adapted to α1 and α2. 
To make the algorithm more robust and less sensitive to noise and background variability of the retinal images, we first convolved the green-channel image with two 4 × 4 Haar (step) filters: a horizontal (H) and a vertical (V) filter. Two transformed images were thus obtained, corresponding to the convolution with those two filters. In our experience, this preprocessing step leads to the highest signal-to-noise ratio when characterizing candidate bright lesions in fundus images, while preserving the shape of the bright lesions. The lesion models were also convolved with the H and V filters, and examples of the resulting transformed models are shown in Figure 2A. The transformed lesion models were matched to image regions in the two transformed images, respectively. The matching was thus performed in Haar space; in other words, the H-transformed template was matched with the H-transformed image, and the V-transformed template was matched with the V-transformed image. 
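A minimal sketch of this preprocessing step follows, assuming 4 × 4 step filters with equal positive and negative halves; the exact filter layout and normalization are not stated in the text, so they are assumptions here, and scipy is used purely for illustration.

```python
# Sketch of the Haar (step) preprocessing: convolve the green channel with a
# 4x4 horizontal and a 4x4 vertical step filter. The +/- layout and the 1/16
# normalization are assumptions; only the general idea follows the text.
import numpy as np
from scipy.ndimage import convolve

def haar_transform(green):
    """Return the H- and V-transformed versions of a 2-D green-channel array."""
    h_filter = np.hstack([np.ones((4, 2)), -np.ones((4, 2))]) / 16.0  # horizontal step
    v_filter = h_filter.T                                             # vertical step
    return convolve(green, h_filter), convolve(green, v_filter)
```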
Compact Filter Set of Lesion Models
Instead of matching image regions directly to the parameterized models in equations 1 and 2, which would require matching to a large number of lesion models with different parameters (α1, α2, θ), we created a compact filter set derived from a large number of instances of the models, as follows. Instances of lesion models were generated from equations 1 and 2, setting β = 2 and δ = 1 based on preliminary studies, and varying the sizes α1 and α2 and the rotation θ. Once generated, the lesion model instances were grouped according to their sizes α1 and α2, and within each group a single circular analysis region was used. The radii for the three scales were 7, 13, and 26 pixels (∼35, 65, and 130 μm, respectively), three times as large as the typical lesion size in each scale group, as this was the optimal region size for microaneurysm detection in our previous work. 29,30 Thus, we obtained 4080 lesion model instances (1360 for each scale), each represented by up to 2096 pixels, 1048 in each transformed image (after convolution with the Haar filters). Principal component analysis (PCA) was then applied to the 1360 model instances of each scale. See Figure 2B for an example of the resulting first few principal components for the 13-pixel-scale lesion models. The number (n) of principal components retained for the compact filter set was chosen so that 99% of the variance in the model instances was retained, resulting in n = 13, 26, and 46 filters for the three scales, respectively. 
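The construction of the compact filter set for a single scale group might be sketched as below; the parameter grids, the reuse of the ellipsoid_template and haar_transform helpers from the sketches above, and the use of an SVD for the PCA are illustrative assumptions rather than the published procedure.

```python
# Sketch: generate many lesion-model instances for one scale, stack their
# Haar-transformed pixels as vectors, and keep the principal components that
# explain 99% of the variance. Parameter grids are illustrative only.
import numpy as np
from itertools import product

def compact_filter_set(radius=13, variance_kept=0.99):
    alphas = np.linspace(radius / 6.0, radius / 2.0, 8)       # assumed size grid
    thetas = np.linspace(0.0, np.pi, 12, endpoint=False)      # assumed orientation grid
    instances = []
    for a1, a2, th in product(alphas, alphas, thetas):
        tmpl = ellipsoid_template(2 * radius + 1, a1, a2, th)  # sketch above
        h_img, v_img = haar_transform(tmpl)                    # sketch above
        instances.append(np.concatenate([h_img.ravel(), v_img.ravel()]))
    X = np.asarray(instances)
    X = X - X.mean(axis=0)                                     # center before PCA
    _, s, vt = np.linalg.svd(X, full_matrices=False)
    explained = np.cumsum(s ** 2) / np.sum(s ** 2)
    n = int(np.searchsorted(explained, variance_kept)) + 1     # components for 99% variance
    return vt[:n], X                                           # filters and centered instances
```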
Finding the Closest Matching Lesion Model for Each Image Pixel
The distance between the image region around each pixel and each lesion model instance is approximated by the Euclidean distance between their projections onto the compact filter set, after normalization. Approximate nearest-neighbor classification is then used to find the closest lesion model instance among the k = 1360 instances for each scale, using the collection of all lesion model instances as the training set. 26,31  
Thus, three closest lesion model instances are found for each pixel, one for each scale, and if their Euclidean distance is below some threshold t, then we define a match for that lesion model instance. Figure 1 (bottom row) shows the closest lesion model instance for each pixel if there was a match for that pixel. The specificity of the lesion-detection algorithm is thus controlled by parameter t. In preliminary experiments, we found that the choice of t usually did not affect performance. 
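The matching step for one scale could be sketched as below. An exact KD-tree search stands in for the approximate nearest-neighbor search cited above, square rather than circular regions are used, and the per-patch normalization is a simplification; all of these are assumptions for illustration.

```python
# Sketch: project each image patch onto the compact filter set and accept a
# match when the distance to the nearest lesion-model instance is below t.
import numpy as np
from scipy.spatial import cKDTree

def match_lesions(h_img, v_img, filters, instances, radius=13, t=0.5):
    proj_instances = instances @ filters.T          # instances in filter space
    tree = cKDTree(proj_instances)
    matches = {}                                    # (row, col) -> instance index
    rows, cols = h_img.shape
    for r in range(radius, rows - radius):
        for c in range(radius, cols - radius):
            patch = np.concatenate([
                h_img[r - radius:r + radius + 1, c - radius:c + radius + 1].ravel(),
                v_img[r - radius:r + radius + 1, c - radius:c + radius + 1].ravel(),
            ])
            patch = patch - patch.mean()            # crude normalization (assumption)
            dist, idx = tree.query(patch @ filters.T)
            if dist < t:                            # t controls specificity
                matches[(r, c)] = int(idx)
    return matches
```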
Quantification of the Shape of the Bright Lesions
After detection of matching pixels, using the compact filter set, the pixels were clustered by assigning a label to all connected centers of detected templates, in order to prevent the detection of confluent drusen as a single, elongated lesion. Lesions were then reconstructed using the nearest-neighbor template (of 1360) for the pixels in each cluster, and its shape was quantified by applying standard deviational ellipses to the reconstructed lesion: (1) The main direction was identified; (2) the SD of the pixel intensities along the main axis and along the axis orthogonal to the main axis were computed; and (3) the flatness, defined as the ratio between these two standard deviations, was recorded. 32 A 5-bin histogram was then created from the flatness values of all reconstructed lesions in an image, to estimate the distribution of the flatness of all lesions in an image. 
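The flatness computation and the 5-bin histogram could be sketched as follows, using an intensity-weighted coordinate covariance as a stand-in for the standard deviational ellipse of reference 32; the clustering and reconstruction steps are omitted, and the bin edges are those of Table 1.

```python
# Sketch: flatness of one reconstructed lesion (ratio of the SDs along and
# across its main direction) and the normalized 5-bin flatness histogram.
import numpy as np

def flatness(lesion):
    """lesion: 2-D array of reconstructed template intensities (zero outside)."""
    rr, cc = np.nonzero(lesion > 0)
    weights = lesion[rr, cc]
    coords = np.vstack([rr, cc]).astype(float)
    cov = np.cov(coords, aweights=weights)            # intensity-weighted covariance
    eig = np.sort(np.linalg.eigvalsh(cov))            # ascending eigenvalues
    return float(np.sqrt((eig[1] + 1e-12) / (eig[0] + 1e-12)))  # >= 1

def flatness_histogram(lesions):
    """Fraction of lesions falling in the flatness bins of Table 1."""
    bins = [1.0, 1.125, 1.25, 1.5, 1.75, np.inf]
    values = [flatness(lesion) for lesion in lesions]
    counts, _ = np.histogram(values, bins=bins)
    return counts / max(len(values), 1)
```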
Outcome Parameter
A signed flatness value was obtained by projecting the location of each image in the five-dimensional flatness histogram space onto the discriminant direction, as found by linear discriminant analysis: the normal vector to the hyperplane that best separates SD and AMD. 
A positive flatness value indicates that the subject has lesions predominantly similar to the model in equation 1, or round drusen, whereas a negative flatness value indicates that the subject has lesions predominantly similar to the model in equation 2, or flecks, because the lesions are more flat. We created a receiver-operating characteristic (ROC) curve of the performance of the flatness value to predict whether the patient has AMD or SD, based on the fundus images, by varying the threshold on the flatness value. 
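A sketch of the outcome parameter follows, using scikit-learn's linear discriminant analysis and ROC utilities in place of the authors' implementation; the sign of the projected value is arbitrary in this sketch and may need flipping so that AMD maps to positive values.

```python
# Sketch: project each image's 5-bin flatness histogram onto the linear
# discriminant separating AMD from SD, then sweep a threshold on that signed
# value to obtain the ROC curve and its area (AUC).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import roc_auc_score, roc_curve

def signed_flatness_scores(histograms, labels):
    """histograms: (n_images, 5) array; labels: 1 for AMD, 0 for SD."""
    lda = LinearDiscriminantAnalysis(n_components=1).fit(histograms, labels)
    scores = lda.transform(histograms).ravel()      # signed flatness value per image
    fpr, tpr, _ = roc_curve(labels, scores)
    return scores, fpr, tpr, roc_auc_score(labels, scores)
```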
Implementation
The system was implemented in C++ and run on a 2.4-GHz Windows XP laptop with 4 GB of RAM. The time required to quantify the shape of the bright lesions in each image was approximately 12 seconds. 
Results
Fifteen AMD and 15 molecularly confirmed SD subjects were studied. Examples of AMD and SD images, with automatically detected lesions and segmentation results, are shown in Figure 1. The mean and SD of the five flatness-histogram bins, computed for both the AMD and the SD images, are given in Table 1. There are significant differences at both extremes of the flatness value. A ranking of the 30 images of the 30 subjects by flatness value (from the most likely to contain drusen to the most likely to contain flecks) is displayed in Figure 3. The image in Figure 3A, with a specificity of 0.20, was the most challenging of all 30 images: it contained two large, irregular, elongated flecks that the algorithm split and detected as several smaller round lesions. The other misclassifications occurred near the AMD/SD classification boundary. An area under the ROC curve of 0.936 was achieved on the 30-image data set (Fig. 4), with a sensitivity of 100% at a specificity of 80%. 
Table 1.
 
Distribution of Bright-Lesion Shapes over the Images of the 15 AMD and 15 Stargardt Patients

Flatness Range   AMD, Mean (SD)    Stargardt, Mean (SD)   z-Test (AMD)   z-Test (Stargardt)
1–1.125          0.5064 (0.0438)   0.4457 (0.0659)        −5.3629        3.5692
1.125–1.25       0.3065 (0.0388)   0.3145 (0.0349)        0.7949         −0.8824
1.25–1.5         0.1477 (0.0298)   0.1813 (0.0361)        4.3656         −3.6084
1.5–1.75         0.0276 (0.0177)   0.0384 (0.0196)        2.3490         −2.1193
1.75–+∞          0.0075 (0.0134)   0.0152 (0.0134)        2.2288         −2.2268
Figure 3.
 
(A, B) The contrast-enhanced green channel of 15 fundus images from 15 AMD patients automatically ranked in decreasing probability of being AMD, as well as a zoom view of a characteristic lesion for each image. For the enlarged images, a small region with fleck lesions was chosen (left, black box). The numbers next to each image are the specificity of the classification of all SD and AMD images, if the threshold was set so that that image was classified as AMD. In other words, the closer the number to 1.0, the easier that image was to classify as AMD. The image with specificity 0.20 (A) is the most challenging of all 30 because it contains two large elongated flecks that are detected as several smaller round lesions.
Figure 4.
 
Receiver operating characteristic curve of the performance of the system in separating AMD from Stargardt images. The x-axis shows the false-positive rate (1 − specificity), and the y-axis shows the sensitivity of AMD detection.
Discussion
The results of this preliminary study show that our automated system for quantification of specific phenotypes is capable of quantifying the drusen and fleck phenotypes in images from AMD and SD patients and can predict whether a patient has AMD or SD from these phenotypes in a single fundus image. Differentiating SD flecks from drusen is not a trivial task for most clinicians, but the subtle phenotypic difference between these lesions was successfully captured by template matching, which characterized the shape of each lesion. Although we expect the main use of computational quantification of phenotypes to be phenotype characterization and aiding the discovery of additional genotypes for AMD and SD, the specific automated system used in this study may also have potential for assisting nonretinal specialists in determining whether a patient has SD or AMD from a single color fundus image. 
Clinicians tend to think of the difference between flecks and drusen in terms of the extremes (i.e., either flecks or drusen). Inspection of Figure 1 in light of our results may help illustrate that, even in SD, many round (drusenlike) lesions occur. It is the distribution (the histogram, or visual Gestalt) of the flatness of the lesions in aggregate, and not of a single lesion, that determines the diagnosis. 
To our knowledge, this is the first study to use a computational approach to characterize fundus phenotypes. Our approach is not limited to quantification of AMD and Stargardt phenotypes but can be used wherever complex phenotypes are suspected to be present in images, as long as the phenotypes are based on the properties of objects in the images. 
We found our approach to be robust to changes in its only parameter, t, and adjusting or fine-tuning this parameter was not necessary for discrimination. We believe this is most likely because we used the histogram of flatness values to classify an entire image: a local error of interpreting a single drusen as a fleck minimally affects the histogram; it is the distribution of the shapes of all lesions in the image that determines its classification as either AMD or SD. This study should be regarded as a proof of concept. A study on a much larger dataset of retinal images is in preparation, and we plan to use our standard machine-learning approaches to refine the value of t more fully.

Simple disease phenotypes, such as visual acuity or time of onset of visual loss, are straightforward to quantify (i.e., express as a number) because they are one-dimensional. So far, however, quantifying complex phenotypes, such as two-dimensional retinal images, or three-dimensional ones, such as retinal OCT, MRI, or CT, has been more challenging. Commonly, such complex phenotypes are quantified through some form of collaborative reading by experts, for example, through the Wisconsin Reading Center. 33 However, computational quantification, as presented in this study, is objective, allows finer characterization, and allows rapid hypothesis generation and testing, because the existence of suspected phenotypes can be confirmed or discarded in a matter of hours. 
In addition, to avoid bias as much as possible, we used filters that were programmatically (through PCA) optimized for detecting the lesion models in equations 1 and 2, instead of selecting them from a predefined filter bank, as we and others have done previously. 25,34 However, bias still exists because of the clinical expert knowledge that was essential both for modeling the drusen and flecks in equations 1 and 2 and for choosing the flatness value, rather than any other property, as the aspect of the lesions to be quantified. We are currently investigating how to extend our approach to allow automated discovery of other, shape-independent lesion aspects. 
In this study, we used only retinal slide images that were subsequently digitized. We do not expect a major difference in performance on native digital images, given our experience with this approach on large sets of digital images. 29,30  
Quantification, discovery, and confirmation of new complex fundus phenotypes are of great interest, because they allow grouping of patients according to phenotype, such as into endophenotypes. This in turn makes discovery of new genotypes for a given phenotype possible and allows mutations within the associated genes to be identified. Other phenotype metrics beyond the scope of this preliminary study, such as the spatial distribution of lesions, can also be quantified with our approach and may be relevant as well. 
In conclusion, we have developed an approach for computational quantification of complex fundus phenotypes in AMD and SD, and the results show that quantitative phenotyping of but a single type of lesion in a single image of a patient is highly reliable in separating AMD from SD. 
Footnotes
 Supported by National Eye Institute Grants R01 EY017066, R01 EY11309, and R01 EY16822; Research to Prevent Blindness, NY; and the Department of Veterans Affairs.
 Disclosure: G. Quellec, None; S.R. Russell, None; T.E. Scheetz, None; E.M. Stone, None; M.D. Abràmoff, None
References
1. Koenekoop RK. The gene for Stargardt disease, ABCA4, is a major retinal gene: a mini-review. Ophthalmic Genet. 2003;24(2):75–80.
2. Walia S, Fishman GA. Natural history of phenotypic changes in Stargardt macular dystrophy. Ophthalmic Genet. 2009;30(2):63–68.
3. Ayyagari R, Zhang K, Hutchinson A. Evaluation of the ELOVL4 gene in patients with age-related macular degeneration. Ophthalmic Genet. 2001;22(4):233–239.
4. Baird PN, Guida E, Chu DT, Vu HT, Guymer RH. The ε2 and ε4 alleles of the apolipoprotein gene are associated with age-related macular degeneration. Invest Ophthalmol Vis Sci. 2004;45:1311–1315.
5. Dewan A, Liu M, Hartman S. HTRA1 promoter polymorphism in wet age-related macular degeneration. Science. 2006;314(5801):989–992.
6. Edwards AO, Ritter R III, Abel KJ, Manning A, Panhuysen C, Farrer LA. Complement factor H polymorphism and age-related macular degeneration. Science. 2005;308(5720):421–424.
7. Fisher SA, Santangelo SL, Weeks DE. Meta-analysis of genome scans of age-related macular degeneration. Hum Mol Genet. 2005;14(15):2257–2264.
8. Hageman GS, Anderson DH, Johnson LV. A common haplotype in the complement regulatory gene factor H (HF1/CFH) predisposes individuals to age-related macular degeneration. Proc Natl Acad Sci U S A. 2005;102(20):7227–7232.
9. Haines JL, Hauser MA, Schmidt S. Complement factor H variant increases the risk of age-related macular degeneration. Science. 2005;308(5720):419–421.
10. Jakobsdottir J, Conley YP, Weeks DE, Mah TS, Ferrell RE, Gorin MB. Susceptibility genes for age-related macular degeneration on chromosome 10q26. Am J Hum Genet. 2006;77(3):389–407.
11. Klein RJ, Zeiss C, Chew EY. Complement factor H polymorphism in age-related macular degeneration. Science. 2005;308(5720):385–389.
12. Rivera A, Fisher SA, Fritsche LG. Hypothetical LOC387715 is a second major susceptibility gene for age-related macular degeneration, contributing independently of complement factor H to disease risk. Hum Mol Genet. 2005;14(21):3227–3236.
13. Schultz DW, Klein ML, Humpert AJ. Analysis of the ARMD1 locus: evidence that a mutation in HEMICENTIN-1 is associated with age-related macular degeneration in a large family. Hum Mol Genet. 2003;12(24):3315–3323.
14. Stone EM, Braun TA, Russell SR. Missense variations in the fibulin 5 gene in association with age-related macular degeneration. N Engl J Med. 2004;351:20–27.
15. Yang Z, Camp NJ, Sun H. A variant of the HTRA1 gene increases susceptibility to age-related macular degeneration. Science. 2006;314(5801):992–993.
16. Zareparsi S, Buraczynska M, Branham KE. Toll-like receptor 4 variant D299G is associated with susceptibility to age-related macular degeneration. Hum Mol Genet. 2005;14(11):1449–1455.
17. Gold B, Merriam JE, Zernant J. Variation in factor B (BF) and complement component 2 (C2) genes is associated with age-related macular degeneration. Nat Genet. 2006;38(4):458–462.
18. Fagerness JA, Maller JB, Neale BM, Reynolds RC, Daly MJ, Seddon JM. Variation near complement factor I is associated with risk of advanced AMD. Eur J Hum Genet. 2009;17(1):100–104.
19. Haddad S, Chen CA, Santangelo SL, Seddon JM. The genetics of age-related macular degeneration: a review of progress to date. Surv Ophthalmol. 2006;51(4):316–363.
20. Maller J, George S, Purcell S. Common variation in three genes, including a noncoding variant in CFH, strongly influences risk of age-related macular degeneration. Nat Genet. 2006;38(9):1055–1059.
21. Maller JB, Fagerness JA, Reynolds RC, Neale BM, Daly MJ, Seddon JM. Variation in complement factor 3 is associated with risk of age-related macular degeneration. Nat Genet. 2007;39(10):1200–1201.
22. Barthes A, Conrath J, Rasigni M, Adel M, Petrakian JP. Mathematical morphology in computerized analysis of angiograms in age-related macular degeneration. Med Phys. 2001;28(12):2410–2419.
23. Rapantzikos K, Zervakis M, Balas K. Detection and segmentation of drusen deposits on human retina: potential in the diagnosis of age-related macular degeneration. Med Image Anal. 2003;7(1):95–108.
24. Smith RT, Chan JK, Nagasaki T. Automated detection of macular drusen using geometric background leveling and threshold selection. Arch Ophthalmol. 2005;123(2):200–206.
25. Niemeijer M, van Ginneken B, Russell SR, Suttorp-Schulten MS, Abramoff MD. Automated detection and differentiation of drusen, exudates, and cotton-wool spots in digital color fundus photographs for diabetic retinopathy diagnosis. Invest Ophthalmol Vis Sci. 2007;48(5):2260–2267.
26. Abramoff MD, Alward WL, Greenlee EC. Automated segmentation of the optic disc from stereo color photographs using physiologically plausible features. Invest Ophthalmol Vis Sci. 2007;48(4):1665–1673.
27. Schindler EI, Nylen EL, Ko AC. Deducing the pathogenic contribution of recessive ABCA4 alleles in an outbred population. Hum Mol Genet. 2010;19:3693–3701.
28. Quellec G, Lamard M, Josselin PM, Cazuguel G, Cochener B, Roux C. Optimal wavelet transform for the detection of microaneurysms in retina photographs. IEEE Trans Med Imaging. 2008;27(9):1230–1241.
29. Abramoff MD, Reinhardt JM, Russell SR. Automated early detection of diabetic retinopathy. Ophthalmology. 2010;117(6):1147–1154.
30. Abramoff MD, Garvin M, Sonka M. Retinal imaging and image analysis. IEEE Rev Biomed Eng. 2010;3:169–208.
31. Arya S, Mount D, Netanyahu N, Silverman R, Wu A. An optimal algorithm for approximate nearest neighbor searching in fixed dimensions. J ACM. 1998;45(6):891–923.
32. Ebdon D. Statistics in Geography. 2nd ed. Oxford, UK; New York: Blackwell Scientific; 1985:232.
33. Seddon JM, Cote J, Page WF, Aggen SH, Neale MC. The US twin study of age-related macular degeneration: relative roles of genetic and environmental influences. Arch Ophthalmol. 2005;123(3):321–327.
34. Fleming AD, Philip S, Goatman KA, Williams GJ, Olson JA, Sharp PF. Automated detection of exudates for diabetic retinopathy screening. Phys Med Biol. 2007;52(24):7385–7396.