Multidisciplinary Ophthalmic Imaging  |   June 2013
Automated Segmentation of Pathological Cavities in Optical Coherence Tomography Scans
Author Affiliations & Notes
  • Matthäus Pilch
    Department of Ophthalmology, Justus Liebig University Gießen, Gießen, Germany
    Department of Informatics, University of Applied Sciences Gießen, Gießen, Germany
  • Knut Stieger
    Department of Ophthalmology, Justus Liebig University Gießen, Gießen, Germany
  • Yaroslava Wenner
    Department of Ophthalmology, Justus Liebig University Gießen, Gießen, Germany
  • Markus N. Preising
    Department of Ophthalmology, Justus Liebig University Gießen, Gießen, Germany
  • Christoph Friedburg
    Department of Ophthalmology, Justus Liebig University Gießen, Gießen, Germany
  • Erdmuthe Meyer zu Bexten
    Department of Informatics, University of Applied Sciences Gießen, Gießen, Germany
  • Birgit Lorenz
    Department of Ophthalmology, Justus Liebig University Gießen, Gießen, Germany
  • Correspondence: Knut Stieger, Department of Ophthalmology, Justus Liebig University, Friedrichstraße 18, 35392 Gießen, Germany; [email protected]
Investigative Ophthalmology & Visual Science June 2013, Vol.54, 4385-4393. doi:https://doi.org/10.1167/iovs.12-11396
Abstract

Purpose.: To develop and evaluate a method for automated segmentation and quantitative analysis of pathological cavities in the retina visualized by spectral-domain optical coherence tomography (SD-OCT) scans.

Methods.: The algorithm is based on the segmentation of the gray-level intensities within a B-scan by a k-means cluster analysis and subsequent classification by a k-nearest neighbor algorithm. Accuracy was evaluated against three clinical experts using 130 bullous cavities identified on eight SD-OCT B-scans of three patients with wet age-related macular degeneration (AMD) and five patients with X-linked retinoschisis, as well as on one volume scan of a patient with X-linked retinoschisis. The algorithm calculated the surface area of the cavities for the B-scans and the volume of all cavities for the volume scan. In order to validate the applicability of the algorithm in clinical use, we analyzed 31 volume scans taken over the course of 4 years for one AMD patient with a serous retinal detachment.

Results.: Discrepancies in area measurements between the segmentation results of the algorithm and the experts were within the range of the area deviations among the experts. Volumes interpolated from the B-scan series of the volume scan were comparable among experts and algorithm (0.249 mm3 for the algorithm, 0.271 mm3 for expert 1, 0.239 mm3 for expert 2, and 0.262 mm3 for expert 3). Volume changes of the serous retinal detachment were quantifiable.

Conclusions.: The segmentation algorithm represents a method for the automated analysis of large numbers of volume scans during routine diagnostics and in clinical trials.

Introduction
The prevalence of neovascular disorders is increasing due to the ageing of society and longer life expectancy. In ophthalmology, one of the main consequences leading to blindness in the industrialized world is age-related macular degeneration (AMD). 1 The advanced form of the disease (i.e., wet AMD) is driven by vascular endothelial growth factor (VEGF) overexpression, which results in changes to the vessel endothelium leading to complications such as edema, hemorrhage, and subfoveal neovascularization. 2 
Current therapeutic management includes repetitive intravitreal injections of anti-VEGF molecules such as bevacizumab, ranibizumab, or aflibercept. 3–5 The frequency of injections required to halt disease progression without inducing pathologies at the same time is still a matter of intense debate. 6,7 One possibility to ensure the highest safety is the PRN (pro re nata) paradigm, in which the decision for an injection depends on the outcome of visual acuity testing and a thorough optical coherence tomography (OCT) investigation of the patient's retina for the presence or absence of retinal fluids. 8 However, manual segmentation of fluids within volume scans of the retina is very time consuming, limiting the evaluation to a relatively small number of scans even in clinical studies. 9 Therefore, a robust automated method to analyze pathological bullous cavities within the retina is essential. 
Retinoschisis is a retinal structural disorder in which the synaptic connections between photoreceptors and bipolar cells disintegrate and retinal layers separate and split, either early in life (X-linked juvenile retinoschisis) or later in life in association with AMD, diabetic macular edema, or other macular diseases. 10,11 Juvenile retinoschisis is the leading cause of juvenile maculopathies in males and leads to a schisis (splitting) of retinal layers. 11 In general, large bullous lesions that can be observed by OCT imaging in childhood disappear and become undetectable in older patients. 12,13 The underlying cause is a hemizygous mutation in the RS1 gene located on the X chromosome. Currently no treatment is available, but recent successful gene therapeutic applications in mouse models of the disease raise hope for a rapid transfer into clinical trials. 10 Therefore, analysis of the volume of the intraretinal cavities in this disorder is needed to characterize the natural history prior to and following treatment. 
During the past 20 years, the OCT technique has evolved into a key element in clinical examinations of retinal disorders, providing large volumes of high-definition data on retinal structures. 14 OCT measures the reflection characteristics of retinal tissue. The transition from time domain–based to spectral domain–based acquisition (TD-OCT versus SD-OCT) has increased the sensitivity, the resolution (from 15 μm axial resolution to 6 μm), and the recording speed (from 400 axial scans/s to 40,000 axial scans/s). 15 
Even though image data on retinal structures are present at high resolution, tools for their analysis in most commercial OCT devices are still rare and often restricted to the segmentation of the entire retina or the nerve fiber layer, but details of their implementation are not available. 
Several groups have developed algorithms for evaluating pathological cavities. 16–18 One approach identifies the presence of the normal macular structure and of macular pathologies centered at the fovea by a machine learning algorithm based on image descriptors. 16 A semi-automated approach for quantitative segmentation was developed by Fernández 17 based on an active contour model (also known as a snake). In this algorithm, an initial contour must be placed near the boundaries of the fluid-filled region by the user, enabling the contour to move iteratively toward the boundaries. The first entirely automated approach, by Wilkins et al., 18 is based on thresholding OCT scans at a fixed gray-level intensity: all pixels with a gray-level intensity below 31 are classified as fluid-filled lesions. 
In this study we present a novel approach for fully automated segmentation and analysis of pathologic cavities that eliminates the interevaluator variability and time expenditure of manual segmentation. We validated the algorithm against three expert graders and applied the method to the volumetric quantification of a serous retinal detachment in one patient with wet AMD over the course of anti-VEGF treatment. 
Methods
OCT Raw Data and Hardware
OCT data were acquired from three patients with AMD and six patients with X-linked retinoschisis using the Spectralis OCT system (Heidelberg Engineering, Heidelberg, Germany) with an axial resolution of 4 μm, a transversal resolution of 14 μm, and an acquisition speed of 40,000 A-scans/s. The volume scan protocol was used for the acquisition of images with sizes of 1024 × 496 pixels and 1536 × 496 pixels. The images were exported as raw data to avoid processing steps by the Spectralis OCT system itself. The algorithms were written in C/C++ for Windows operating systems (Microsoft, Redmond, WA). 
Noise Reduction
OCT images present with different degrees of speckle noise, which needs to be minimized for optimal pattern analysis. 19 Therefore, prior to the segmentation procedure, a speckle noise reduction algorithm was applied as presented by Wong and colleagues. 20 The algorithm is based on a general Bayesian estimation, in which the image is projected into the logarithmic space to estimate the noise-free data using a posterior sampling approach. 
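The published method (Wong et al., ref. 20) is a Bayesian posterior-sampling estimator; as a much simpler illustration of the underlying log-domain idea only (not the published algorithm; the function name and toy data are invented, and Python is used here although the paper's implementation is C/C++): multiplicative speckle becomes approximately additive after a log transform, where an ordinary smoothing filter can attenuate it.

```python
import numpy as np

def log_domain_despeckle(img, kernel=3):
    """Crude speckle suppression: multiplicative noise becomes roughly
    additive in log space, where a mean filter can attenuate it."""
    log_img = np.log1p(img.astype(np.float64))  # log space (log1p handles zeros)
    pad = kernel // 2
    padded = np.pad(log_img, pad, mode="edge")
    out = np.empty_like(log_img)
    h, w = log_img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + kernel, x:x + kernel].mean()
    return np.expm1(out)  # back to intensity space

# toy "B-scan" with one bright speckle spike
noisy = np.array([[10., 200., 10.],
                  [10., 10., 10.],
                  [10., 10., 10.]])
smoothed = log_domain_despeckle(noisy)
```

The spike is strongly attenuated because averaging in log space corresponds to a geometric mean in intensity space, which is less dominated by bright outliers than an arithmetic mean.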
Segmentation of the Retina
To approximate the boundaries of the retina we used an active contour model. 21 The method performs region-based contour detection on the gray-level intensities as an energy minimization model, which makes a gradient-based detector for edge propagation dispensable. The parameters of the active contour model were determined empirically based on different degrees of pathological cavities and different image qualities, including B-scans with a smaller number of averaged scans. The algorithm starts with an initial contour defined at a distance of 10 pixels from the B-scan borders. The contour moves iteratively toward the retina and fits the top boundary. Due to the similar reflection gray-level intensities of the choroid and retina, the bottom retinal boundary is difficult to delineate accurately. Based on the initial approximation, the bottom boundary is set to the maximum pixel intensity within a window of 40 pixels in the axial direction of each A-scan. Irregularities due to blood vessel shadowing effects within the retina are detected and eliminated automatically based on the shadowgraph of the gray-level centers of each A-scan. 22 Finally, the retinal boundaries are smoothed by a mean filter to remove small irregularities. 
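The bottom-boundary refinement described above (brightest pixel within an axial search window per A-scan, followed by mean-filter smoothing) can be sketched as follows. This is an illustrative Python sketch with an invented toy B-scan; the window size is reduced from the paper's 40 pixels to fit the toy data.

```python
import numpy as np

def refine_bottom_boundary(bscan, initial_depths, window=5):
    """For each A-scan (column), place the bottom retinal boundary at the
    brightest pixel within an axial search window below the initial
    approximation, then smooth the result with a mean filter."""
    width = bscan.shape[1]
    depths = np.empty(width, dtype=int)
    for x in range(width):
        z0 = initial_depths[x]
        z1 = min(z0 + window, bscan.shape[0])
        depths[x] = z0 + np.argmax(bscan[z0:z1, x])
    # mean filter (width 3) to remove small irregularities
    smoothed = np.convolve(depths, np.ones(3) / 3, mode="same")
    smoothed[0], smoothed[-1] = depths[0], depths[-1]  # keep edge columns
    return depths, smoothed

# toy 10x4 B-scan with a bright band around row 6
bscan = np.zeros((10, 4))
bscan[6, :] = 1.0
bscan[7, 2] = 2.0  # brighter outlier in one column
init = np.array([4, 4, 4, 4])
raw, smooth = refine_bottom_boundary(bscan, init)
```

The raw boundary follows the outlier in column 2, and the mean filter pulls it back toward its neighbors, mirroring the paper's smoothing step.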
k-Means Cluster Analysis
A histogram represents the frequency distribution of numeric data. Applied to a B-scan I with width X and height Z, the frequency of each gray-level intensity is determined. The histogram is defined by the function h(g), which indicates the frequency of each gray-level intensity g ∈ [gmin, gmax] of I, i.e., h(g) = |{(x, z) : I(x, z) = g}|. 
Cluster analysis methods are used to group large amounts of data into a set of objects, the so-called clusters. The aim of cluster analysis is to determine the best partition of the data into k clusters. The objects should be grouped by the criterion that the homogeneity within a cluster is as high as possible; that is, pixels that are similar to each other should be grouped into the same cluster. 
A quality criterion for evaluating a partition C1, C2, …, Ck is the square error criterion:

E = Σi=1..k Σx∈Ci d(x, mi)²

The mean vector mi of cluster Ci is called the cluster center, and d(x, mi)² denotes the squared Euclidean distance. An optimal partition into k clusters is reached when the square error criterion is minimal. 
The k-means method is an effective iterative partitioning algorithm for analyzing large amounts of data to achieve an optimal partition with respect to the square error criterion E. 23 
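A minimal sketch of k-means over gray-level intensities, weighting each intensity by its histogram frequency as described above. This is an illustration of the idea, not the authors' C/C++ implementation; the function name and toy image are invented.

```python
import numpy as np

def kmeans_histogram(image, k, iters=50):
    """Cluster gray-level intensities with k-means, using the histogram
    so each intensity value is weighted by its pixel frequency."""
    levels = np.arange(256, dtype=np.float64)
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    # spread initial centers evenly over the occupied intensity range
    occupied = levels[hist > 0]
    centers = np.linspace(occupied.min(), occupied.max(), k)
    for _ in range(iters):
        # assign each gray level to its nearest center
        labels = np.argmin(np.abs(levels[:, None] - centers[None, :]), axis=1)
        new_centers = centers.copy()
        for i in range(k):
            w = hist[labels == i]
            if w.sum() > 0:  # weighted mean of member intensities
                new_centers[i] = np.average(levels[labels == i], weights=w)
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels[image]  # per-pixel cluster label map

# toy "B-scan": dark background (fluid-like) and bright tissue
img = np.array([[10, 12, 200], [11, 210, 205]], dtype=np.uint8)
centers, label_map = kmeans_histogram(img, k=2)
```

Because the feature space is the 256-bin histogram rather than the full pixel array, each iteration is cheap regardless of image size, which matches the paper's emphasis on computational efficiency.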
k-Nearest Neighbor Classification
Classification algorithms group objects into classes based on preclassified sample data. Within a preclassified sample, each image object is defined by a feature vector x 1,…,x s ∈ ℜ n , and each feature vector is assigned to an object class ω manually. The classification procedure of the k-nearest neighbor (k-NN) algorithm is based on a majority decision taking into account a specific number of nearest neighbors. To distinguish it from the number of cluster centers k in the k-means clustering algorithm, the number of nearest neighbors is denoted by the parameter p. The classification of a new object x new can be described in pseudo-code as follows: 
  1. Determine the p nearest objects x i1,…,x ip to x new in the sample.
  2. Classify x new by the majority decision over the object classes of x i1,…,x ip.
The distances are determined by the Euclidean distance. 
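The two pseudo-code steps above can be sketched directly in Python (an illustrative sketch with invented toy data, not the paper's C/C++ code):

```python
import numpy as np

def knn_classify(samples, classes, x_new, p=5):
    """Classify x_new by the majority vote of its p nearest neighbors
    (Euclidean distance) within the preclassified sample."""
    dists = np.linalg.norm(samples - x_new, axis=1)   # step 1: distances
    nearest = np.argsort(dists)[:p]                   # step 1: p nearest
    votes = classes[nearest]                          # step 2: their classes
    labels, counts = np.unique(votes, return_counts=True)
    return labels[np.argmax(counts)]                  # step 2: majority

# toy preclassified sample: two classes in a 2-D feature space
samples = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                    [2.0, 2.0], [2.1, 1.9]])
classes = np.array([0, 0, 0, 1, 1])
label_a = knn_classify(samples, classes, np.array([0.1, 0.1]), p=3)
label_b = knn_classify(samples, classes, np.array([2.0, 2.0]), p=3)
```

With p = 3, a query near the first cluster is assigned class 0 and a query near the second cluster is assigned class 1, as the majority decision dictates.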
Evaluation
The algorithm was evaluated for accuracy against three expert readers. 
The surface areas in the B-scans and the volumes of the segmented contours in the C-scan were compared. The area with the corresponding scaling factors was calculated by Equation 1 and the volume by Equation 2:

A = n · sx · sz   (1)

V = Σi Ai · sy   (2)

where n is the number of pixels within a segmented contour; sx, the scaling in the x-direction; sz, the scaling in the z-direction; and sy, the scaling in the y-direction (the distance between adjacent B-scans). 
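Assuming Equation 1 computes the contour area as the pixel count times the lateral and axial scaling factors, and Equation 2 sums the per-B-scan areas times the B-scan spacing (a reconstruction from the listed symbols, since the equations were not reproduced in the source), a sketch:

```python
def contour_area(n_pixels, sx, sz):
    """Assumed form of Equation 1: area = pixel count x lateral x axial scaling."""
    return n_pixels * sx * sz

def cavity_volume(pixel_counts, sx, sz, sy):
    """Assumed form of Equation 2: sum of per-B-scan areas times spacing sy."""
    return sum(contour_area(n, sx, sz) for n in pixel_counts) * sy

# Spectralis-like scaling: 14 um transversal, 4 um axial, 237 um B-scan spacing
area_um2 = contour_area(1000, sx=14.0, sz=4.0)
volume_um3 = cavity_volume([1000, 1200, 900], 14.0, 4.0, 237.0)
```

For example, a contour of 1000 pixels corresponds to 56,000 μm², and three such contours across adjacent B-scans interpolate to a volume in μm³.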
Results
Following initial preprocessing, the segmentation of pathologic cavities in a given B-scan was performed in three steps (Fig. 1). First, the retinal boundaries were segmented using an active contour model. Second, the segmentation of the respective cavities was performed based on a k-means cluster analysis of the gray-level intensities. Third, the obtained segmentations were classified with a k-NN approach to reject false-positive (FP) segmentation results. 
Figure 1
 
Summarized algorithm flow. The segmentation algorithm consists of three major processing steps in the solid workflow. First, the locations of the retina boundaries are determined based on an active contour model. Second, pathological cavities are segmented by a cluster analysis of the gray-level intensity histogram. Last, FP segmentations are rejected by classification based on a preclassified sample. The sample is created and saved in the dotted workflow. Based on the segmentations, features were manually extracted and assigned into classes.
Segmentation of Pathologic Cavities by a k-Means Cluster Analysis
Applied to the procedure of image segmentation, cluster analysis methods group pixels into objects that have similar features. The segmentation of fluid-filled regions and ruptures is based on the gray-level intensity histogram of the B-scan using the k-means clustering algorithm. 23 The histogram represents the feature space for the clustering algorithm. Pixels with the same gray-level intensity are assigned to the same cluster but need not be spatially connected in the image. 
In Figures 2A–2C the clustering by the k-means method is illustrated in an example for one iteration step. Based on the initial position of four cluster centers, the objects (crosses) were assigned to the cluster with the minimal distance to a cluster center (Figs. 2A, 2B). Depending on the assigned objects, the cluster centers were recalculated, and based on the recalculated cluster centers, a re-assignment of the set was performed (Fig. 2C). The assignment of objects differed within the blue and brown clusters. 
Figure 2
 
Illustration of the k-means clustering algorithm and segmentation results for different numbers of cluster centers k. (A) Initial positions of the cluster centers (colored circles) defined with the same distances to the corners. (B) Objects are assigned to the cluster centers with minimal distance. Assigned objects were labeled according to their cluster. (C) The cluster centers are recalculated and the objects are re-assigned to the new centers with minimal distance. The assignment of the blue and brown objects differs from the previous step (marked by arrows). (D) Original B-scan that is clustered based on the gray-level intensity histogram. Clustering with the k-means algorithm depends largely on the number of cluster centers k. (E) A low number of cluster centers (k = 2) separates the bright retinal gray-level intensities with high reflectivity from the dark gray-level intensities with low reflectivity. (F) A medium number of clusters (k = 6) separates more structures, and the segmentation of pathological cavities is visually satisfactory. (G) A high number of clusters (k = 20) oversegments retinal structures.
Cluster analysis results obtained by the iterative k-means algorithm depend in large part on the number of cluster centers k, which must be defined prior to the calculation. However, a definition of a “correct” or “incorrect” clustering is difficult to find. Any partitioning may reflect the structure of the underlying data, and depending on the context the correctness of a clustering must be defined by the operator. In Figures 2D–2G the segmentation by the k-means algorithm was processed for different numbers of cluster centers k based on an SD-OCT scan from the Spectralis OCT device. With k = 2 cluster centers, the bright retinal gray-level intensities with high reflectivity were separated from the dark intensities with low reflectivity (Fig. 2E). With increasing numbers of cluster centers (i.e., k = 6; Fig. 2F), more structures were separated. If the value for k was too high, an oversegmentation resulted and individual structures that belonged together were segmented separately (Fig. 2G). A possible way to determine the optimal number of cluster centers k is the elbow criterion, in which the clustering is processed with successively increasing numbers of cluster centers and validated by an error function. 23 In such test scenarios the elbow criterion indicated an optimal number of cluster centers of k = 6 (Fig. 2F). 
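The elbow criterion can be sketched by running k-means for increasing k and watching the square error criterion E(k) flatten. This is a toy 1-D illustration with invented data and a deliberately simple k-means, not the paper's implementation.

```python
import numpy as np

def sse(data, centers):
    """Square error criterion E: sum of squared distances to nearest center."""
    d = np.abs(data[:, None] - centers[None, :])
    return float((d.min(axis=1) ** 2).sum())

def kmeans_1d(data, k, iters=100):
    """Plain 1-D k-means with evenly spaced initial centers."""
    centers = np.linspace(data.min(), data.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(data[:, None] - centers[None, :]), axis=1)
        for i in range(k):
            if np.any(labels == i):
                centers[i] = data[labels == i].mean()
    return centers

# two well-separated intensity groups -> the error collapses at k = 2
data = np.array([10., 11., 12., 200., 201., 202.])
errors = {k: sse(data, kmeans_1d(data, k)) for k in (1, 2, 3)}
```

E(1) is huge, E(2) is tiny, and E(3) brings essentially no further improvement, so the "elbow" sits at k = 2 for this toy data.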
Classification of False-Positive Segmentations
Two approaches were developed to classify FP segmentations. To detect the shadowing effects under retinal blood vessels in the lateral direction, a shadowgraph formed by the gray-level centers of each A-scan was used. 22 All segmentations within the cross-section of the shadowing effects were removed by thresholding. 
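Vessel shadows attenuate the whole A-scan beneath the vessel, so a simple proxy for the shadowgraph idea is the column-wise intensity sum, thresholded to flag shadowed columns. This is a rough sketch of the concept; the cited method (ref. 22) uses the gray-level centers of each A-scan, and the function, threshold, and toy B-scan here are invented.

```python
import numpy as np

def shadowed_columns(bscan, rel_threshold=0.6):
    """Flag A-scans (columns) whose total reflectivity drops well below
    the median column, as happens under retinal blood vessels."""
    column_sum = bscan.sum(axis=0)
    return column_sum < rel_threshold * np.median(column_sum)

# toy B-scan: column 2 darkened by a vessel shadow
bscan = np.ones((6, 5))
bscan[:, 2] = 0.1
mask = shadowed_columns(bscan)
```

Segmentations that fall inside a flagged column's cross-section would then be rejected by thresholding, as described above.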
FP segmentations within the ONL were rejected by the supervised k-NN classification algorithm based on preclassified sample data. In Figure 3, the k-NN algorithm was visualized for preclassified training data defined by two features and an assignment of each object to one of two classes (i.e., squares and circles in Fig. 3A). To determine the class of the new object (i.e., cross in Fig. 3B), a majority decision based on the distances from the five nearest neighbors was performed (Fig. 3B). The majority of the nearest neighbors represented the class the new object was assigned to (squares n = 3 objects). 
Figure 3
 
Illustration of supervised classification with the k-NN algorithm. (A) The preclassified training sample consists of two features (x- and y-axis) and two classes (squares and circles). (B) A new object (cross) is classified based on the sample by a majority decision of the five nearest neighbors. The majority decision decides for the squares with three objects, and thus the new object is assigned to the square class.
The feature vector for the classification of pathologic cavities is defined by the form and the minimal distance of the segmented contours to the bottom retinal boundary. The form of the contour is defined by Hu moments, and the distance to the boundary is calculated in micrometers because of different scaling factors of the acquired scans. Hu moments define rotation-, translation-, and scaling-invariant features for describing the form of image objects. 24 A total number of 1337 cavities were classified manually; 1045 of these were assigned as positive segmentations and 292 as negative (data not shown). 
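The first two Hu moments can be computed from normalized central moments of a binary region, as sketched below; the full descriptor uses all seven invariants (ref. 24), and this Python sketch with an invented toy shape is only an illustration of the invariance property, not the paper's feature extraction code.

```python
import numpy as np

def hu_first_two(mask):
    """First two Hu invariant moments of a binary region
    (translation-, scale-, and rotation-invariant shape descriptors)."""
    ys, xs = np.nonzero(mask)
    m00 = len(xs)                      # zeroth raw moment (area in pixels)
    xc, yc = xs.mean(), ys.mean()      # centroid -> translation invariance
    # second-order central moments
    mu20 = ((xs - xc) ** 2).sum()
    mu02 = ((ys - yc) ** 2).sum()
    mu11 = ((xs - xc) * (ys - yc)).sum()
    # normalized central moments (eta_pq = mu_pq / m00^2 for p+q = 2)
    eta20, eta02, eta11 = (mu / m00 ** 2 for mu in (mu20, mu02, mu11))
    h1 = eta20 + eta02
    h2 = (eta20 - eta02) ** 2 + 4 * eta11 ** 2
    return h1, h2

# a shape and its 90-degree rotation yield identical invariants
rect = np.zeros((10, 10), dtype=bool)
rect[2:4, 1:9] = True          # 2 x 8 rectangle
rect_rot = rect.T              # the same rectangle rotated by 90 degrees
h1a, h2a = hu_first_two(rect)
h1b, h2b = hu_first_two(rect_rot)
```

Rotating the rectangle leaves both invariants unchanged, which is why such features suit contours whose orientation varies between scans.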
Validation of the Algorithm on Area Segmentations of B-Scans
The proposed algorithm was validated against three expert graders. Figure 4 shows the segmentation results for three B-scans by the algorithm and expert 1 (Fig. 4B, blue and green lines) and by experts 2 and 3 (Fig. 4C, red and cyan lines). The average processing time for the three B-scans with 24 pathologic cavities was approximately 11 seconds for the algorithm. The experts needed at least 4 minutes per B-scan for the segmentations. Deviations in the segmentation results were seen either with small cavities or at the borders of larger cavities when the surrounding tissue reflectivity was not clearly discernible. Nevertheless, the deviations at these small spots within the B-scan were minor and did not cause large deviations in area or volume calculations. 
Figure 4
 
Evaluation of the automated segmentation results by manual segmentations of three clinical experts with respect to visual accuracy. (A) B-scans segmented by the algorithm and the experts. (B) Comparison of segmentations by the algorithm and expert 1. The contours set by the algorithm are visualized with blue lines and contours set by expert 1 with green lines. (C) Comparison of segmentations by expert 2 and expert 3. The contours set by expert 2 are visualized with red lines and contours set by expert 3 with cyan lines.
The area deviations between the automated algorithm and the manual segmentations of the experts for all 130 segmented cavities are visualized in a Bland-Altman plot in Figure 5. 25 With increasing area the disagreement increased as well, with larger segmentations more likely to exceed the 2-SD limits. The smallest deviation between the algorithm and the experts was a mean area deviation of 769 ± 5366 μm2 (2 SD) for expert 2, and the largest was a mean area deviation of 2250 ± 7755 μm2 (2 SD) for expert 3. Likewise, the smallest deviation among the experts was between expert 1 and expert 2, with a mean area deviation of 686 ± 5313 μm2 (2 SD), while the largest was between expert 2 and expert 3, with a mean area deviation of −1473 ± 5606 μm2 (2 SD). 
Figure 5
 
Evaluation of the area deviations between the algorithm and the three experts and the deviations among the experts, represented by separate Bland-Altman plots. The smallest area deviation between the algorithm and the experts occurred for expert 2 and the largest deviation for expert 3. Likewise, the smallest deviation among the experts was between expert 1 and expert 2, while the largest deviation was between expert 2 and expert 3. All experts tended to overestimate the area in comparison with the algorithm.
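The Bland-Altman summary used above (mean difference with ± 2 SD limits of agreement) reduces to a few lines; this sketch uses invented toy measurements, not the study's data.

```python
import numpy as np

def bland_altman(a, b):
    """Mean difference (bias) and 2-SD limits of agreement between two
    raters' paired measurements, as summarized in a Bland-Altman plot."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    mean_diff = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return mean_diff, (mean_diff - 2 * sd, mean_diff + 2 * sd)

# toy paired area measurements (um^2) from an algorithm and one grader
algo   = [100., 220., 310., 405., 515.]
expert = [110., 210., 320., 400., 520.]
bias, (lo, hi) = bland_altman(algo, expert)
```

Points falling outside [lo, hi] are the disagreements that, in the study, became more frequent for larger segmented areas.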
Validation of the Algorithm on a Volume Scan
Volume data obtained after the segmentation of a C-scan by the algorithm and the three experts are shown in Figure 6. The C-scan comprised 19 B-scans with a distance of 237 μm between the B-scans. The minimal volume of the pathologic cavities was 0.239 mm3 for the segmentations of expert 2 (Fig. 6B, red line) and the maximal volume was 0.271 mm3 for the segmentations of expert 3 (Fig. 6, cyan line). The volume obtained by the algorithm was well within the range of the minimal and maximal volumes (0.249 mm3) (Fig. 6, blue line; expert 1, green line). Small deviations in the segmentation results were present at small cavities in the outer nuclear layer (ONL). However, these deviations caused only minimal volume changes. 
Figure 6
 
Evaluation of the segmented volumes within a C-scan for the automated algorithm and the three experts. (A) Illustration of several B-scans of the entire C-scan stack projected to their positions on the infrared fundus image. (B) One segmented B-scan to evaluate the visual accuracy. The segmentations of the automated algorithm are visualized with blue lines, of expert 1 with green lines, of expert 2 with red lines, and of expert 3 with cyan lines. (C) Volume data segmentation of the entire C-scan stack. The volume calculated by the automated algorithm lies within the volumes calculated from the contours set by expert 2 and expert 1.
Intra-Evaluator Variability
The intra-evaluator variability for the three experts was evaluated by segmenting the same three B-scans at three different time points within 14 days and computing the intraclass correlation coefficient (ICC). 26 The ICC was 0.93 for expert 1, 0.98 for expert 2, and 0.97 for expert 3. In contrast, the ICC for the algorithm was always 1. 
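One common form of the ICC for repeated measurements is the one-way random-effects ICC(1,1); the paper cites reference 26 without specifying the variant, so this Python sketch (with invented toy data) is one plausible interpretation rather than the study's exact computation.

```python
import numpy as np

def icc_oneway(measurements):
    """One-way random-effects ICC(1,1). Rows = targets (B-scans),
    columns = repeated segmentation sessions of the same rater."""
    m = np.asarray(measurements, float)
    n, k = m.shape
    grand = m.mean()
    # mean square between targets
    msb = k * ((m.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # mean square within targets (session-to-session variability)
    msw = ((m - m.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# perfectly repeatable measurements -> ICC of 1 (like the algorithm)
perfect = [[100., 100., 100.], [250., 250., 250.], [400., 400., 400.]]
noisy = [[100., 102., 98.], [250., 255., 245.], [400., 398., 402.]]
icc_perfect = icc_oneway(perfect)
icc_noisy = icc_oneway(noisy)
```

A rater who reproduces the same areas exactly yields an ICC of 1, whereas small session-to-session differences pull the ICC slightly below 1, mirroring the experts' values of 0.93 to 0.98.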
Depiction of Volume Changes in a Patient With a Serous Retinal Detachment During Anti-VEGF Therapy
To demonstrate the potential practical use of the algorithm in the management of vascular retinal diseases, an automated evaluation of the therapeutic effect of the anti-VEGF drug ranibizumab, in terms of volume changes of the bullous cavities, was carried out for one eye of a patient with a serous retinal detachment associated with wet AMD (Fig. 7). The therapy was analyzed over a 4-year period, during which 18 intravitreal injections were administered to the patient's eye (vertical arrows in Fig. 7). During this period, 31 OCT scans were acquired with the Spectralis OCT device using the volume scan protocol (19 B-scans with approximately 243-μm spacing), and all volumes were segmented automatically. It can be clearly seen that after each injection the volume decreased, but regrowth occurred after only a few weeks. As a retrospective study, the analysis of the 31 OCT C-scans was completed automatically within 1 hour. Similar evaluations in larger patient cohorts could be performed in a reasonable amount of time. 
Figure 7
 
Automated analysis of volume changes within pathological cavities during anti-VEGF therapy. (A) All C-scan stacks were acquired with the volume scan patterns with a total number of 19 B-scans with approximately 243-μm distance in between and centered on the fovea. (B) Automated segmentation example of the B-scan at the position of the green arrow on the fundus infrared image (blue line) (A). (C) Automated volume segmentations. The therapeutic effect was analyzed over a time period of 4 years with 18 intravitreal ranibizumab injections. Each vertical arrow marks the administration. The plot visualizes the automatically segmented volumes of the 31 OCT measurements acquired over the therapy period.
Discussion
In this study, we present for the first time an algorithm for automated segmentation of pathologic cavities within the retina based on a k-means cluster analysis. The aim was to develop and evaluate a fast and reliable segmentation method for clinical trials and routine clinical settings. The algorithm comprises three major steps to segment pathologic cavities: (i) isolation of the retinal area within the B-scan, (ii) k-means cluster analysis, and (iii) k-NN classification. In contrast to previously published algorithms, no user interaction was required for the approach presented here. 17  
In our algorithm, the cluster analysis is based on the gray-level intensities of the OCT B-scan. A number of FP segmentations may result. Often, FP objects occur under retinal blood vessels, as a shadowing effect on the underlying structures, and within the ONL with its weak reflectivity. FP segmentations were also described by another group, who rejected them when the segmented area was smaller than 7 pixels or when the gray-level intensity distribution within the segmented region was insufficiently uniform (standard deviation greater than 45). 18 To make our algorithm scaling invariant, we used other features: in our study the classification of the segmented objects was based on the form and the minimal distance to the bottom retinal boundary. 
The segmentation of pathological cavities within the retina is based on a cluster analysis by the k-means algorithm. Clustering results depend on the number of cluster centers k, which has to be defined before the calculation. The number of clusters was evaluated by an approach presented by Duda et al., 23 which evaluates clustering results by repeating the calculation with successively increasing k. Other methods are statistically motivated significance tests, including the F-ratio or the likelihood-ratio test. 27,28 These tests validate clustering partitions with different numbers of clusters, in which two partitions are assessed against each other for the “significantly better” clustering of the data. Furthermore, dynamic and nonparametric clustering algorithms that do not fix the number of clusters have been developed (e.g., the Chinese restaurant process or the hierarchical Dirichlet process). 29,30 However, the k-means clustering algorithm is a valuable and robust clustering method that has been widely used in medical image pattern analysis as well as in applications in molecular biology. 31–33 
In addition to k-means clustering, k-NN classification was used to increase the specificity of the results. Compared with other classification methods, the k-NN is a robust classifier of low complexity, and an important aim of the project was computational efficiency and robustness of the results for practical use. In line with this, the k-NN algorithm combines straightforward implementation with fast computation of feature-based classification, and, as shown in the validation of the algorithm, pathological cavities were identified with high accuracy and successfully delineated from the surrounding tissue. 
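The k-NN rejection step can be sketched in a few lines: each segmented object is described by a feature vector and classified by a majority vote of its k nearest preclassified training samples. The Euclidean metric and the function name `knn_classify` are assumptions for illustration.

```python
import numpy as np

def knn_classify(train_features, train_labels, query, k=5):
    """Assign the query the majority label of its k nearest
    training samples (Euclidean distance)."""
    dists = np.linalg.norm(train_features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    values, counts = np.unique(train_labels[nearest], return_counts=True)
    return values[np.argmax(counts)]
```

With a preclassified sample of true cavities versus FP objects (e.g., shadowing artifacts), each new segmentation would be kept or rejected according to the vote.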
The algorithm presented here was developed and evaluated with data acquired from the Spectralis OCT device, with B-scans averaged over a number of scans to reduce speckle noise and eye movement artifacts. Other OCT devices use single B-scan acquisition with higher speckle noise, which may have a negative impact on the segmentation procedure. We therefore included B-scans averaged over fewer scans in the evaluation set, and the algorithm performed well under these conditions (Fig. 4, bottom row). Furthermore, prior to the segmentation procedure, all B-scans were preprocessed for speckle noise reduction by a general Bayesian estimation, 20 because denoising by a nonlinear complex diffusion filter was shown to increase the robustness of a subsequently applied active contour model approach. 17 Nonetheless, for evaluation purposes, B-scans with a clear differentiation of pathologic cavities from the remaining retinal structures were selected. 
In general, the areas segmented by the experts were slightly larger than those segmented by the algorithm. However, the deviations for expert 1 and expert 2 lay within the range of the area deviations among the three experts. The largest deviation occurred between the algorithm and expert 3; likewise, the largest deviation among the experts occurred between experts 1 and 2 on the one hand and expert 3 on the other. The larger areas segmented by the experts reflect the different detection criteria: the human eye places the boundary at the greatest contrast drop, whereas the algorithm relies on differences in gray-level intensities. Furthermore, the clustering algorithm depends on the a priori definition of the number of cluster centers k, and a finer adjustment based on the clustering of gray-level intensities is hardly feasible: if k is set too small, retinal structures cannot be delineated exactly, and if k is set too high, the structures become oversegmented. 
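The area deviations above are reported as Bland-Altman statistics (Fig. 5). For readers unfamiliar with the method, the bias and 95% limits of agreement for two paired series of area measurements reduce to a short computation; this sketch is generic, not the authors' analysis script.

```python
import numpy as np

def bland_altman(a, b):
    """Bias (mean difference) and 95% limits of agreement for paired
    measurements a and b (e.g., areas segmented by two raters)."""
    diff = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A positive bias for every expert-versus-algorithm pair corresponds to the experts' tendency to segment slightly larger areas.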
Based on a comparison of the segmentations of one C-scan, the volume of 0.249 mm³ produced by the algorithm lies between the minimal volume of 0.239 mm³ segmented by expert 2 and the maximal volume of 0.271 mm³ segmented by expert 1. The deviation between the experts' volume segmentations demonstrates the problem of interevaluator variability. Of course, the volume segmented by the algorithm does not represent the exact and true delineation of the cavity, but neither do the experts' results. In particular, at the borders of large cavities, and in certain pathologies in general, gray-level intensities may not be absolutely null but only significantly lower than in the surrounding tissue, so segmenting the pathological tissue depends largely on the experience of the expert. This will always cause differences among experts and cannot be solved by the algorithm; only histological examination could resolve the question, which is obviously not possible. The advantage of the algorithm's segmentation results is that they will always be identical, offering the possibility to remove interevaluator variability from the set of error sources. 
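The paper does not spell out how the per-B-scan areas are combined into a C-scan volume; a simple slab sum, area per B-scan times the spacing between adjacent B-scans, is one plausible reading and is what this assumed helper computes.

```python
def cavity_volume_mm3(areas_mm2, spacing_mm):
    """Approximate a C-scan cavity volume as a slab sum: the segmented
    cavity area in each B-scan (mm^2) times the distance between
    adjacent B-scans (mm)."""
    return sum(areas_mm2) * spacing_mm
```

For the volume scans described later (19 B-scans roughly 243 μm apart), `spacing_mm` would be about 0.243.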
In addition, time also matters. In the clinical trial of Golbaz et al., 9 only 11 B-scans of a stack of 128 B-scans were analyzed manually because the procedure was very time consuming; here, no preselection of B-scans was necessary, and the entire stack of 19 B-scans was segmented quickly by the automated algorithm. The availability of such algorithms opens a wide range of applications that were not possible until now simply because of time restrictions in clinical routine. 
The depiction of volume changes of a serous retinal detachment in one patient with wet AMD during anti-VEGF therapy demonstrated the potential practical use of the algorithm. Supporting the pro re nata (PRN) treatment paradigm with a detailed OCT analysis adds as much safety as possible to therapeutic decisions. Furthermore, macular thickness information could be correlated with the segmented pathologic cavities, providing new analysis tools for ocular diseases. 
In conclusion, the algorithm presented here for the segmentation of pathologic cavities delivers results comparable to manual segmentations by clinical experts in a fast and reliable manner. It enables the ophthalmologist to analyze B- and C-scans for diagnostics and clinical trials in a fast and reproducible way. 
Acknowledgments
Supported by the von Behring Röntgen Foundation (57-0016). 
The authors alone are responsible for the content and writing of the paper. 
Disclosure: M. Pilch, None; K. Stieger, None; Y. Wenner, None; M.N. Preising, None; C. Friedburg, None; E. Meyer zu Bexten, None; B. Lorenz, None 
References
Resnikoff S Pascolini D Etya'ale D Global data on visual impairment in the year 2002. Bull World Health Organ . 2004; 82: 844–851. [PubMed]
Lim LS Mitchell P Seddon JM Holz FG Wong TY. Age-related macular degeneration. Lancet . 2012; 379: 1728–1738. [CrossRef] [PubMed]
Rosenfeld PJ Brown DM Heier JS for the MARINA Study Group. Ranibizumab for neovascular age-related macular degeneration. N Engl J Med . 2006; 355: 1419–1431. [CrossRef] [PubMed]
CATT Research Group Martin DF Maguire MG Ranibizumab and bevacizumab for neovascular age-related macular degeneration. N Engl J Med . 2011; 364: 1897–1908. [CrossRef] [PubMed]
Dixon JA Oliver SC Olson JL Mandava N. VEGF Trap-Eye for the treatment of neovascular age-related macular degeneration. Expert Opin Investig Drugs . 2009; 18: 1573–1580. [CrossRef] [PubMed]
Keane PA Sadda SR. Development of anti-VEGF therapies for intraocular use: a guide for clinicians. J Ophthalmol . 2012; 2012: 483034. [PubMed]
Clemens CR Krohne TU Charbel Issa P High-resolution optical coherence tomography of subpigment epithelial structures in patients with pigment epithelium detachment secondary to age-related macular degeneration. Br J Ophthalmol . 2012; 96: 1088–1091. [CrossRef] [PubMed]
CATT Research Group Martin DF Maguire MG Ranibizumab and bevacizumab for treatment of neovascular age-related macular degeneration: two-year results. Ophthalmology . 2012; 119: 1388–1398. [CrossRef] [PubMed]
Golbaz I Ahlers C Stock G Quantification of the therapeutic response of intraretinal, subretinal, and subpigment epithelial compartments in exudative AMD during anti-VEGF therapy. Invest Ophthalmol Vis Sci . 2011; 52: 1599–1605. [CrossRef] [PubMed]
Molday RS Kellner U Weber BH. X-linked juvenile retinoschisis: clinical diagnosis, genetic analysis, and molecular mechanisms. Prog Retin Eye Res . 2012; 31: 195–212. [CrossRef] [PubMed]
Sikkink SK Biswas S Parry NR Stanga PE Trump D. X-linked retinoschisis: an update. J Med Genet . 2007; 44: 225–232. [CrossRef] [PubMed]
Wabbels B Demmler A Paunescu K Wegscheider E Preising MN Lorenz B. Fundus autofluorescence in children and teenagers with hereditary retinal diseases. Graefes Arch Clin Exp Ophthalmol . 2006; 244: 36–45. [CrossRef] [PubMed]
Renner AB Kellner U Fiebig B Cropp E Foerster MH Weber BH. ERG variability in X-linked congenital retinoschisis patients with mutations in the RS1 gene and the diagnostic importance of fundus autofluorescence and OCT. Doc Ophthalmol . 2008; 116: 97–109. [CrossRef] [PubMed]
Huang D Swanson EA Lin CP. Optical coherence tomography. Science . 1991; 254: 1178–1181. [CrossRef] [PubMed]
Geitzenauer W Hitzenberger C Schmidt-Erfurth U. Retinal optical coherence tomography: past, present and future perspectives. Br J Ophthalmol . 2010; 95: 171–177. [CrossRef] [PubMed]
Liu YY Chen M Ishikawa H Wollstein G Schuman JS Rehg JM. Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding. Med Image Anal . 2011; 15: 748–759. [CrossRef] [PubMed]
Fernández DC. Delineating fluid-filled region boundaries in optical coherence tomography images of the retina. IEEE Trans Med Imaging . 2005; 24: 929–945. [CrossRef] [PubMed]
Wilkins GR Houghton OM Oldenburg AL. Automated segmentation of intraretinal cystoid fluid in optical coherence tomography. IEEE Trans Biomed Eng . 2012; 59: 1109–1114. [CrossRef]
Schmitt JM Xiang SH Yung KM. Speckle in optical coherence tomography. J Biomed Opt . 1999; 4: 95–105. [CrossRef] [PubMed]
Wong A Mishra A Bizheva K Clausi D. General Bayesian estimation for speckle noise reduction in optical coherence tomography retinal imagery. Opt Express . 2010; 18: 8338–8352. [CrossRef] [PubMed]
Chan TF Vese LA. Active contours without edges. IEEE Trans Image Process . 2001; 10: 266–277. [CrossRef] [PubMed]
Pilch M Wenner Y Strohmayr E Automated segmentation of retinal blood vessels in spectral domain optical coherence tomography scans. Biomed Opt Express . 2012; 3: 1478–1491. [CrossRef] [PubMed]
Duda RO Hart PE Stork DG. Pattern Classification. 2nd ed. New York: John Wiley and Sons; 2001.
Hu M. Visual pattern recognition by moment invariants. IRE Trans Inf Theor . 1962; 8: 179–187.
Bland J Altman D. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet . 1986; 1: 307–310. [CrossRef] [PubMed]
Fleiss H. Statistical Methods for Rates and Proportions. 2nd ed. New York: Wiley; 1981.
Beale EML. Cluster Analysis . London: Scientific Control System; 1969.
Wolfe JH. Pattern clustering by multivariate mixture analysis. Multivar Behav Res . 1970; 5: 329–350. [CrossRef]
Teh YW Jordan MI Beal MJ Blei DM. Hierarchical Dirichlet processes. J Am Stat Assoc . 2006; 101: 1566–1581. [CrossRef]
Socher R Maas A Manning CD. Spectral Chinese restaurant processes: nonparametric clustering based on similarities. J Mach Learn Res Proc Track . 2011; 15: 698–706.
Zortea M Skrovseth SO Godtliebsen F. Automatic learning of spatial patterns for diagnosis of skin lesions. Conf Proc IEEE Eng Med Biol Soc . 2010: 5601–5604.
Doan NT van Rooden S Versluis MJ Combined magnitude and phase-based segmentation of the cerebral cortex in 7T MR images of the elderly. J Magn Reson Imaging . 2012; 36: 99–109. [CrossRef] [PubMed]
Bradbury L Wan JW. A spectral k-means approach to bright field cell image segmentation. Conf Proc IEEE Eng Med Biol Soc . 2010; 2010: 4748–4751. [PubMed]
Footnotes
 MP and KS contributed equally to the work presented here and should therefore be regarded as equivalent authors.
Figure 1
 
Summarized algorithm flow. The segmentation algorithm consists of three major processing steps in the solid workflow. First, the locations of the retina boundaries are determined based on an active contour model. Second, pathological cavities are segmented by a cluster analysis of the gray-level intensity histogram. Last, FP segmentations are rejected by classification based on a preclassified sample. The sample is created and saved in the dotted workflow: based on the segmentations, features were manually extracted and assigned to classes.
Figure 2
 
Illustration of the k-means clustering algorithm and segmentation results for different numbers of cluster centers k. (A) Initial positions of the cluster centers (colored circles) defined with the same distances to the corners. (B) Objects are assigned to the cluster centers with minimal distance and labeled according to their cluster. (C) The cluster centers are recalculated and the objects are reassigned to the new centers with minimal distance. The assignment of the blue and brown objects differs from the previous step (marked by arrows). (D) Original B-scan that is clustered based on the gray-level intensity histogram. Clustering with the k-means algorithm depends largely on the number of cluster centers k. (E) A low number of cluster centers (k = 2) separates the bright retinal gray-level intensities with high reflectivity from the dark gray-level intensities with low reflectivity. (F) A medium number of clusters (k = 6) separates more structures, and the segmentation of pathological cavities satisfies visual examination. (G) A high number of clusters (k = 20) oversegments retinal structures.
Figure 3
 
Illustration of supervised classification with the k-NN algorithm. (A) The preclassified training sample consists of two features (x- and y-axis) and two classes (squares and circles). (B) A new object (cross) is classified based on the sample by a majority decision of its five nearest neighbors. Three of the five neighbors are squares, so the new object is assigned to the square class.
Figure 4
 
Evaluation of the automated segmentation results by manual segmentations of three clinical experts with respect to visual accuracy. (A) B-scans segmented by the algorithm and the experts. (B) Comparison of segmentations by the algorithm and expert 1. The contours set by the algorithm are visualized with blue lines and contours set by expert 1 with green lines. (C) Comparison of segmentations by expert 2 and expert 3. The contours set by expert 2 are visualized with red lines and contours set by expert 3 with cyan lines.
Figure 5
 
Evaluation of the area deviations between the algorithm and the three experts and the deviations among the experts, represented by separate Bland-Altman plots. The smallest area deviation between the algorithm and the experts occurred for expert 2 and the highest for expert 3. Likewise, the smallest deviation among the experts was between expert 1 and expert 2, while the highest was between expert 2 and expert 3. All experts tended to overestimate the area in comparison with the algorithm.
Figure 6
 
Evaluation of the segmented volumes within a C-scan for the automated algorithm and the three experts. (A) Illustration of several B-scans of the entire C-scan stack projected to their positions on the infrared fundus image. (B) One segmented B-scan to evaluate the visual accuracy. The segmentations of the automated algorithm are visualized with blue lines, of expert 1 with green lines, of expert 2 with red lines, and of expert 3 with cyan lines. (C) Volume data segmentation of the entire C-scan stack. The volume calculated by the automated algorithm lies within the volumes calculated from the contours set by expert 2 and expert 1.
Figure 7
 
Automated analysis of volume changes within pathological cavities during anti-VEGF therapy. (A) All C-scan stacks were acquired with the volume scan pattern: a total of 19 B-scans, centered on the fovea and spaced approximately 243 μm apart. (B) Automated segmentation example of the B-scan at the position of the green arrow on the fundus infrared image (blue line) in (A). (C) Automated volume segmentations. The therapeutic effect was analyzed over a period of 4 years with 18 intravitreal ranibizumab injections; each vertical arrow marks an administration. The plot visualizes the automatically segmented volumes of the 31 OCT measurements acquired over the therapy period.