**Purpose.**
To develop and evaluate a method for automated segmentation and quantitative analysis of pathological cavities in the retina visualized by spectral-domain optical coherence tomography (SD-OCT) scans.

**Methods.**
The algorithm is based on the segmentation of the gray-level intensities within a B-scan by a k-means cluster analysis and subsequent classification by a k-nearest neighbor algorithm. Accuracy was evaluated against three clinical experts using 130 bullous cavities identified on eight SD-OCT B-scans of three patients with wet age-related macular degeneration (AMD) and five patients with X-linked retinoschisis, as well as on one volume scan of a patient with X-linked retinoschisis. The algorithm calculated the surface area of the cavities for the B-scans and the volume of all cavities for the volume scan. In order to validate the applicability of the algorithm in clinical use, we analyzed 31 volume scans taken over the course of 4 years for one AMD patient with a serous retinal detachment.

**Results.**
Discrepancies in area measurements between the segmentation results of the algorithm and the experts were within the range of the area deviations among the experts. Volumes interpolated from the B-scan series of the volume scan were comparable among experts and algorithm (0.249 mm^{3} for the algorithm, 0.271 mm^{3} for expert 1, 0.239 mm^{3} for expert 2, and 0.262 mm^{3} for expert 3). Volume changes of the serous retinal detachment were quantifiable.

**Conclusions.**
The segmentation algorithm represents a method for the automated analysis of large numbers of volume scans during routine diagnostics and in clinical trials.

The advanced form of the disease (i.e., wet AMD) is driven by vascular endothelial growth factor (VEGF) overexpression, which results in changes to the vessel endothelium leading to complications such as edema, hemorrhage, and subfoveal neovascularization.^{2,3–5} The frequency of injections required to halt disease progression without inducing pathologies at the same time is still a matter of intense debate.^{6,7} One possibility to ensure the highest safety would be the pro re nata (PRN) paradigm, in which the decision for an injection depends on the outcome of visual acuity testing and a thorough optical coherence tomography (OCT) investigation of the patient's retina for the presence or absence of retinal fluids.^{8} Moreover, manual segmentation of fluids within volume scans of the retina is very time consuming, limiting the evaluation to a relatively small number of scans even in clinical studies.^{9} Therefore, a robust automated method to analyze pathological bullous cavities within the retina is essential.^{10,11}

Juvenile retinoschisis is the leading cause of juvenile maculopathies in males and leads to a schisis (splitting) of retinal layers.^{11} In general, large bullous lesions that can be observed by OCT imaging in childhood disappear and become undetectable in older patients.^{12,13} The underlying cause is a hemizygous mutation in the RS1 gene located on the X chromosome. Currently no treatment is available, but recent successful gene therapeutic applications in mouse models of the disease raise hope for a rapid transfer into clinical trials.^{10} Therefore, analysis of the volume of the intraretinal cavities in this disorder is highly needed to characterize the natural history prior to and following treatment.^{14}

OCT measures different reflection characteristics of retinal tissue. The transition from the time domain–based to the spectral domain–based acquisition technique (TD-OCT versus SD-OCT) has increased the sensitivity, the resolution (from 15 μm to 6 μm axial resolution), and the recording speed (from 400 axial scans/s to 40,000 axial scans/s).^{15,16–18} One approach identifies the presence of the normal macular structure and macular pathologies centered at the fovea by a machine learning algorithm based on image descriptors.^{16} A semi-automated approach for quantitative segmentation was developed by Fernández^{17} based on an active contour model (also known as a snake); in this algorithm, an initial contour must be placed near the fluid-filled region boundaries by the user, enabling the algorithm to move iteratively to the boundaries. The first entirely automated approach, by Wilkins et al.,^{18} is based on thresholding of OCT scans at a certain gray-level intensity: all pixels with a gray-level intensity below 31 are classified as fluid-filled lesions.^{19}

Therefore, prior to the segmentation procedure, a speckle noise reduction algorithm was applied as presented by Wong and colleagues.^{20} The algorithm is based on a general Bayesian estimation, in which the image is projected into logarithmic space to estimate the noise-free data using a posterior sampling approach. For the detection of the retinal boundaries, a region-based active contour model was applied.^{21} The method proposes a contour detection on the gray-level intensities as an energy minimization model, which makes the use of a gradient-based detector for edge propagation dispensable. The parameters of the active contour model were determined empirically based on different degrees of pathological cavities and different image qualities, including B-scans with a smaller number of averaged scans. The algorithm starts with an initial contour defined at a distance of 10 pixels from the B-scan borders. The contour moves iteratively toward the retina and fits the top boundary. Due to similar reflection gray-level intensities of the choroid and retina, the bottom retinal boundary is difficult to delineate accurately; based on the initial approximation, the bottom boundary is set to the maximum pixel intensity within a window of 40 pixels in the axial direction of each A-scan. Irregularities due to blood vessel shadowing effects within the retina are detected and eliminated automatically based on the shadowgraph of gray-level centers of each A-scan.^{22} Finally, the retinal boundaries are smoothed by a mean filter to remove small irregularities.
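As an illustration of the bottom-boundary step described above, the following sketch places the boundary at the brightest pixel inside a 40-pixel axial search band of each A-scan. This is a minimal reimplementation, not the authors' code; the function name and the assumption that the band is centered on an initial per-column estimate `z0` are ours.

```python
import numpy as np

def refine_bottom_boundary(bscan: np.ndarray, z0: np.ndarray, window: int = 40) -> np.ndarray:
    """For each A-scan (column) of a (depth x width) B-scan, place the bottom
    retinal boundary at the brightest pixel within a `window`-pixel axial band
    around the initial estimate z0 (one depth index per column)."""
    depth, n_ascans = bscan.shape
    boundary = np.empty(n_ascans, dtype=int)
    for x in range(n_ascans):
        lo = max(0, int(z0[x]) - window)
        hi = min(depth, int(z0[x]) + window)
        # index of the maximum-intensity pixel inside the search band
        boundary[x] = lo + int(np.argmax(bscan[lo:hi, x]))
    return boundary
```

The resulting boundary curve could then be smoothed with a mean filter, as the text describes.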

For an image **I** with width *X* and height *Z*, the frequency of each gray-level intensity is determined. The histogram is defined by the function *h*(*g*), which indicates the frequency of each gray-level intensity *g* ∈ [*g*_{min}, *g*_{max}] of **I**.
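The histogram *h*(*g*) can be computed directly from the pixel data; a minimal sketch for 8-bit B-scans (the bit depth and function name are our assumptions):

```python
import numpy as np

def gray_level_histogram(image: np.ndarray) -> np.ndarray:
    """Return h(g): the frequency of each 8-bit gray level g in [0, 255]."""
    return np.bincount(image.ravel().astype(np.uint8), minlength=256)
```

By construction, the histogram entries sum to the number of pixels, *X* · *Z*.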

The gray-level intensities are then partitioned into *k* clusters. The objects are grouped by the criterion that the homogeneity within a cluster should be as high as possible; thus, pixels that are similar to each other are grouped into the same cluster. The mean **m̄**_{i} of cluster *C*_{i} is called the cluster center, and *d*(**m**, **m̄**_{i}) = (**m** − **m̄**_{i})^{2} denotes the squared Euclidean distance. An optimal partition into *k* clusters is reached with a minimal square error criterion *E*.^{23}
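A minimal, self-contained sketch of k-means on scalar gray levels (Lloyd's algorithm; the initialization details are our assumption, not the authors' implementation):

```python
import numpy as np

def kmeans_1d(values: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Lloyd's algorithm on scalar gray levels: assign each value to the
    nearest cluster center, then recompute centers as cluster means."""
    rng = np.random.default_rng(seed)
    # initialize with k distinct observed gray levels (an assumption)
    centers = rng.choice(np.unique(values.astype(float)), size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for i in range(k):
            if np.any(labels == i):
                centers[i] = values[labels == i].mean()
    return labels, centers
```

For a B-scan, `values = image.ravel()` yields the per-pixel intensities; the criterion *E* is then the summed squared distance of each value to its assigned cluster center.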

For the classification, a set of feature vectors **x**_{1}, …, **x**_{s} ∈ ℜ^{n} is given, and each feature vector is assigned to an object class *ω* manually. The classification procedure of the k-nearest neighbor (k-NN) algorithm is based on a majority decision taking into account a specific number of nearest neighbors. To distinguish it from the number of cluster centers *k* of the k-means clustering algorithm, the number of nearest neighbors is denoted by the parameter *p*. The classification algorithm for a new object **x**_{new} can be described in pseudo-code as follows:
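In place of the original listing, a k-NN majority vote consistent with this description can be sketched as follows; the function name and the choice of Euclidean distance are our assumptions:

```python
import numpy as np
from collections import Counter

def knn_classify(X: np.ndarray, labels: list, x_new: np.ndarray, p: int = 3):
    """Classify x_new by majority vote among its p nearest neighbors
    (Euclidean distance) within the labeled feature vectors X."""
    dists = np.linalg.norm(X - x_new, axis=1)   # distance to every training vector
    nearest = np.argsort(dists)[:p]             # indices of the p closest vectors
    votes = Counter(labels[i] for i in nearest)
    return votes.most_common(1)[0][0]           # majority class
```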

Here, *n* is the number of pixels within a segmented contour; *s*_{x}, the scaling in the *x*-direction; *s*_{z}, the scaling in the *z*-direction; and *s*_{y}, the scaling in the *y*-direction.
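Under the usual reading of these scalings (physical size per pixel), the area of a contour and the volume over a B-scan stack follow directly. The formulas below are our interpretation of that description, not a verbatim reproduction of the paper's equations:

```python
def cavity_area_um2(n: int, s_x: float, s_z: float) -> float:
    """Area of a segmented contour: pixel count times the physical size of
    one pixel (s_x, s_z in micrometers per pixel)."""
    return n * s_x * s_z

def cavity_volume_um3(pixel_counts, s_x: float, s_z: float, s_y: float) -> float:
    """Volume across a B-scan stack: summed per-scan areas times the
    B-scan spacing s_y."""
    return sum(cavity_area_um2(n, s_x, s_z) for n in pixel_counts) * s_y
```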

**Figure 1**


The histogram represents the feature space for the clustering algorithm. Pixels with the same gray-level intensity are assigned to the same cluster but are not necessarily connected in the image.

**Figure 2**


A critical parameter of the k-means algorithm is the number of cluster centers *k*, which must be defined prior to the calculation. However, a definition of a "correct" or "incorrect" clustering is difficult to find: any partitioning may reflect the structure of the underlying data, and depending on the context the correctness of a clustering must be defined by the operator. In Figures 2D–2G the segmentation by the k-means algorithm was processed for different numbers of cluster centers *k* based on an SD-OCT scan from the Spectralis OCT device. With *k* = 2, the bright retinal gray-level intensities with high reflectivity were separated from the dark intensities with low reflectivity (Fig. 2E). With increasing numbers of cluster centers (i.e., *k* = 6; Fig. 2F), more structures were separated. If the value for *k* was too high, an oversegmentation resulted and individual structures that belonged together were segmented separately (Fig. 2G). A possible way to determine the optimal number of cluster centers *k* is the elbow criterion, in which the clustering is processed with successively increased numbers of cluster centers and validated by an error function.^{23} In such test scenarios the elbow criterion showed an optimal value for the number of cluster centers of *k* = 6 (Fig. 2F). All segmentations within the cross-section of shadowing effects were removed by thresholding.^{22}

*n*= 3 objects).

**Figure 3**


A total of 1337 cavities were classified manually; 1045 of these were assigned as positive segmentations and 292 as negative (data not shown).

**Figure 4**


With increasing area, the disagreement increased as well, exceeding the standard deviation more often for larger segmentations. The smallest deviation between the algorithm and the experts was a mean area deviation of 769 ± 5366 μm^{2} (2 SD) for expert 2, and the biggest was a mean area deviation of 2250 ± 7755 μm^{2} (2 SD) for expert 3. Likewise, the smallest deviation among the experts was between expert 1 and expert 2, with a mean area deviation of 686 ± 5313 μm^{2} (2 SD), while the biggest deviation was between expert 2 and expert 3, with a mean area deviation of −1473 ± 5606 μm^{2} (2 SD).
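The mean ± 2 SD deviations reported here correspond to Bland–Altman-style limits of agreement between two raters; a minimal sketch of that computation (function name ours):

```python
import numpy as np

def limits_of_agreement(a: np.ndarray, b: np.ndarray):
    """Agreement between two raters' area measurements: mean difference
    (bias) and the half-width of the mean +/- 2 SD band."""
    diff = a - b
    return float(diff.mean()), float(2.0 * diff.std(ddof=1))
```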

**Figure 5**


The minimal volume was 0.239 mm^{3} for the segmentations of expert 2 (Fig. 6B, red line), and the maximal volume was 0.271 mm^{3} for the segmentations of expert 3 (Fig. 6, cyan line). The volume segmented by the algorithm was well within this range (0.249 mm^{3}; Fig. 6, blue line; expert 1, green line). Small deviations in the segmentation results were present at small cavities in the outer nuclear layer (ONL); however, these deviations caused only minimal volume changes.

**Figure 6**


The intraclass correlation coefficient (ICC)^{26} was 0.93 for expert 1, 0.98 for expert 2, and 0.97 for expert 3. In contrast, the ICC for the algorithm was always 1.
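One common form of the ICC, the one-way random-effects ICC(1) on an n-targets × k-repeats matrix, can be sketched as follows; whether this is the exact variant used is an assumption. Identical repeated measurements give an ICC of 1, matching the algorithm's perfect repeatability.

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """One-way random-effects ICC(1) for an (n targets x k repeats) matrix:
    (MSB - MSW) / (MSB + (k - 1) * MSW)."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)               # between targets
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within targets
    return (msb - msw) / (msb + (k - 1) * msw)
```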

**Figure 7**


To make our algorithm scaling invariant, we used other features: in our study, the classification of the segmented objects was based on their form and their minimal distance to the bottom retinal boundary.

A limitation of the k-means algorithm is the number of cluster centers *k*, which has to be defined before calculation. The number of clusters was evaluated by an approach presented by Duda et al.,^{23} which proposes an evaluation of cluster results by repeating the clustering calculation with successively increasing *k*. Other methods can be statistically motivated significance tests, including the F-ratio or the likelihood-ratio test.^{27,28} These tests validate clustering partitions with different numbers of clusters, in which two clustering partitions are assessed against each other for the "significantly better" data clustering. Furthermore, dynamic and nonparametric clustering algorithms that do not fix the number of clusters have been developed, with mechanisms such as the Chinese restaurant process or the hierarchical Dirichlet process.^{29,30} However, the k-means clustering algorithm is a valuable and robust clustering method that has been widely used in medical image pattern analysis as well as in applications in molecular biology.^{31–33}
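As an illustration of such nonparametric mechanisms, a Chinese restaurant process sampler, in which the number of clusters is not fixed in advance, can be sketched as follows (a hypothetical helper, not part of the presented method):

```python
import random

def crp_partition(n: int, alpha: float = 1.0, seed: int = 0) -> list:
    """Sample a partition of n items from the Chinese restaurant process:
    item i joins an existing cluster with probability proportional to its
    size, or opens a new cluster with probability proportional to alpha."""
    rng = random.Random(seed)
    sizes = []       # current cluster sizes
    assignment = []  # cluster index per item
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for c, s in enumerate(sizes):
            acc += s
            if r < acc:           # join existing cluster c
                sizes[c] += 1
                assignment.append(c)
                break
        else:                     # open a new cluster
            sizes.append(1)
            assignment.append(len(sizes) - 1)
    return assignment
```

The number of clusters grows with the data (roughly as α log n) instead of being chosen up front.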

Following the *k*-means clustering, the *k*-NN classification was used to increase the specificity of the results. Compared to other classification methods, the *k*-NN is a robust and not very complex classifier, but an important aim of the project was computational efficiency and robustness of the results for practical use. In line with this, the k-NN algorithm combines simple implementation with fast computation of feature-based classification and, as presented in the validation of the algorithm, pathological cavities were identified with high accuracy and successfully delineated from the surrounding tissue.

Speckle noise reduction^{20} was applied because denoising by a nonlinear complex diffusion filter increases the robustness of a subsequently applied active contour model approach.^{17} Nonetheless, for evaluation purposes, B-scans with a clear differentiation of pathological cavities from the rest of the retinal structures were selected.

A further limitation is the number of cluster centers *k*. A finer adjustment based on clustering of gray-level intensities is hardly feasible: if *k* is set too small, retinal structures cannot be delineated exactly, and if *k* is set too high, an oversegmentation of the structures results.

The volume of 0.249 mm^{3} segmented by the algorithm remains within the minimal volume of 0.239 mm^{3} segmented by expert 2 and the maximal volume of 0.271 mm^{3} segmented by expert 1. The deviation between the volume segmentations of the experts demonstrates the problem of interevaluator variability. Of course, the volume segmented by the algorithm does not represent the exact and true delineation of the cavity, but this is also not the case for the experts' results. In particular, at the borders of large cavities, and in certain pathologies in general, gray-level intensities may not be absolutely zero but only significantly lower than in the surrounding tissue, so that segmenting the pathological tissue largely depends on the experience of the expert. This will always cause differences among experts and cannot be solved by the algorithm. Only histological examination could resolve this question, but this is obviously not possible. The advantage of the segmentation results obtained from the algorithm is that they will always be identical, offering a possibility to remove interevaluator variability from the set of errors.

In contrast to a previous study,^{9} in which only 11 B-scans of a stack of 128 B-scans were analyzed because of the very time-consuming manual procedure, here no B-scans were preselected for automated segmentation, and the entire stack of 19 B-scans was quickly segmented by the automated algorithm. The availability of such algorithms opens a wide range of applications that were not possible until now simply because of time restrictions in clinical routine.

**M. Pilch**, None;

**K. Stieger**, None;

**Y. Wenner**, None;

**M.N. Preising**, None;

**C. Friedburg**, None;

**E. Meyer zu Bexten**, None;

**B. Lorenz**, None

**References**

1. *Bull World Health Organ*. 2004; 82: 844–851.
2. *Lancet*. 2012; 379: 1728–1738.
3. *N Engl J Med*. 2006; 355: 1419–1431.
4. *N Engl J Med*. 2011; 364: 1897–1908.
5. *Expert Opin Investig Drugs*. 2009; 18: 1573–1580.
6. *J Ophthalmol*. 2012; 2012: 483034.
7. *Br J Ophthalmol*. 2012; 96: 1088–1091.
8. *Ophthalmology*. 2012; 119: 1388–1398.
9. *Invest Ophthalmol Vis Sci*. 2011; 52: 1599–1605.
10. *Prog Retin Eye Res*. 2012; 31: 195–212.
11. *J Med Genet*. 2007; 44: 225–232.
12. *Graefes Arch Clin Exp Ophthalmol*. 2006; 244: 36–45.
13. *Doc Ophthalmol*. 2008; 116: 97–109.
14. *Science*. 1991; 254: 1178–1181.
15. *Br J Ophthalmol*. 2010; 95: 171–177.
16. *Med Image Anal*. 2011; 15: 748–759.
17. *IEEE Trans Med Imaging*. 2005; 24: 929–945.
18. *IEEE Trans Biomed Eng*. 2012; 59: 1109–1114.
19. *J Biomed Opt*. 1999; 4: 95–105.
20. *Opt Express*. 2010; 18: 8338–8352.
21. *IEEE Trans Image Process*. 2001; 10: 266–277.
22. *Biomed Opt Express*. 2012; 3: 1478–1491.
23. Duda RO, Hart PE, Stork DG. *Pattern Classification*. 2nd ed. New York: John Wiley and Sons; 2001.
24. *IRE Trans Inf Theor*. 1962; 8: 179–187.
25. *Lancet*. 1986; 1: 307–310.
26. *Statistical Methods for Rates and Proportions*. 2nd ed. New York: Wiley; 1981.
27. *Cluster Analysis*. London: Scientific Control System; 1969.
28. *Multivar Behav Res*. 1970; 5: 329–350.
29. *J Am Stat Assoc*. 2006; 101: 1566–1581.
30. *J Mach Learn Res Proc Track*. 2011; 15: 698–706.
31. *Conf Proc IEEE Eng Med Biol Soc*. 2010: 5601–5604.
32. *J Magn Reson Imaging*. 2012; 36: 99–109.
33. *Conf Proc IEEE Eng Med Biol Soc*. 2010; 2010: 4748–4751.