Comparative Analysis of Repeatability of Manual and Automated Choroidal Thickness Measurements in Nonneovascular Age-Related Macular Degeneration
Author Affiliations & Notes
  • Sieun Lee
    School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
  • Nader Fallah
    Rick Hansen Institute, Vancouver, British Columbia, Canada
  • Farzin Forooghian
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • Ashley Ko
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • Kaivon Pakzad-Vaezi
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • Andrew B. Merkur
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • Andrew W. Kirker
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • David A. Albiani
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • Mei Young
    Department of Ophthalmology, University of British Columbia, Vancouver, British Columbia, Canada
  • Marinko V. Sarunic
    School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
  • Mirza Faisal Beg
    School of Engineering Science, Simon Fraser University, Burnaby, British Columbia, Canada
  • Correspondence: Mirza Faisal Beg, School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, Canada V5A 1S6; mfbeg@sfu.ca
  • Marinko V. Sarunic, School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, Canada V5A 1S6; msarunic@sfu.ca  
  • Farzin Forooghian, Department of Ophthalmology, University of British Columbia, 1081 Burrard Street, Vancouver, BC, Canada V6Z 1Y6; farzin.forooghian@gmail.com
  • Sieun Lee, School of Engineering Science, Simon Fraser University, 8888 University Drive, Burnaby, BC, Canada V5A 1S6; leeau@sfu.ca
Investigative Ophthalmology & Visual Science April 2013, Vol. 54, 2864-2871. https://doi.org/10.1167/iovs.12-11521
Abstract

Purpose: We compared the reproducibility and mutual agreement of subfoveal choroidal thickness measurements by expert raters and an automated algorithm in enhanced depth imaging optical coherence tomography (EDI-OCT) images of eyes with nonneovascular age-related macular degeneration (AMD).

Methods: We recruited 44 patients with nonneovascular AMD, and EDI-OCT images were acquired. Subfoveal choroidal thickness was measured manually by two expert raters and automatically by a graph-cut–based algorithm. Drusen area was measured using the automated software (version 6) of Cirrus SD-OCT. The manual and automated choroidal thickness measurements were compared in terms of reproducibility, mutual agreement, and correlation with drusen area.

Results: The mean subfoveal choroidal thickness was 246 ± 63 μm for the first rater, 214 ± 68 for the second rater, and 209 ± 53 for the automated algorithm. Intraclass correlation coefficients (ICC) and 95% confidence intervals (CI) were 0.96 (CI 0.94–0.98) between the raters, 0.85 (CI 0.77–0.90) between the first rater and the automated algorithm, and 0.84 (CI 0.75–0.89) between the second rater and the automated algorithm. Repeat scan measurement ICCs were 0.91 (CI 0.86–0.94) for the first rater, 0.96 (CI 0.94–0.97) for the second rater, and 0.87 (CI 0.80–0.92) for the automated algorithm. Manual and automated measurements were correlated with drusen area.

Conclusions: The automated algorithm generally yielded smaller choroidal thickness than the raters, with a moderate level of agreement. However, its repeat scan measurement repeatability was comparable to that of the manual measurements. The mean difference between the raters indicated possible biases across raters and rating sessions. The correlation of the automated measurements with drusen area was comparable to that of the manual measurements. Automated subfoveal choroidal thickness measurement has potential use in clinical practice and clinical trials, with the potential to reduce time and labor costs.

Introduction
The potential role of the choroid in the development of nonneovascular age-related macular degeneration (AMD) remains an underexplored aspect of the disease. Studies have shown decreased blood volume and abnormal flow in eyes with nonneovascular AMD compared to normal eyes. 1,2 Berenberg et al. reported a significant association between increased drusen extent and decreased choroidal blood volume and flow in patients with nonneovascular AMD. 3 The critical function of the choroid in providing nutrients to photoreceptors and removing waste products from the RPE suggests that abnormalities affecting this metabolic support may contribute to the development of AMD. 4,5  
Over the past decade, optical coherence tomography (OCT) has become an essential tool in clinical and experimental ophthalmology, providing high-resolution, cross-sectional, and three-dimensional images of the retina in vivo. Imaging the full choroid, however, has been more challenging due to the sharp decrease in signal intensity beyond the RPE, caused by the pigment in the RPE and choroid and by light scattering in the vasculature. Enhanced depth imaging (EDI) OCT 6 improves the visualization of the choroid by positioning the spectral-domain (SD) OCT instrument closer to the eye. This pushes the peak signal location (zero delay) back from the conventional position near the vitreoretinal interface toward the outer scleral border. Because the choroid is placed closer to the zero delay, the imaging sensitivity in that region is enhanced. In SD-OCT, the Fourier transform produces two equivalent mirror images. Conventional instruments display only the image in which the anterior portion of the retina faces up on the screen. In EDI-OCT, the physical displacement of the instrument results in the inverted mirror image being displayed. EDI-OCT has been used to measure the choroid in normal eyes, 7 high myopia, 8 central serous chorioretinopathy, 9,10 and AMD. 11 In the majority of studies, choroidal thickness has been measured manually, although automated segmentation of the choroid-sclera interface and measurement of choroidal thickness have been implemented in three-dimensional 1060 nm OCT, 12 three-dimensional 890 nm OCT, 13 and polarization sensitive OCT. 14  
We present manual and automated measurements of subfoveal choroidal thickness in EDI-OCT scans from patients with nonneovascular AMD. Measurement repeatability and correlation with drusen area were examined. Automated segmentation of the choroidal boundaries used a graph-cut–based algorithm, which has been applied successfully to delineate retinal layers 15,16 and to segment the choroid in normal subjects. 13  
Methods
A total of 88 eyes from 44 patients under the care of the retina service at the University of British Columbia (UBC) were included in the study based on a diagnosis of bilateral nonneovascular AMD and age of 55 years or older. Participants were enrolled between February 2012 and May 2012. We recruited patients with the full spectrum of drusen size, including small drusen, intermediate drusen, large drusen, and drusenoid pigment epithelial detachments. 17,18 Patients with other macular pathology, including neovascular AMD, geographic atrophy, and macular dystrophy, were excluded. Participants with significant ocular media opacity also were excluded. Each patient underwent a full ophthalmic examination, including biomicroscopy, best-corrected visual acuity using Snellen charts, intraocular pressure measurement, and dilated fundus examination. For statistical analysis, Snellen acuity was converted to logMAR equivalents. Informed consent was obtained from all patients, and the study was approved by the Research Ethics Board at UBC and adhered to the tenets of the Declaration of Helsinki. 
Cirrus SD-OCT (software version 6; Carl Zeiss Meditec, Inc., Dublin, CA) was used for acquisition of two or more 3D macular cube scans (200 A-scans per B-scan, 200 B-scans) and 2D EDI raster scans, all centered on the fovea. The wavelength of the Cirrus machine was 840 nm, and the EDI scans used an average of 20 lines with proprietary Selective Pixel Profiling. This is different from Heidelberg EDI-OCT, which averages 100 lines with eye tracking. The macular cube scans were acquired over a region of 6 × 6 mm and selected for a minimum signal strength of 6 out of 10. For the 3D macular cube scans, the scan with the highest signal strength was used for further analysis. For the EDI raster scans, the two scans with the highest signal strength were used. Drusen area and volume data in the macular cube scans were obtained automatically by the Cirrus SD-OCT software. 19 The software first segments the RPE and determines by interpolation a virtual RPE free of deformation, referred to as the RPE floor. The drusen area and volume are computed based on the distance between the real RPE and the virtual RPE at each measurement data point. The algorithm ignores RPE deformation below a given threshold to reduce noise. 
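As a rough illustration of this drusen metric, the computation can be sketched as follows. This is a minimal sketch, not the Cirrus implementation; the function name, the noise threshold value, and the pixel spacing are assumptions for the example.

```python
import numpy as np

def drusen_area_volume(rpe_depth_um, rpe_floor_depth_um, pixel_size_mm=0.03,
                       noise_threshold_um=20.0):
    """Sketch: drusen area/volume from RPE elevation above a virtual RPE floor.

    rpe_depth_um, rpe_floor_depth_um: 2D arrays (B-scans x A-scans) of axial depth
    in micrometers for the segmented RPE and for the interpolated, deformation-free
    RPE floor. The threshold and pixel spacing are illustrative values only.
    """
    # RPE lift above the floor (drusen raise the RPE anteriorly, i.e., to smaller depth).
    elevation = np.asarray(rpe_floor_depth_um, float) - np.asarray(rpe_depth_um, float)
    elevation[elevation < noise_threshold_um] = 0.0   # ignore small deformations (noise)

    pixel_area_mm2 = pixel_size_mm ** 2
    area_mm2 = np.count_nonzero(elevation) * pixel_area_mm2
    volume_mm3 = np.sum(elevation) * 1e-3 * pixel_area_mm2   # um -> mm for the depth axis
    return area_mm2, volume_mm3
```

With a 200 × 200 grid over 6 × 6 mm, the assumed 0.03 mm spacing corresponds to one measurement point per A-scan.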
For manual measurement of subfoveal choroidal thickness, two independent investigators (FF and MY) measured the distance between Bruch's membrane equivalent and the choroid-sclera interface at a single point below the fovea, using the Cirrus SD-OCT software's built-in caliper function, in three separate sessions. One investigator was a fellowship-trained medical retina specialist, and the other was a biomedical engineer trained in choroidal evaluation by the medical retina specialist. The observers met beforehand to agree on the definition of the choroid-sclera interface as the outermost dark-to-bright boundary. Only the 2D EDI scans were used for the measurement. For each eye, two different EDI scans were measured once each by each investigator, and then the EDI scan with the higher signal strength was measured two more times to assess intrarater variability. In each session the investigators were blinded to each other's measurements and to their own previous measurements of the same scans. Repeated measurements were 2 to 4 weeks apart. 
Automated measurements were made on the same EDI scans that were measured by the investigators. The algorithm ran in two stages. First, the inner limiting membrane (ILM), Bruch's membrane equivalent (BM), and the choroid-sclera boundary were segmented automatically using a three-dimensional graph-cut algorithm. 20 Briefly, in the graph-cut technique, a node was assigned to each voxel in the volume, and arcs were defined between the nodes based on surface smoothness and distance between surfaces. The intensity gradient in the axial direction was used as the cost function, and the minimum s-t cut of the graph yielded strong intensity-contrast edges subject to smoothness constraints. Maxflow software (version 3.01; V. Kolmogorov, available in the public domain at http://www.uclb-elicensing.com/optimisation_software/maxflow_computervision.html) was used to compute the minimum cut. The ILM and the anterior boundary of the inner/outer segment were segmented first because of their strong edge contrast. The latter was smoothed, fitted to a convex hull, and redrawn by piecewise cubic interpolation to serve as a reference layer to segment BM and limit the search region. The convex hull and interpolation were required to handle images with retinal pigment epithelial detachments caused by drusen, which reduce the intensity at Bruch's membrane equivalent. After the BM was found, the choroid-sclera boundary was searched for in the region posterior to BM as a smooth surface with an intensity gradient. The initial choroid-sclera boundary segmentation was smoothed and interpolated similarly to BM, based on the assumption that the boundary varies smoothly and encompasses all choroidal capillaries, which create the dark/bright contrast within the choroid. An example of the final segmentation result is shown in Figure 1. After the segmentation was obtained, the center of the fovea was located as the point on the ILM segmentation with minimum slope, and the vertical distance between BM and the choroid-sclera boundary at that point was measured as the subfoveal choroidal thickness. In some scans the center of the fovea could not be identified reliably, either due to severe deformation in the foveal region caused by the disease or due to failure to center the scan at the fovea during acquisition. In such cases choroidal thickness was measured at the center of the scan for both manual and automated measurements. 
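The final measurement step described above can be illustrated with the following sketch, which locates the foveal center as the minimum-slope point on the ILM and reads off the BM-to-choroid-sclera distance there. This is a simplified illustration under assumed inputs (1D boundary arrays along a B-scan and a known axial pixel size), not the authors' implementation; the fallback to the scan center mirrors the handling described in the text.

```python
import numpy as np

def subfoveal_choroidal_thickness(ilm, bm, cs, axial_um_per_px, fovea_detectable=True):
    """Sketch: subfoveal choroidal thickness from segmented boundaries.

    ilm, bm, cs: 1D arrays of axial pixel positions along a B-scan for the ILM,
    Bruch's membrane equivalent, and the choroid-sclera boundary (assumed inputs).
    """
    if fovea_detectable:
        # Foveal center approximated as the ILM point with minimum slope magnitude.
        slope = np.abs(np.gradient(ilm.astype(float)))
        center = int(np.argmin(slope))
    else:
        # Fall back to the scan center when the fovea cannot be identified reliably.
        center = len(ilm) // 2

    thickness_px = cs[center] - bm[center]   # vertical distance from BM to CS boundary
    return thickness_px * axial_um_per_px    # convert pixels to micrometers
```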
Figure 1
 
Example of an automated subfoveal choroidal thickness measurement. Top shows the original image. Bottom shows the segmented image. The ILM (green), BM (blue), and choroid-sclera (CS) boundary (magenta) were segmented automatically, and the choroidal thickness was measured as the vertical distance between the BM and CS boundary at the red dashed line indicating the foveal pit.
The minimum, maximum, mean, SD, and coefficient of variation (CV) were computed for the manual and automated measurements. Intrarater, interrater, and interscan variability of the measurements was assessed with intraclass correlation coefficients (ICC) and pair-wise t-tests. The choroidal thicknesses measured by the raters and automated algorithm were assessed for correlation with drusen area using linear regression analysis. A generalized linear model was used to find the age-adjusted association of the subfoveal choroidal thickness with drusen area. SPSS (Version 19; SPSS, Inc., Chicago, IL) was used for the statistical analysis. 
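For readers who wish to reproduce these agreement statistics outside SPSS, a textbook two-way random-effects, absolute-agreement, single-measures ICC and a paired t-test can be sketched as follows. This is a generic illustration with hypothetical data, not the study's SPSS analysis.

```python
import numpy as np
from scipy import stats

def icc_2_1(ratings):
    """Two-way random-effects, absolute-agreement, single-measures ICC(2,1).

    ratings: (n_subjects, k_raters) array of paired measurements.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand_mean = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject means
    col_means = ratings.mean(axis=0)   # per-rater means

    ss_rows = k * np.sum((row_means - grand_mean) ** 2)
    ss_cols = n * np.sum((col_means - grand_mean) ** 2)
    ss_total = np.sum((ratings - grand_mean) ** 2)
    ss_error = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))

    return (ms_rows - ms_error) / (
        ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

# Hypothetical paired thickness measurements (um) for two methods on the same eyes.
rater1 = np.array([250.0, 310.0, 198.0, 275.0, 223.0])
auto = np.array([231.0, 280.0, 190.0, 260.0, 205.0])
print("ICC(2,1):", round(icc_2_1(np.column_stack([rater1, auto])), 3))
print("Paired t-test:", stats.ttest_rel(rater1, auto))
```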
Results
In the study group of 44 subjects, 31 (70%) were female and 13 (30%) were male. The average age of the participants was 75.7 ± 8.4 years, ranging from 60 to 91. Of the patients, 40 (91%) were taking Age-Related Eye Disease Study (AREDS) vitamin supplementation. No patient was a current smoker, and eight patients (18%) were previous smokers who had quit between 13 and 49 years earlier. The mean signal strength was 8.45 ± 1.11 for the EDI raster scans and 7.91 ± 0.98 for the macular cube scans. The mean drusen area was 1.44 ± 1.42 mm2, and the mean drusen volume was 0.079 ± 0.136 mm3. The mean visual acuity was 0.39 ± 0.29 logMAR for right eyes and 0.36 ± 0.23 for left eyes. 
Subfoveal Choroidal Thickness Measurement Variability
Intrarater Variability.
Intrarater variability was assessed for each rater using the three repeat measurements made at different time points on the same scan. Intrarater ICCs for the two raters were 0.92 and 0.97. The intrarater ICC for the automated algorithm was 1, since the same image input yields the same measurement. For both manual raters, there was no significant correlation between the intrarater variability and the image signal strength, indicating that the minimum signal strength threshold of 6 was sufficient: beyond this threshold, signal strength did not significantly affect measurement repeatability. The first rater showed a small but statistically significant positive correlation between intrarater variability and average choroidal thickness (r = 0.27, P = 0.01), suggesting that greater choroidal thickness was associated with greater intrarater variability. The intrarater variability of the two raters was not correlated significantly (r = 0.16, P = 0.12), which implies that there were no consistent scan features that contributed to the variability between raters. 
Interrater Variability.
Interrater variability was assessed between the raters and the automated algorithm. The values for the raters were averaged over the three repeated measures to reduce the effect of intrarater variability. Of the 88 eyes, 5 for which the difference between the averaged manual measurement and the automated measurement was greater than 40% were excluded from analysis as unsuccessful. Descriptive statistics, ICC values, and paired sample t-test results are shown in Tables 1 and 2.
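The 40% exclusion rule can be expressed as a simple filter, as in the sketch below. Treating the averaged manual measurement as the denominator of the relative difference is an assumption for illustration.

```python
import numpy as np

def flag_unsuccessful(manual_avg_um, auto_um, max_relative_diff=0.40):
    """Flag eyes where the automated measurement deviates from the averaged manual
    measurement by more than 40% (relative to the manual average). Sketch only."""
    manual_avg_um = np.asarray(manual_avg_um, float)
    auto_um = np.asarray(auto_um, float)
    rel_diff = np.abs(manual_avg_um - auto_um) / manual_avg_um
    return rel_diff > max_relative_diff   # True = exclude as unsuccessful
```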
Table 1. 
 
Descriptive Statistics of the Choroidal Thickness Measured by the First Rater (Rater 1), Second Rater (Rater 2), and the Algorithm (Auto)
          N    Minimum, μm    Maximum, μm    Mean, μm    SD, μm    CV, %
Rater 1   83   145            521            246         63        25
Rater 2   83   99             526            214         68        32
Auto      83   107            447            203         53        26
Table 2. 
 
Intraclass Correlation and Paired Mean Difference for Choroidal Thickness Measurement Between the Raters (Rater 1, Rater 2) and the Algorithm (Auto)
                       ICC     95% CI       Paired Mean Difference, μm    P Value
Rater 1 and rater 2    0.96    0.94–0.98    30.56                         <0.001
Rater 1 and auto       0.85    0.77–0.90    45.51                         <0.001
Rater 2 and auto       0.84    0.75–0.89    15.39                         <0.016
The ICC between the raters was 0.96, which was higher than the ICC between the first rater and the automated algorithm (0.85), and the second rater and the automated algorithm (0.84). 
The paired t-test between the two raters indicated a statistically significant mean difference (mean difference 31 μm, P < 0.001), such that the first rater measured generally larger choroidal thickness than the second rater. The pair-wise difference between the raters was not correlated significantly with the choroidal thickness, drusen area, or drusen volume. This implies that “shadowing” of the underlying choroid by drusen did not contribute significantly to variability. Correlation between the two raters was 0.93 (P < 0.001). 
The paired t-test between the raters and the automated algorithm indicated statistically significant mean differences (mean difference with the first rater 46 μm, mean difference with the second rater 15 μm; P < 0.001 for both), such that the automated algorithm measured generally smaller choroidal thickness than both raters, but closer to the second rater. The mean difference between the second rater and the automated algorithm was less than that between the second and first raters. The pair-wise difference between the automated algorithm and the raters was correlated with the thickness (r = 0.70, P < 0.001), which implies that a larger choroidal thickness measurement by the manual raters has a higher chance of yielding a greater discrepancy with the automated measurement. The difference was not correlated significantly with drusen area or volume. The correlation of the automated algorithm was 0.77 (P < 0.001) with the first rater and 0.76 (P < 0.001) with the second rater. 
Bland-Altman plots of the agreement between the raters, and between the manual (average of the two raters) and the automated measurements, are presented in Figure 2.
Figure 2
 
Bland-Altman plots showing the interrater variability of choroidal thickness measurements between the two manual raters (left), and between the manual raters (averaged) and the automated algorithm. The dotted lines indicate the upper and lower 95% confidence interval limits (N = 83).
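A Bland-Altman plot of the kind shown in Figure 2 can be generated with the sketch below, which uses the conventional bias ± 1.96 SD limits of agreement. The data and labels are hypothetical; this is not the figure's source code.

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(m1, m2, label1="Manual (average)", label2="Automated"):
    """Bland-Altman agreement plot for two paired measurement methods (sketch)."""
    m1, m2 = np.asarray(m1, float), np.asarray(m2, float)
    mean = (m1 + m2) / 2.0
    diff = m1 - m2
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)          # half-width of the 95% limits of agreement

    plt.scatter(mean, diff, s=15)
    plt.axhline(bias, color="k")           # mean difference (bias)
    for limit in (bias - loa, bias + loa): # upper and lower limits of agreement
        plt.axhline(limit, color="k", linestyle=":")
    plt.xlabel(f"Mean of {label1} and {label2}, μm")
    plt.ylabel(f"{label1} − {label2}, μm")
    plt.show()

# Hypothetical paired thickness measurements (μm).
bland_altman([250, 310, 198, 275, 223], [231, 280, 190, 260, 205])
```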
Interscan Variability.
Interscan variability was assessed between measurements on two repeat scans of the same eye. Of the 88 eyes, 6 were excluded from analysis: 5 with largely different repeat scans and 1 in which the optic nerve head was imaged instead of the fovea. The results are tabulated in Table 3. The first rater's interscan ICC was 0.91, the second rater's interscan ICC was 0.96, and the automated algorithm's interscan ICC was 0.87. 
Table 3. 
 
Intraclass Correlation and Paired Mean Difference for Choroidal Thickness Measurement Between the Repeat Scans by the Raters (Rater 1, Rater 2) and the Algorithm (Auto)
                              ICC     95% CI       Paired Mean Difference, μm    P Value
Rater 1, scan A and scan B    0.91    0.86–0.94    −10.28                        0.049
Rater 2, scan A and scan B    0.96    0.94–0.97    −10.98                        0.002
Auto, scan A and scan B       0.87    0.80–0.92    4.76                          0.249
However, in the paired samples t-test, the mean difference between the first and second scan measurements was significant for the raters, but not for the algorithm. A possible explanation is a session-dependent bias in the human raters' measurements. The raters measured the first set of scans first and, after two weeks or more, measured the second set of scans. Since the scans were acquired within a minute of each other, and there was no cause for a consistent and recurring systematic difference between the first and second scans, the significant mean difference can be attributed to the raters tending to measure larger choroidal thickness in the second session. Since the automated algorithm has no such intrarater variability, its measurement difference between the scans was not session-dependent, and the mean difference between the first and second scans was not significant. 
The interscan variability of the raters was not correlated significantly with the interscan variability of the automated algorithm, indicating that there was no particular set of scans on which both the raters and the algorithm had small or large interscan variability. The interscan variability was not correlated with choroidal thickness, drusen area, or drusen volume for any method. 
Correlation Between Choroidal Thickness and Drusen Area
In multivariable analysis including age, drusen area was correlated significantly and inversely with choroidal thickness for both raters and the algorithm (Table 4); age was not a statistically significant predictor of drusen area. 
Table 4. 
 
Results of Multivariable Regression Analysis Between Drusen Area and Choroid Thickness Measured by the Rater (Rater 1 and Rater 2) and the Automated Algorithm (Auto)
Outcome        Predictor                      Beta (SE)          P Value    Model R²
Drusen area    Age                            −0.037 (0.042)     0.38       0.12
               Choroid thickness, rater 1     −0.006 (0.003)     0.05
Drusen area    Age                            −0.031 (0.041)     0.45       0.16
               Choroid thickness, rater 2     −0.007 (0.003)     0.02
Drusen area    Age                            −0.036 (0.018)     0.84       0.26
               Choroid thickness, auto        −0.005 (0.002)     0.02
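The age-adjusted association summarized in Table 4 corresponds to regressing drusen area on age and choroidal thickness. A minimal sketch of such a model, using hypothetical data and column names (the study itself used a generalized linear model in SPSS), is shown below.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data frame; the values and column names are assumptions for the example.
df = pd.DataFrame({
    "drusen_area_mm2":  [1.2, 0.4, 2.8, 1.9, 0.7, 3.1],
    "age_years":        [68, 74, 81, 77, 70, 85],
    "choroid_thick_um": [280, 240, 180, 205, 260, 150],
})

# Drusen area regressed on age and choroidal thickness (age-adjusted association).
model = smf.ols("drusen_area_mm2 ~ age_years + choroid_thick_um", data=df).fit()
print(model.params)     # beta coefficients
print(model.bse)        # standard errors
print(model.pvalues)    # P values
print(model.rsquared)   # model R^2
```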
Discussion
We have compared the subfoveal choroidal thickness measured by two expert raters and an automated algorithm on EDI-OCT raster scans of nonneovascular AMD patients. The agreement between the two raters was high, but with a statistically significant mean difference. The agreement between the raters and the automated algorithm was less than that between the raters, but acceptable. 21 The raters and the algorithm showed comparably good repeat scan repeatability, with slightly higher ICC values for the raters; however, the raters displayed a statistically significant mean difference between the first and second scan measurements, which was not present for the automated algorithm. This may be attributed to intrarater, or intersession, variability in manual measurement. 
The raters felt it was difficult to confidently identify the choroid-sclera boundary in many scans. However, the overall result shows high ICCs for intrarater, interrater, and interscan repeatability between the two raters. The automated algorithm had no intrarater variability, and had lower, but fair, interrater agreement and interscan repeatability relative to the raters. A recent article published in IOVS presented a similar comparative analysis of a commercial automated algorithm and manual graders on drusen segmentation. 22 The drusen volume agreement was comparable between the manual graders (ICC = 0.98) and between the average of the graders and the algorithm (ICC = 0.94); however, the drusen area agreement was significantly higher between the manual graders (ICC = 0.99) than between the average of the graders and the algorithm (ICC = 0.64). 
OCT signal strength above the given threshold of 6 did not affect the measurements. Drusen area and volume also were not correlated with measurement repeatability or interrater agreement for the raters or the algorithm. From observation, most drusen in our study did not affect the visibility of the choroid except in a few cases. Even when a druse of substantial size shadowed the choroid, the raters and the algorithm were capable of some level of interpolation from surrounding regions where the choroid-sclera boundary was visible. 
The discrepancy between the two manual raters, and between the manual and automated measurements, was influenced by two factors: weak image intensity at the posterior choroidal boundary and different definitions of the posterior choroidal boundary. A thicker choroid implies a deeper posterior boundary where light penetration may be low, resulting in decreased image intensity and edge clarity. Of the 88 eyes, 6 had choroidal thickness greater than 330 μm, and the automated segmentation was unsuccessful for 4 of them (6 scans in total were categorized as unsuccessful). The value 330 μm was selected to include the cluster of unsuccessful segmentations at large choroidal thickness. Edge clarity also depends on the overall image quality and the peak signal location, which is determined at acquisition of the EDI-OCT scans. When the peak signal location is closer to the anterior choroid and the image intensity fall-off is large, the anterior blood vessels may have significantly higher contrast than the vessels near the posterior edge of the choroid. In these cases the algorithm may falsely detect a boundary inside the choroid, resulting in a smaller thickness measurement. Unusually large drusen also contribute to low edge clarity. Examples are shown in Figure 3.
Figure 3
 
Examples of segmentation by the first rater (red), second rater (yellow), and the algorithm (magenta). The first and second scans show the three measurements close to each other. In the third scan, the posterior choroidal boundary is located deep with low edge strength, and the automated measurement is smaller than the manual measurements. In the fourth scan a large drusen reduces the visibility of the posterior choroidal boundary.
One systematic factor in the smaller automated measurements compared to the manual measurements was how the posterior choroidal boundary was defined by the raters and the algorithm. The algorithm searched for a relatively smooth boundary of strong dark-to-bright contrast, which ideally corresponds to the posterior edge of the dark, large blood vessels at the bottom of the choroid. On the other hand, the raters occasionally chose the outermost (most posterior) edge they could identify as a smooth boundary (Fig. 4). The thickness difference between the innermost dark-to-bright boundary chosen by the algorithm and the outermost dark-to-bright boundary chosen by the raters could be explained physiologically by the presence of the heavily pigmented suprachoroidal layer (lamina fusca). The thickness of the suprachoroidal space is approximately 30 microns, 23 which is close to the mean thickness difference that we observed between the raters and the algorithm. 
Figure 4
 
Example of different choroid-sclera boundary measurements by the first rater (red), second rater (yellow), and the algorithm (magenta).
Interscan repeatability on two consecutive scans of the same fovea was affected by patient motion and the image quality of the scans. Even when the repeat scans image approximately the same location at the fovea, light penetration and the corresponding image intensity in the choroid are low, and the visibility of choroidal structures can be sensitive to even small changes between repeat scans. A pair of repeat scans with a large difference in image quality and a large interscan variability for the raters and the automated algorithm is shown in Figure 5.
Figure 5
 
Repeat scans of the same fovea with different image quality.
We note that the metric used in this study was the choroidal thickness below the fovea, where the choroid tends to be the thickest and, accordingly, the least visible. Also, the choroidal thickness measurement was made at a single point and not averaged over a region. This likely also contributed to the measurement variability and the level of agreement for the raters and the automated algorithm. 
The number and size of drusen are important risk factors for predicting the progression of AMD. 17 We found that choroidal thickness was correlated with drusen area, suggesting that this may be an important parameter to follow in clinical practice and in clinical trials. Manual and automated choroidal thickness measurements were found to correlate with drusen area, suggesting that either would be adequate for clinical use. While manual measurements tend to be less variable, obtaining them can be time-consuming. Automated measurement thus offers the advantage of being more applicable to a clinical setting. 
Our study has presented manual and automated measurements of subfoveal choroidal thickness in nonneovascular AMD patients using EDI-OCT scans. The manual raters agreed with each other better than they agreed with the automated algorithm, which still showed a fair level of agreement with the manual raters (ICC between the raters = 0.96; ICC between the raters and the algorithm = 0.84 and 0.85). All methods performed relatively well in measurement repeatability on repeat scans. The manual measurements appeared to be subject to possible biases across raters and rating sessions. The automated measurement was determined only by the given scan, but showed greater sensitivity to image quality. As such, both manual and automated measurements of subfoveal choroidal thickness in 2D EDI-OCT scans should be used with these considerations in mind. The performance of the automated algorithm, and to some extent of the manual raters, is likely to benefit from 3D or more in-depth imaging modalities, such as polarization sensitive OCT or 1060 nm OCT. 
The correlation of the automated choroid measurements with the drusen area was comparable to that of the manual measurements, indicating clinical relevance of the automated measurements in studying the relationship between the drusen and choroid. The automated measurement showed potential for faster speed and reduced labor cost in clinical practice and clinical trials. 
Acknowledgments
The authors thank Carl Zeiss Meditec, Inc., for use of the Cirrus SD-OCT machine and software. 
Supported by a New Researcher Grant from the Canadian National Institute for the Blind (CNIB), Canadian Institutes of Health Research (CIHR), Natural Sciences and Engineering Research Council of Canada (NSERC), and Michael Smith Foundation for Health Research (MSFHR). 
Disclosure: S. Lee, None; N. Fallah, None; F. Forooghian, None; A. Ko, None; K. Pakzad-Vaezi, None; A.B. Merkur, None; A.W. Kirker, None; D.A. Albiani, None; M. Young, None; M.V. Sarunic, None; M.F. Beg, None 
References
Pournaras CJ Logean E Riva CE Regulation of subfoveal choroidal blood flow in age-related macular degeneration. Invest Ophthalmol Vis Sci . 2006; 47: 1581–1586. [CrossRef] [PubMed]
Grunwald JE Metelitsina TI Dupont JC Ying GS Maguire MG. Reduced foveolar choroidal blood flow in eyes with increasing AMD severity. Invest Ophthalmol Vis Sci . 2005; 46: 1033–1038. [CrossRef] [PubMed]
Berenberg TL Metelitsina TI Madow B The association between drusen extent and foveolar choroidal blood flow in age-related macular degeneration. Retina . 2012; 32: 25–31. [CrossRef] [PubMed]
Grunwald JE Hariprasad SM DuPont J. Foveolar choroidal blood flow in age-related macular degeneration. Invest Ophthalmol Vis Sci . 1998; 39: 385–390. [PubMed]
Friedman E. A hemodynamic model of the pathogenesis of age-related macular degeneration. Am J Ophthalmol . 1997; 124: 677–682. [CrossRef] [PubMed]
Spaide RF Koizumi H Pozonni MC. Enhanced depth imaging spectral-domain optical coherence tomography. Am J Ophthalmol . 2008; 146: 496–500. [CrossRef] [PubMed]
Margolis R Spaide RF. A pilot study of enhanced depth imaging optical coherence tomography of the choroid in normal eyes. Am J Ophthalmol . 2009; 147: 811–815. [CrossRef] [PubMed]
Fujiwara T Imamura Y Margolis R Slakter JS Spaide RF. Enhanced depth imaging optical coherence tomography of the choroid in highly myopic eyes. Am J Ophthalmol . 2009; 148: 445–450. [CrossRef] [PubMed]
Imamura Y Fujiwara T Margolis R Spaide RF. Enhanced depth imaging optical coherence tomography of the choroid in central serous chorioretinopathy. Retina . 2009; 28: 1469–1473. [CrossRef]
Maruko I Iida T Sugano Y Ojima A Sekiryu T. Subfoveal choroidal thickness in fellow eyes of patients with central serous chorioretinopathy. Retina . 2011; 31: 1603–1608. [CrossRef] [PubMed]
Manjunath V Goren J Fujimoto JG Duker JS. Analysis of choroidal thickness in age-related macular degeneration using spectral-domain optical coherence tomography. Am J Ophthalmol . 2011; 152: 663–668. [CrossRef] [PubMed]
Kajić V Esmaeelpour M Považay B Marshall D Rosin PL Drexler W. Automated choroidal segmentation of 1060 nm OCT in healthy and pathologic eyes using a statistical model. Biomed Opt Express . 2012; 3: 86–103. [CrossRef] [PubMed]
Zhang L Lee K Niemeijer M Mullins RF Sonka M Abràmoff MD. Automated segmentation of the choroid from clinical SD-OCT. Invest Ophthalmol Vis Sci . 2012; 53: 7510–7519. [CrossRef] [PubMed]
Torzicky T Pircher M Zotter S Bonesi M Götzinger E Hitzenberger CK. Automated measurement of choroidal thickness in the human eye by polarization sensitive optical coherence tomography. Opt Express . 2012; 20: 7564–7574. [CrossRef] [PubMed]
Garvin MK Abramoff MD Kardon R Russell SR Xiaodong W Sonka M. Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search. IEEE T Med Imaging . 2008; 27: 1495–1505. [CrossRef]
Garvin MK Abramoff MD Wu X Russell SR Burns TL Sonka M. Automated 3-D intraretinal layer segmentation of macular spectral-domain optical coherence tomography images. IEEE T Med Imaging . 2009; 28: 1436–1447. [CrossRef]
Age-Related Eye Disease Study Research Group. A randomized, placebo-controlled, clinical trial of high-dose supplementation with vitamins C and E, beta carotene, and zinc for age-related macular degeneration and vision loss: AREDS report no. 8. Arch Ophthalmol . 2001; 119: 1417–1436. [CrossRef] [PubMed]
Cukras C Agrón E Klein ML Age-Related Eye Disease Study Research Group. Natural history of drusenoid pigment epithelial detachment in age-related macular degeneration: Age-Related Eye Disease Study Report No. 28. Ophthalmology . 2010; 117: 489–499. [CrossRef] [PubMed]
Gregori G Wang F Rosenfeld PJ Spectral domain optical coherence tomography imaging of drusen in non-exudative age-related macular degeneration. Ophthalmology . 2011; 118: 1373–1379. [PubMed]
Li K Wu X Chen DZ Sonka M. Optimal surface segmentation in volumetric images – a graph-theoretic approach. IEEE T Pattern Anal . 2006; 28: 119–134. [CrossRef]
Landis JR Koch GG. The measurement of observer agreement for categorical data. Biometrics . 1977; 33: 159–174. [CrossRef] [PubMed]
Nittala MG Ruiz-Garcia H Sadda SR. Accuracy and reproducibility of automated drusen segmentation in eyes with non-neovascular age-related macular degeneration. Invest Ophthalmol Vis Sci . 2012; 53: 8319–8324. [CrossRef] [PubMed]
Trattler WB Kaiser PK Friedman NJ. Review of Ophthalmology: Expert Consult – Online and Print. 2nd ed. Philadelphia, PA: W. B. Saunders Co; 2012.