Error Correction and Quantitative Subanalysis of Optical Coherence Tomography Data Using Computer-Assisted Grading
Author Affiliations
  • Srinivas R. Sadda
  • Sandra Joeres
  • Ziqiang Wu
  • Paul Updike
  • Peggy Romano
  • Allyson T. Collins
  • Alexander C. Walsh
All authors: From the Doheny Image Reading Center, Doheny Eye Institute, Keck School of Medicine of the University of Southern California, Los Angeles, California.
Investigative Ophthalmology & Visual Science February 2007, Vol.48, 839-848. doi:10.1167/iovs.06-0554
Abstract

Purpose. To demonstrate feature subanalysis and error correction of optical coherence tomography (OCT) data by using computer-assisted grading.

Methods. The raw exported StratusOCT (Carl Zeiss Meditec, Inc., Dublin, CA) scan data from 20 eyes of 20 patients were analyzed using custom software (termed OCTOR) designed to allow the user to manually define the retinal borders on each radial line scan. Measurements calculated by the software, including the thickness of the nine standard macular subfields, the foveal center point (FCP), and macular volume, were compared between two graders and against the automated Stratus analysis. The mean and range of differences for each parameter were calculated and assessed with Bland-Altman plots and Pearson correlation coefficients. Additional cases with clinically relevant subretinal findings were selected to demonstrate the capabilities of this system for quantitative feature subanalysis.

Results. Retinal thickness measurements for the various subfields and the FCP showed a mean difference of 1.7 μm (maximum, 7 μm) between OCTOR graders and a mean difference of 2.3 μm (maximum, 8 μm) between the OCTOR and Stratus analysis methods. Volume measurements between the Stratus and OCTOR methods differed by a mean of 0.06 mm3 (relative to a mean macular volume of 6.81 mm3). The differences were not statistically significant, and the thicknesses correlated highly (R² ≥ 0.98 for all parameters).

Conclusions. Manual identification of the inner and outer retinal boundaries on OCT scans can produce retinal thickness measurements consistent with those derived from the automated StratusOCT analysis. Computer-assisted OCT grading may be useful both for correcting thickness measurements in cases with errors of automated retinal boundary detection and for quantitative subanalysis of clinically relevant features, such as subretinal fluid volume or pigment epithelial detachment volume.

Since its original description by Huang et al.,1 optical coherence tomography (OCT) has evolved into a critical diagnostic tool in the management of patients with vitreoretinal diseases.2–13 Retinal thickness measurements provided by OCT analysis software have given ophthalmologists an unprecedented ability to quantify retinal disease and monitor its course. These OCT measurements are not only an integral component of clinical practice but have also become important outcome variables in ophthalmic clinical trials, both of macular edema6,7,10 and of choroidal neovascularization (CNV).8 New antiangiogenic therapies14–21 for CNV, such as pegaptanib,16,21 have demonstrated visual benefits and reductions in retinal thickness measurements despite apparent growth or persistent leakage of CNV lesions on fluorescein angiography. This disparity has led some investigators to recommend OCT as the primary tool for assessing therapeutic response in patients undergoing these treatments.3,13,17
Several investigators have demonstrated relatively high reproducibility of OCT measurements,22–27 which has increased clinician confidence in the use of these measures in the follow-up of patients with macular disease. Artifacts in OCT scans, however, can have a significant impact on the accuracy of these measurements.28–31 Ray et al.31 observed that artifacts affecting retinal thickness were present in 43.2% of cases. Using stringent criteria, we noted errors in the detection of retinal boundaries in 92% of cases, with moderate errors in 19.5% and severe errors in 13.5%, in a series of 200 patients who underwent StratusOCT imaging with the version 4.0 software (Carl Zeiss Meditec, Inc., Dublin, CA).32 Moreover, in patients with CNV, we observed more frequent and more severe OCT retinal boundary detection errors than in patients with other macular diseases.
The best solution to this problem is the development of improved segmentation algorithms for more consistently accurate detection of retinal boundaries, but such algorithms are not yet available. In the interim, strategies that allow a trained human expert to correct OCT errors may provide a suitable mechanism for increasing the precision of retinal thickness measurements for monitoring patients with macular disease, particularly in clinical trials. 
Methods
Data Collection and Study Population
Study cases were selected sequentially from an alphabetical list of patients who underwent StratusOCT imaging of the macula at the Ophthalmic Imaging Unit at the Doheny Eye Institute. Approval for the collection and analysis of OCT images was obtained from the Institutional Review Board of the University of Southern California. The research adhered to the tenets set forth in the Declaration of Helsinki. All study scans were obtained between January 1, 2004, and June 1, 2006, using the “Radial Lines” protocol (set at a transverse resolution of 512 A-scans per B-scan) on a single StratusOCT machine and were analyzed with the version 4.0 software. In this protocol, six scans of 6-mm radial lines (oriented 30° apart) were obtained individually and sequentially by the operator. Although investigators in many published reports on OCT have used the Fast Macular Scan protocol (with a lower transverse resolution of 128 A-scans/B-scan), the high-resolution Radial Lines protocol has been the standard method used in our imaging unit because of the enhanced morphologic detail provided. 
The resultant pseudocolor B-scan images were manually inspected by a certified Doheny Image Reading Center (DIRC) grader (SRS or ZW) until 20 consecutive cases were found that had accurate automated detection by the StratusOCT software of the inner and outer retinal boundaries in all six radial line scans. Accrual of this 20-case cohort required review of 150 consecutive cases from the Imaging Unit log. These selected “accurately detected” scans were frequently the fellow healthy eye of patients with a diseased eye. 
The inner retinal boundary was deemed to be properly positioned when it coincided with the anticipated location of the internal limiting membrane (ILM). The outer retinal boundary was determined to be correctly positioned when it coincided with the anterior aspect of the highly reflective band previously believed to correspond to the RPE-choriocapillaris interface. Additional cases with notable boundary detection errors or clinically relevant subretinal features that were improperly outlined by the automated segmentation software were also collected, for demonstration purposes only. 
Data for each case were exported to disc with the export feature available in the StratusOCT version 4.0 analysis software. 
Computer-Assisted OCT Grading Software
Software (termed OCTOR) was written by DIRC software engineers to facilitate viewing and manual grading of each of the radial line scans by a trained OCT grader. The OCTOR software is not commercially available, but a web-based application (www.driamd.org) is being developed to permit generalized use of this technique. The software, which effectively operates as a painting program and calculator, imports data exported from the StratusOCT machine and allows the grader to use a computer mouse to draw various boundaries in the retinal cross-sectional image, including: the inner retinal surface, the outer retinal boundary, the RPE-choriocapillaris interface (if different from the outer retinal boundary, as in cases with subretinal fluid), and the estimated level of the normal RPE (if different from the RPE-choriocapillaris interface, as in cases of pigment epithelial detachment). The software allows the user to select one boundary line to be drawn at a time, from a list of the previously described interfaces. The OCTOR software also allows the user to zoom and pan across the image, thereby facilitating rapid and accurate drawing of the boundaries, by simply moving the computer mouse pointer from one end of the B-scan to the other. The user is also able to correct mistakes, such as a boundary drawn in the wrong location, rapidly by simply redrawing the line in the correct location. The OCTOR software automatically erases the previously drawn lines in those transverse locations that have been redrawn. After drawing the boundary, the grader scrutinizes the drawn line to make sure there are no discontinuities. An illustration of the appearance of a B-scan on the computer screen after the boundaries are partially drawn (using a computer mouse) by the user is shown in Figure 1 . Through the use of the zoom feature, we have found the simple manual drawing method to be faster and more reliable than the use of anchor points and spline interpolation. 
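The overwrite-on-redraw behavior described above can be sketched as follows. This is an illustrative toy model, not the actual OCTOR implementation: each boundary is stored as one depth value per A-scan column, so redrawing a segment replaces only the values in the columns it crosses, and a completeness check stands in for the grader's scrutiny for discontinuities.

```python
# Illustrative toy model (not the actual OCTOR code): a drawn boundary is
# stored as one depth value per A-scan column, so redrawing a segment
# simply overwrites the earlier values in the columns it crosses.
class BoundaryLine:
    def __init__(self, n_columns):
        # pixel row of the boundary at each of the B-scan's A-scan columns
        self.depth = [None] * n_columns

    def draw(self, start_col, rows):
        """Record mouse-drawn depths for consecutive columns from start_col."""
        for i, row in enumerate(rows):
            self.depth[start_col + i] = row  # replaces any earlier drawing here

    def discontinuities(self):
        """Columns still lacking a boundary (the grader checks for these)."""
        return [c for c, d in enumerate(self.depth) if d is None]

line = BoundaryLine(512)          # 512 A-scans per B-scan, as in the protocol
line.draw(0, [100, 101, 102])     # initial stroke
line.draw(1, [150, 151])          # correction: columns 1 and 2 are overwritten
```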
No smoothing or other processing of the manually drawn line is performed. The grader then proceeds to draw the desired boundaries in each of the six B-scans. 
The software calculates the distance in pixels between the manually drawn inner and outer retinal boundary lines. With the use of the dimensions of the B-scan image, this measurement is converted into micrometers to yield a thickness measurement at each location. The thicknesses at all unsampled locations between the radial lines are then interpolated based on a polar approximation to yield a “thickness map” analogous to the StratusOCT output data. The precise polar interpolation algorithm used by the StratusOCT is proprietary and not available. To validate our interpolation method, we first applied the method to generate a retinal map using the retinal boundaries and retinal thickness values at each axial location (6 B-scans, with 512 A-scans/B-scan) as obtained from the StratusOCT machine. 
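The two steps above can be made concrete with a minimal Python sketch. The axial image height and scan depth below are assumed, illustrative values (not Stratus specifications), and the interpolation shown is simple linear interpolation in angle between the 30°-spaced half-meridians sampled by the six radial B-scans:

```python
# Hypothetical sketch of the thickness computation described above; the
# axial geometry constants are illustrative assumptions, not Stratus specs.
AXIAL_PIXELS = 1024          # assumed B-scan image height, in pixels
AXIAL_DEPTH_UM = 2000.0      # assumed axial scan depth, in micrometers
UM_PER_PIXEL = AXIAL_DEPTH_UM / AXIAL_PIXELS

def thickness_um(inner_px, outer_px):
    """Convert the pixel gap between two drawn boundaries to micrometers."""
    return (outer_px - inner_px) * UM_PER_PIXEL

def interpolate_polar(samples, theta_deg):
    """Linearly interpolate thickness at an arbitrary angle between the two
    nearest sampled half-meridians; the six radial B-scans sample thickness
    along 12 half-meridians spaced 30 degrees apart."""
    step = 30.0
    theta_deg %= 360.0
    i = int(theta_deg // step)
    frac = (theta_deg - i * step) / step
    a, b = samples[i], samples[(i + 1) % 12]
    return a + frac * (b - a)
```

Linear interpolation in angle is only one plausible choice; as noted above, the Stratus algorithm itself is proprietary, and the validation against Stratus-obtained boundaries is what justifies the approximation.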
To allow direct comparison to the StratusOCT output, the OCTOR software also calculates the average retinal thickness in each of the nine Early Treatment Diabetic Retinopathy Study (ETDRS) macular subfields, the mean and SD of the foveal center point (FCP) thickness, and the total macular volume (by multiplying the average thickness by the sampled area). The FCP and the foveal central subfield (FCS) thickness are the two most common StratusOCT output parameters used by clinicians and clinical trial reading centers. 
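The volume arithmetic described above (average thickness multiplied by the sampled area) reduces to a one-line formula; the sketch below assumes the standard 6-mm-diameter map:

```python
import math

# Hedged sketch of the volume calculation described above: total macular
# volume is the mean thickness over the sampled 6-mm-diameter map times the
# map area (thickness in micrometers is converted to millimeters first).
MAP_DIAMETER_MM = 6.0

def macular_volume_mm3(mean_thickness_um):
    area_mm2 = math.pi * (MAP_DIAMETER_MM / 2.0) ** 2
    return (mean_thickness_um / 1000.0) * area_mm2
```

As a sanity check, a mean thickness of about 241 μm over the 6-mm map yields roughly 6.8 mm3, consistent with the mean macular volume reported in the Results.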
Computer-Assisted Grading Protocol
Two independent, masked DIRC graders (SJ and ZW) used the OCTOR software in all cases to draw lines (using the computer mouse) representing the inner and outer retinal boundaries on the six raw, exported, radial line scans. For each grader, the manual-grading process took approximately 5 minutes for each set of six scans. After completion of the grading, each grader used the OCTOR software to calculate the retinal thicknesses for the various standard subfields, FCP, and macular volume, to allow comparison between graders to assess the reproducibility of the OCTOR analysis method. In addition, the retinal thickness values for the two graders were averaged for subsequent comparison with the automated StratusOCT output. Finally, one grader (SJ) regraded and analyzed all the cases several weeks after the initial assessment to provide data regarding the intragrader reproducibility of the method. 
For the primary analysis, the graders drew the line for the inner retinal boundary at the inner edge of the internal limiting membrane. To allow a fair comparison to the automated results, the graders adopted the convention used by the StratusOCT algorithms and drew the outer retinal boundary at the inner edge of the hyperreflective band between the retina and the choroid. The true outer retinal boundary may lie between “layers” of the hyperreflective band that can be easily discerned on high-quality scans or in patients with a healthy RPE, as reported by Costa et al.33 and Pons and Garcia-Valenzuela29 and illustrated by the case in Figure 2. Consequently, to quantify the difference between retinal thickness measurements from the StratusOCT software and measurements (called “true retinal thickness”) that incorporate this recent understanding of the outer retinal boundary, cases were manually regraded using this new interface as the outer retinal boundary. Automated Stratus retinal thickness measurements were compared with the calculated “true” thicknesses for each A-scan.
Statistical Methods
The 10 retinal thickness measurements (nine ETDRS subfields and the FCP) and retinal volume constituted the 11 outcome parameters that were compared among the various analysis methods for each of the 20 cases. First, to validate the OCTOR interpolation algorithm, the 11 outcome parameters obtained automatically by the StratusOCT instrument using the Stratus interpolation algorithm were compared with the 11 outcome parameters obtained by applying the OCTOR interpolation algorithm to the Stratus-obtained retinal boundaries and retinal thicknesses. Second, to assess the reliability and reproducibility of the OCTOR computer-assisted manual boundary drawing process, the values of the 11 outcome parameters from two separate drawings by one grader (intragrader reproducibility), as well as the values obtained by two independent masked graders (intergrader reproducibility), were compared. Finally, to assess the overall performance of the OCTOR method, the average (between the two graders) of the 11 outcome parameters obtained by the OCTOR gradings was compared with the automated StratusOCT output. For the set of 20 cases, the maximum difference, mean difference, median difference, and mean percentage difference were calculated for each parameter. Bland-Altman plots were used to evaluate the level of agreement between the different analysis methods for each parameter.34,35 Pearson product moment correlation coefficients were also calculated.
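The agreement statistics named above follow standard textbook formulas. The following sketch (standard definitions, not the authors' analysis code) computes the per-parameter bias, Bland-Altman 95% limits of agreement, mean and maximum absolute differences, and the Pearson product moment correlation coefficient for two sets of paired measurements:

```python
from statistics import mean, stdev

# Standard formulas for the agreement statistics described above
# (a sketch, not the authors' analysis code).
def agreement_stats(a, b):
    diffs = [x - y for x, y in zip(a, b)]
    bias = mean(diffs)                      # Bland-Altman bias
    loa = 1.96 * stdev(diffs)               # 95% limits of agreement: bias +/- loa
    mean_abs = mean(abs(d) for d in diffs)
    max_abs = max(abs(d) for d in diffs)
    # Pearson product moment correlation coefficient
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = (sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b)) ** 0.5
    r = num / den
    return {"bias": bias, "loa": loa, "mean_abs": mean_abs,
            "max_abs": max_abs, "r": r}
```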
As a final method of comparison, in addition to comparing the 11 outcome parameters (i.e., the clinically reported information), the retinal thickness measurements at the individual points (each of the 512 A-scans) along all six scan lines were averaged to yield a mean “raw” (uninterpolated) retinal thickness for each case. The mean absolute difference in retinal thicknesses between the two graders was calculated for all 20 cases. A similar analysis was also performed to compare OCTOR measurements obtained using the two different definitions of the outer retinal boundary. 
Results
Comparison of OCTOR Interpolation and Stratus Interpolation Algorithms
Table 1 illustrates the level of agreement between the OCTOR polar interpolation method and the Stratus interpolation method when the two methods are applied to the same Stratus-obtained retinal boundaries. Pearson correlation coefficients demonstrated R² > 0.99 for all subfields. 
Because the FCP thickness is a single point and no interpolation is required, there is perfect agreement, as expected. Of the remaining parameters, the FCS showed the greatest discrepancy between the OCTOR and Stratus interpolation methods, with a mean difference of 1.1 μm (0.5%) and a maximum of 2 μm (1.0%). Bland-Altman plots of the mean difference between the two analysis methods for each of the calculated parameters demonstrated excellent agreement. Bland-Altman plots of the FCS thickness and total macular volume are shown in Figure 3.
Intragrader Reproducibility Using Computer-Assisted Manual Grading
Table 2 illustrates the reproducibility achieved by one grader analyzing each of the 20 cases on two separate occasions using the manual OCTOR grading method. The mean difference in the FCP thickness between the gradings was 2.5 μm (1.6%), with a median of 3 μm and a maximum of 5 μm (4.4%). For the FCS thickness, the mean difference was 1.2 μm (0.6%), with a median of 1 μm and a maximum of 4 μm (2.0%). Overall, the average intragrader difference in retinal thickness for all subfields and the FCP was 1.6 μm, and the mean difference in total macular volume was 0.03 mm3 (0.4%) (median, 0.03 mm3; maximum, 0.06 mm3 [0.9%]). Bland-Altman plots of the mean difference between both gradings for each of the calculated parameters demonstrated excellent agreement. Bland-Altman plots of the FCP thickness and total macular volume are shown in Figures 4A and 4B.
Intergrader Reproducibility Using Computer-Assisted Manual Grading
Table 3 illustrates the level of agreement between the two graders using the manual OCTOR grading method. The mean difference between the FCP thickness obtained by the two graders was 3.7 μm (2.4%), with a median of 3 μm and a maximum of 7 μm (5.2%). For the FCS thickness, the mean difference was 1.6 μm (0.8%), with a median of 1.5 μm and a maximum of 4 μm (1.8%). Overall, the average intergrader difference in retinal thickness for all subfields and the FCP was 1.9 μm. The discrepancy was even smaller for total macular volume, a parameter gaining increasing popularity among many clinicians, with a mean difference of 0.02 mm3 (0.4%), a median of 0.02 mm3, and a maximum of 0.09 mm3 (1.3%). Bland-Altman plots of the FCP thickness and total macular volume are shown in Figures 4C and 4D.
Intergrader comparison of the raw (uninterpolated) retinal thickness measurements at every A-scan location for all six B-scans of all 20 cases yielded an average difference of 1.3 μm. 
Comparison between Computer-Assisted Manual Grading and Automated Stratus Measurements
Table 4 shows the thickness measurements obtained by the computer-assisted OCTOR (average of the two graders) and automated Stratus analyses. Pearson correlation coefficients between automated and manual values demonstrated R² ≥ 0.98 for all 11 parameters. The mean difference between the FCP thickness obtained by the Stratus analysis and the computer-assisted human assessment was 3.7 μm (2.4%), with a median of 3.3 μm and a maximum of 7.5 μm (5.9%). For the FCS thickness, the mean difference was 2.0 μm (1.0%), with a median of 2 μm and a maximum of 5 μm (3.0%). Overall, the average difference (OCTOR versus Stratus) in retinal thickness for all subfields and the FCP was 2.3 μm. For the total macular volume, there was a mean difference of 0.06 mm3 (0.9%), a median of 0.06 mm3, and a maximum of 0.13 mm3 (2.0%). 
Bland-Altman plots of the mean difference between the two analysis methods for each of the calculated parameters also demonstrated excellent agreement. Bland-Altman plots of the FCP thickness and total macular volume are shown in Figure 5. It is important to note that these comparisons were based on the graders adopting the convention used by the Stratus analysis for defining the outer retinal boundary (Fig. 2).
Comparison between StratusOCT Retinal Thickness Measurements and Newly Defined “True Retinal Thickness”
After repositioning the outer retinal boundary to the true anatomic location (Fig. 2, blue arrow) defined by Pons and Garcia-Valenzuela29 and Costa et al.33 and repeating the OCTOR computer-assisted analysis, the mean difference (at each A-scan location across all six radial line B-scans in all 20 cases) between StratusOCT retinal thickness measurements and OCTOR “true” retinal thickness measurements was 35.5 μm (range, 27–45 μm), or 15% of the measured StratusOCT retinal thickness. 
Correction of Retinal Boundaries in Cases with Detection Errors
Representative cases of diseases with frequent retinal boundary detection errors are illustrated in Figures 6, 7, and 8.
Full-Thickness Macular Hole
Figure 6 illustrates a StratusOCT retinal boundary detection error relating to full-thickness macular holes (FTMHs) that occurs in nearly every FTMH case in the authors’ (SRS, ACW) clinical practice. Because discontinuities and the absence of retinal tissue are not expected by the automated Stratus algorithms, the putative retinal surface (internal limiting membrane) is interpolated between the two edges of the macular hole (Fig. 6A). The resultant map shows no evidence of the macular hole or of the loss of retinal thickness (Fig. 6B).
Figure 6C illustrates the boundary lines as manually drawn by a human grader using the OCTOR software. In addition to drawing the correct location of the inner retinal surface, the grader also places lines indicating the location of the outer retinal surface and the inner RPE surface, thereby defining the subretinal space at the rim of the macular hole. After this process is repeated for each of the six lines, the software computes the distances between the lines and performs a polar interpolation between the OCT scans to generate thickness maps. In addition to the corrected retinal height (the distance between the retinal surface and the inner surface of the RPE, which includes the retina and the subretinal space), separate maps are generated of the true retinal thickness, demonstrating the cuff of retinal thickening (Fig. 6D, asterisk), and of the subretinal fluid cuff (Fig. 6E).
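The per-A-scan arithmetic is simple once the three boundaries are drawn. A hypothetical sketch (function and variable names are illustrative, not OCTOR's) of the decomposition of retinal height into true retinal thickness and subretinal space:

```python
# Hypothetical per-A-scan decomposition implied by the description above
# (names are illustrative; all depths in micrometers, increasing toward
# the choroid).
def decompose(ilm, outer_retina, inner_rpe):
    retinal_thickness = outer_retina - ilm
    subretinal_space = inner_rpe - outer_retina  # zero where the retina is attached
    retinal_height = inner_rpe - ilm             # what the Stratus analysis reports
    return retinal_thickness, subretinal_space, retinal_height

# At the cuff of a macular hole, the drawn outer retinal boundary lies above
# the RPE, so the subretinal component is nonzero:
thickness, fluid, height = decompose(ilm=50, outer_retina=300, inner_rpe=380)
```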
Central Serous Chorioretinopathy
Figure 7A illustrates a common error in retinal boundary detection by the StratusOCT software in scans obtained from a patient with central serous chorioretinopathy (CSCR) with neurosensory retinal detachments. In this case, the outer retinal boundary does not correspond to the inner surface of the RPE. As a result, the retinal thickness map (Fig. 7B) does not depict retinal thickness, but rather retinal height from the RPE surface. In addition, one of the six radial line scans detected these boundaries inconsistently, resulting in a wedge-shaped artifact in the thickness map. Using the OCTOR software, the grader was able to distinguish and correctly identify the outer retinal and inner RPE surfaces (Fig. 7C), thereby differentiating the retinal thickness (Fig. 7E) and subretinal space (Fig. 7F) from the retinal height (Fig. 7D). In this case of CSCR, it is now apparent that the retina was not swollen, and the volume of subretinal fluid can be quantified. 
Choroidal Neovascularization
Figure 8 illustrates the case of a patient with a fibrovascular pigment epithelial detachment (FVPED) associated with age-related macular degeneration. Despite the large PED, this patient had relatively good visual acuity (20/50). Figure 8A demonstrates a frequently observed artifact of the Stratus analysis. To generate a thickness map, the Stratus software “aligns” the A-scans, thereby effectively flattening areas of RPE elevation. Although the precise Stratus alignment algorithms are proprietary, the alignment process appears to “slide” adjacent A-scans up or down so that the hyperreflective signals corresponding to the RPE-choriocapillaris interface in each A-scan are positioned adjacent to one another in a relatively flat horizontal line. In addition, because the software measures only the distance between the inner retinal surface and the inner RPE surface (the retinal height), the automated thickness map (Fig. 8B) conveys no information about the height or volume of the PED. Furthermore, the retinal edema is not distinguishable from the subretinal fluid. 
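The inferred alignment behavior can be sketched as follows. This is only a guess at the qualitative effect; the actual Stratus algorithm is proprietary and its details are unknown:

```python
# Qualitative sketch of the inferred alignment step (the actual Stratus
# algorithm is proprietary; this only illustrates the behavior described
# above): each A-scan is shifted vertically so its detected RPE signal
# lands on one common row, flattening any RPE elevation such as a PED.
def align_ascans(rpe_rows, target_row=400):
    """Per-A-scan vertical shifts that place the RPE at target_row."""
    return [target_row - r for r in rpe_rows]

# An elevated RPE (smaller row numbers at the PED apex) is slid down until
# the aligned profile is a flat horizontal line; the PED height is lost.
rpe_rows = [400, 380, 350, 380, 400]
shifts = align_ascans(rpe_rows)
aligned = [r + s for r, s in zip(rpe_rows, shifts)]
```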
Although the alignment algorithm of the Stratus analysis can distort the retinal morphology, the clinician can always view the nonaligned raw data, as shown in Figure 8C. Using the OCTOR software, however, a grader can identify the inner retinal surface, outer retinal surface, inner RPE surface, and the estimated “original” location of the RPE (by interpolation between the areas of flat RPE adjacent to the PED; Fig. 8D). Thus, maps of the retinal thickness, subretinal fluid, and PED may also be generated (Figs. 8E–8H).
Discussion
In this report, we demonstrate that manual identification of retinal boundaries on raw OCT B-scan images allows the generation of retinal thickness maps and quantitative data virtually identical to those produced by properly functioning automated segmentation and analysis algorithms on the existing StratusOCT instrument. 
In this study, the maximum difference in retinal thickness between the automated Stratus output and the manual OCTOR method in any subfield in any case was less than 8 μm (Table 4). Among the 11 output parameters, the FCP thickness showed the largest differences in comparisons between two attempts by one grader (mean percentage difference, 1.6%), between two graders (mean percentage difference, 2.4%), and between manual grading and the automated Stratus output (mean percentage difference, 2.4%). In contrast, for all other output parameters (in any type of comparison), the mean percentage difference was always less than 1.2%. The slightly greater discrepancy observed for the FCP likely arises because the FCP is based on the averaging of only six points, whereas the other parameters (particularly the total macular volume) draw on many more A-scans. 
This study also demonstrates that human graders can manually draw retinal boundaries using a computer mouse with good precision and reproducibility. Intra- and intergrader reproducibility appeared to be similar with this method. The small differences between gradings observed in this study are likely tolerable for most clinical or clinical research applications. Finally, although the interpolation algorithms used by the StratusOCT have not been published, the simple polar approximation used in this study appears to mimic the Stratus results closely. 
Clinical OCT technology has evolved dramatically in just the last several years. The StratusOCT system can now render intraretinal and subretinal features in detail that would have seemed impossible only a few years ago. Future spectral domain36–39 and ultra-high-resolution40 OCT technology, with or without adaptive optics imaging, promises to improve imaging resolution even further while decreasing acquisition times. With all these unique capabilities, OCT has quickly risen to the forefront of retinal diagnostic imaging. Despite limited research evidence to support its use in clinical decision-making, it is relied on as an important diagnostic tool by many ophthalmologists, and it is beginning to make its way into organized clinical trials. Many trials of macular edema, for example, require a minimum or maximum retinal thickness for eligibility, such as an FCP thickness greater than 300 μm. Errors in Stratus algorithms may affect eligibility decisions in these patients, and manual correction may be a valuable solution. 
As OCT makes the transition from a research device to a critical clinical tool, care must be taken to ensure that its usage does not exceed its capabilities. For example, it would be quite easy to assume that the quantitative accuracy of retinal thickness measurements from this device should at least be equal to the superb imaging resolution evident in its cross-sectional images. Although most clinicians interpret the quantitative information in relation to the morphologic findings and their observations from the biomicroscopic examination, some clinicians may be tempted to rely on the machine’s automated, quantitative data summaries, particularly when morphologic changes over time are not striking. 
Although this reliance on processed OCT information may be based on the assumption that the machine’s quantitative output is as accurate as its imaging output, mounting evidence29,41 suggests that this may not be the case. Based on recent advances in the understanding of the retinal anatomic correlates of the outer hyperreflective bands present on OCT,29 this study identified a mean difference of 35 μm between the measured and true retinal thicknesses. Although this constitutes only 15% of the measured value used by clinicians, it suggests that better software algorithms and anatomic knowledge will be needed before clinicians can fully rely on the quantitative output from these devices. Indeed, investigators such as Ishikawa et al.42 have worked to develop new automated segmentation algorithms that better detect the anatomically correct location of the RPE. In further support of the need for improved segmentation algorithms, several investigators have recently identified another set of errors in StratusOCT automated quantification that stem from problems with automated boundary detection.31,32 Although the effects of these errors on clinical management have not been extensively studied, they can only be expected to cause greater problems as clinicians increase their reliance on and confidence in OCT data. 
Furthermore, clinically relevant intraretinal and subretinal features that are clearly evident to the human observer are not identified or quantified with current versions of automated segmentation software. For example, although retinal thickness is quantified in patients with macular edema, the volume of retinal cysts is not measured. In addition, an important limitation in eyes with subretinal fluid is that the fluid is often combined with the neurosensory retina by the Stratus software when calculating thickness measurements. For this reason, the traditional Stratus measurements are better termed “retinal height” (from the RPE) rather than retinal thickness. 
Unfortunately, the inability of existing analyses to distinguish subretinal fluid volume from retinal volume results in a loss of potentially clinically useful data. In some patients with CNV, for example, a particular treatment may cause resorption of macular edema, but may have no effect on subretinal fluid or RPE elevation. Thus, this computer-assisted grading of OCT images may allow a grader to select and quantify the most clinically useful features, such as subretinal fluid, PED volume, or cystic spaces, in a given patient. Ongoing quality-assurance programs for masked reanalysis of OCT images at the reading center (DIRC) have demonstrated excellent reproducibility (Romano P, unpublished data, 2006) among graders for identifying the boundaries for these spaces. However, it is important to note that the reproducibility data (as well as the data described in this report) were obtained by certified reading center graders. Future studies quantifying various pathologic features, particularly those employing nonstandardized grading personnel, must demonstrate similar reproducibility of data before the results of those analyses can be considered meaningful. 
Although manual correction of OCT boundary detection errors and the delineation of boundaries of other structures (such as subretinal fluid) described in this report are potentially useful, short-term solutions to the limitations of existing OCT software, it is important to recognize that ongoing advances in OCT hardware are likely to necessitate improvement in automated segmentation algorithms. New spectral (Fourier domain) OCT devices are capable of capturing more than 200 B-scans within a few seconds, but purely manual correction of boundary detection errors for this large number of scans is clearly not practical. It is hoped that recent breakthroughs in image processing and high-speed computing will allow the software advances to keep pace with the rapid development of enhanced imaging hardware. 
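The Bland-Altman agreement analysis used in Figures 3 to 5 (mean difference with 95% limits of agreement, per Bland and Altman, 1986) can be sketched as follows; the paired thickness values here are hypothetical.

```python
# Bland-Altman statistics for comparing two measurement methods:
# report the mean difference (bias) and the 95% limits of agreement,
# bias +/- 1.96 * SD of the pairwise differences.
from statistics import mean, stdev

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit) for paired measurements."""
    diffs = [x - y for x, y in zip(a, b)]
    bias, sd = mean(diffs), stdev(diffs)  # stdev = sample standard deviation
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired retinal thickness readings (um) from two methods:
stratus = [160.5, 202.9, 221.7, 260.9, 236.3]
octor = [157.5, 204.8, 222.6, 263.8, 236.8]
bias, lo, hi = bland_altman(stratus, octor)
```

In the plots, the solid line corresponds to `bias` and the dashed lines to `lo` and `hi`.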
 
Figure 1.
 
Optical coherence tomography (OCT) B-scan, after partial grading using the OCTOR computer-assisted, manual-grading software. OCTOR software allows the grader to draw various retinal boundaries individually, by selecting the desired boundary from a list. Each boundary line is color coded and drawn by dragging the computer mouse from one end of the B-scan to the other. The outer retinal boundary is drawn as a yellow line (yellow arrow), and the partially completed inner retinal boundary is drawn as a white line (white arrow, the edge of the drawn line).
Figure 2.
 
OCT B-scan of a normal retina showing the outer retinal boundary (white line; inset: white arrow) as detected by the StratusOCT version 4.0 software. The outer retinal boundary is detected at the inner surface of the innermost hyperreflective band, whereas the true boundary should be at the inner surface of the second hyperreflective band (blue line; inset: blue arrow).
Table 1.
 
Comparison of OCTOR and Stratus Interpolation Algorithms
OCT Output Parameter Mean Stratus Retinal Thickness-Stratus Interpolation (μm) Mean Stratus Retinal Thickness-OCTOR Interpolation (μm) Absolute Difference, Mean [Median, Maximum] (μm)* Mean Percent Difference, † R 2
Foveal center point thickness 160.5 160.5 0.0 [0, 0] 0.0 1.00
Foveal center subfield (subfield 9) 202.9 204.0 1.1 [1, 2] 0.5 1.00
Outer circle (range: subfields 1–4) 221.7–236.3 221.9–236.4 0.2–0.9 [0–1, 1–2] 0.1–0.4 1.00
Inner circle (range: subfields 5–8) 260.9–268.9 261.6–269.6 0.6–0.8 [1, 1–2] 0.2–0.3 1.00
Volume (mm³) 6.75 6.79 0.03 [0.04, 0.05] 0.5 1.00
Figure 3.
 
Bland-Altman plots demonstrating level of agreement between the StratusOCT and OCTOR (computer-assisted, manual-grading software) interpolation methods. Comparison of results between the two methods is shown for FCS (A) and total macular volume (B). Solid line: mean difference; dashed lines: 95% confidence interval.
Table 2.
 
Intragrader Reproducibility Using Computer-Assisted Manual Grading
OCT Output Parameter Mean 1st Grading OCTOR Retinal Thickness (μm) Mean 2nd Grading OCTOR Retinal Thickness (μm) Absolute Difference, Mean [Median, Maximum] (μm)* Mean Percent Difference, † R 2
Foveal center point thickness 156.5 156.1 2.5 [3, 5] 1.6% 0.99
Foveal center subfield (subfield 9) 204.4 203.6 1.2 [1, 4] 0.6% 1.00
Outer circle (range: subfields 1–4) 222.6–236.7 221.9–236.0 1.3–1.8 [1, 3–5] 0.5–0.8% 0.99–1.00
Inner circle (range: subfields 5–8) 264.0–271.5 263.1–270.2 1.4–1.8 [1–2, 3–5] 0.5–0.7% 0.99–1.00
Volume (mm³) 6.81 6.79 0.03 [0.03, 0.06] 0.4% 1.00
Figure 4.
 
Bland-Altman plots demonstrating reproducibility and level of agreement between the measurements obtained by the same grader on two separate occasions (intragrader, A, B) and by two independent masked graders (intergrader, C, D). Comparison of the results are shown for foveal center point thickness (A, intragrader; C, intergrader) and total macular volume (B, intragrader; D, intergrader). Solid line: mean difference; dashed lines: 95% confidence interval.
Table 3.
 
Intergrader Reproducibility Using Computer-Assisted Manual Grading
OCT Output Parameter Mean Grader 1 OCTOR Retinal Thickness (μm) Mean Grader 2 OCTOR Retinal Thickness (μm) Absolute Difference, Mean [Median, Maximum] (μm)* Mean Percent Difference, † R 2
Foveal center point thickness 158.8 156.5 3.7 [3, 7] 2.4% 0.99
Foveal center subfield (subfield 9) 205.3 204.4 1.6 [1.5, 4] 0.8% 1.00
Outer circle (range: subfields 1–4) 222.6–237.0 222.6–236.7 1.2–2.2 [1–2, 4–7] 0.5–1.0% 0.99
Inner circle (range: subfields 5–8) 263.6–271.2 264.0–271.5 1.5–2.0 [1–2, 4–5] 0.6–0.7% 0.99–1.00
Volume (mm³) 6.82 6.81 0.02 [0.02, 0.09] 0.4% 1.00
Table 4.
 
Comparison between Automated StratusOCT Measurements and Computer-Assisted Manual Grading
OCT Output Parameter Mean Stratus Retinal Thickness-Stratus Interpolation (μm) Mean OCTOR Retinal Thickness (μm)* Absolute Difference Mean, [Median, Maximum] (μm), † Mean Percent Difference, ‡ R 2
Foveal center point thickness 160.5 157.5 3.7 [3.3, 7.5] 2.4% 0.98
Foveal center subfield (subfield 9) 202.9 204.8 2.0 [2.0, 5.0] 1.0% 1.00
Outer circle (range: subfields 1–4) 221.7–236.3 222.6–236.8 1.3–1.9 [1.3–1.5, 2.5–6.5] 0.6–0.8% 0.99
Inner circle (range: subfields 5–8) 260.9–268.9 263.8–271.6 2.5–3.0 [2.5–3.0, 5.0–6.5] 1.0–1.2% 0.99–1.00
Volume (mm³) 6.75 6.81 0.06 [0.06, 0.13] 0.9% 0.99
Figure 5.
 
Bland-Altman plots demonstrating level of agreement between the StratusOCT and OCTOR (computer-assisted manual-grading software) retinal thickness analysis results. Comparison of results between the two methods is shown for foveal center point thickness (A) and total macular volume (B). Solid line: mean difference; dashed lines: 95% confidence interval.
Figure 6.
 
OCT scan from a patient with a full-thickness macular hole. (A) Automated retinal boundary detection by the StratusOCT version 4.0 software interpolates the inner retinal surface across the discontinuity (arrow), and the resultant retinal thickness map (B) does not demonstrate the retinal hole. After manual placement of boundaries for the retina and subretinal space with the computer-assisted software (C), the resultant retinal height map demonstrates the macular hole (D). In this way, the subretinal fluid under the edge of the hole (E) can be measured independently from the parafoveal “cuff” of retinal edema (D).
Figure 7.
 
OCT scan from a patient with central serous chorioretinopathy (CSCR). (A) Automated retinal boundary detection by the StratusOCT version 4.0 software does not separate the subretinal fluid from the neurosensory retina (long arrow) and incorrectly identifies the inner retinal boundary (short arrow). The resultant retinal thickness map (B) actually depicts the retinal “height,” and contains a wedge-shaped artifact due to misidentification of the retinal surface in one scan. After manual placement (C) of boundaries for the retina and subretinal space using computer-assisted software, the artifact is now removed from the retinal height map (D). Separate thickness maps of the retina (E) and the subretinal space (F) demonstrate that the retina is actually not substantially thickened in this patient.
Figure 8.
 
OCT scan of a patient with a fibrovascular pigment epithelial detachment (FVPED) associated with AMD. (A) Generation of a retinal thickness map by the StratusOCT version 4.0 software requires “alignment” of the A-scans, which results in an artificial flattening of the FVPED. This does not accurately reflect the overall retinal surface topography (B) or the raw scan data (C). In addition, it fails to differentiate the subretinal fluid from the neurosensory retina. This differentiation can be accomplished by manual placement of boundaries for the retina, subretinal space, and the original (or baseline) location of the retinal pigment epithelium using computer-assisted software (D). With this analysis, the true topography of the retinal surface is now depicted in the retinal height map (E), along with separate thickness maps for the retina (F), the subretinal space (G), and the FVPED (H).
1. Huang D, Swanson EA, Lin CP, et al. Optical coherence tomography. Science. 1991;254:1178–1181.
2. Broecker EH, Dunbar MT. Optical coherence tomography: its clinical use for the diagnosis, pathogenesis, and management of macular conditions. Optometry. 2005;76:79–101.
3. Hee MR, Baumal CR, Puliafito CA, et al. Optical coherence tomography of age-related macular degeneration and choroidal neovascularization. Ophthalmology. 1996;103:1260–1270.
4. Gallemore RP, Jumper JM, McCuen BW 2nd, Jaffe GJ, Postel EA, Toth CA. Diagnosis of vitreoretinal adhesions in macular disease with optical coherence tomography. Retina. 2000;20:115–120.
5. Coppe AM, Ripandelli G. Optical coherence tomography in the evaluation of vitreoretinal disorders of the macula in highly myopic eyes. Semin Ophthalmol. 2003;18:85–88.
6. Panozzo G, Gusson E, Parolini B, Mercanti A. Role of OCT in the diagnosis and follow up of diabetic macular edema. Semin Ophthalmol. 2003;18:74–81.
7. Rivellese M, George A, Sulkes D, Reichel E, Puliafito C. Optical coherence tomography after laser photocoagulation for clinically significant macular edema. Ophthalmic Surg Lasers. 2000;31:192–197.
8. Rogers AH, Martidis A, Greenberg PB, Puliafito CA. Optical coherence tomography findings following photodynamic therapy of choroidal neovascularization. Am J Ophthalmol. 2002;134:566–576.
9. Liu X, Ling Y, Gao R, Zhao T, Huang J, Zheng X. Optical coherence tomography’s diagnostic value in evaluating surgical impact on idiopathic macular hole. Chin Med J (Engl). 2003;116:444–447.
10. Massin P, Duguid G, Erginay A, Haouchine B, Gaudric A. Optical coherence tomography for evaluating diabetic macular edema before and after vitrectomy. Am J Ophthalmol. 2003;135:169–177.
11. Schuman JS, Hee MR, Arya AV, et al. Optical coherence tomography: a new tool for glaucoma diagnosis. Curr Opin Ophthalmol. 1995;6:89–95.
12. Stalmans P, Spileers W, Dralands L. The use of optical coherence tomography in macular diseases. Bull Soc Belge Ophtalmol. 1999;272:15–30.
13. Voo I, Mavrofrides EC, Puliafito CA. Clinical applications of optical coherence tomography for the diagnosis and management of macular diseases. Ophthalmol Clin North Am. 2004;17:21–31.
14. Heier JS, Antoszyk AN, Pavan PR, et al. Ranibizumab for treatment of neovascular age-related macular degeneration: a phase I/II multicenter, controlled, multidose study. Ophthalmology. 2006;113:642.e1–e4.
15. Kim IK, Husain D, Michaud N, et al. Effect of intravitreal injection of ranibizumab in combination with verteporfin PDT on normal primate retina and choroid. Invest Ophthalmol Vis Sci. 2006;47:357–363.
16. Kourlas H, Schiller DS. Pegaptanib sodium for the treatment of neovascular age-related macular degeneration: a review. Clin Ther. 2006;28:36–44.
17. Michels S, Rosenfeld PJ, Puliafito CA, Marcus EN, Venkatraman AS. Systemic bevacizumab (Avastin) therapy for neovascular age-related macular degeneration: twelve-week results of an uncontrolled open-label clinical study. Ophthalmology. 2005;112:1035–1047.
18. Avery RL, Pieramici DJ, Rabena MD, Castellarin AA, Nasir MA, Giust MJ. Intravitreal bevacizumab (Avastin) for neovascular age-related macular degeneration. Ophthalmology. 2006;113:363–372.
19. Rosenfeld PJ, Fung AE, Puliafito CA. Optical coherence tomography findings after an intravitreal injection of bevacizumab (Avastin) for macular edema from central retinal vein occlusion. Ophthalmic Surg Lasers Imaging. 2005;36:336–339.
20. Spaide RF, Laud K, Fine HF, et al. Intravitreal bevacizumab treatment of choroidal neovascularization secondary to age-related macular degeneration. Retina. 2006;26:383–390.
21. Gragoudas ES, Adamis AP, Cunningham ET Jr, Feinsod M, Guyer DR. Pegaptanib for neovascular age-related macular degeneration. N Engl J Med. 2004;351:2805–2816.
22. Blumenthal EZ, Williams JM, Weinreb RN, Girkin CA, Berry CC, Zangwill LM. Reproducibility of nerve fiber layer thickness measurements by use of optical coherence tomography. Ophthalmology. 2000;107:2278–2282.
23. Browning DJ. Interobserver variability in optical coherence tomography for macular edema. Am J Ophthalmol. 2004;137:1116–1117.
24. Carpineto P, Ciancaglini M, Zuppardi E, Falconio G, Doronzo E, Mastropasqua L. Reliability of nerve fiber layer thickness measurements using optical coherence tomography in normal and glaucomatous eyes. Ophthalmology. 2003;110:190–195.
25. Massin P, Vicaut E, Haouchine B, Erginay A, Paques M, Gaudric A. Reproducibility of retinal mapping using optical coherence tomography. Arch Ophthalmol. 2001;119:1135–1142.
26. Paunescu LA, Schuman JS, Price LL, et al. Reproducibility of nerve fiber thickness, macular thickness, and optic nerve head measurements using StratusOCT. Invest Ophthalmol Vis Sci. 2004;45:1716–1724.
27. Schuman JS, Pedut-Kloizman T, Hertzmark E, et al. Reproducibility of nerve fiber layer thickness measurements using optical coherence tomography. Ophthalmology. 1996;103:1889–1898.
28. Hee MR. Artifacts in optical coherence tomography topographic maps. Am J Ophthalmol. 2005;139:154–155.
29. Pons ME, Garcia-Valenzuela E. Redefining the limit of the outer retina in optical coherence tomography scans. Ophthalmology. 2005;112:1079–1085.
30. Podoleanu A, Charalambous I, Plesea L, Dogariu A, Rosen R. Correction of distortions in optical coherence tomography imaging of the eye. Phys Med Biol. 2004;49:1277–1294.
31. Ray R, Stinnett SS, Jaffe GJ. Evaluation of image artifact produced by optical coherence tomography of retinal pathology. Am J Ophthalmol. 2005;139:18–29.
32. Sadda SR, Wu Z, Walsh AC, et al. Errors in retinal thickness measurements obtained by optical coherence tomography. Ophthalmology. 2006;113:285–293.
33. Costa RA, Calucci D, Skaf M, et al. Optical coherence tomography 3: automatic delineation of the outer neural retinal boundary and its influence on retinal thickness measurements. Invest Ophthalmol Vis Sci. 2004;45:2399–2406.
34. Ludbrook J. Comparing methods of measurements. Clin Exp Pharmacol Physiol. 1997;24:193–203.
35. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet. 1986;1:307–310.
36. Wojtkowski M, Srinivasan V, Fujimoto JG, et al. Three-dimensional retinal imaging with high-speed ultrahigh-resolution optical coherence tomography. Ophthalmology. 2005;112:1734–1746.
37. Wojtkowski M, Bajraszewski T, Gorczynska I, et al. Ophthalmic imaging by spectral optical coherence tomography. Am J Ophthalmol. 2004;138:412–419.
38. de Boer JF, Cense B, Park BH, Pierce MC, Tearney GJ, Bouma BE. Improved signal-to-noise ratio in spectral-domain compared with time-domain optical coherence tomography. Opt Lett. 2003;28:2067–2069.
39. Nassif N, Cense B, Park BH, et al. In vivo human retinal imaging by ultrahigh-speed spectral domain optical coherence tomography. Opt Lett. 2004;29:480–482.
40. Anger EM, Unterhuber A, Hermann B, et al. Ultrahigh resolution optical coherence tomography of the monkey fovea: identification of retinal sublayers by correlation with semithin histology sections. Exp Eye Res. 2004;78:1117–1125.
41. Kaiser PK, Riemann CD, Sears JE, Lewis H. Macular traction detachment and diabetic macular edema associated with posterior hyaloidal traction. Am J Ophthalmol. 2001;131:44–49.
42. Ishikawa H, Stein DM, Wollstein G, Beaton S, Fujimoto JG, Schuman JS. Macular segmentation with optical coherence tomography. Invest Ophthalmol Vis Sci. 2005;46:2012–2017.