ARVO Annual Meeting Abstract  |   April 2014
Assessing Manual versus Automated Segmentation of the Macula using Optical Coherence Tomography
Author Affiliations & Notes
  • Jonathan D Oakley
    Voxeleron LLC, Pleasanton, CA
  • Iñigo Gabilondo
    Center of Neuroimmunology and Department of Neurology, Institute of Biomedical Research August Pi Sunyer (IDIBAPS), Barcelona, Spain
  • Christopher Songster
    UCSF Medical Center, University of California, San Francisco, CA
  • Daniel Russakoff
    Voxeleron LLC, Pleasanton, CA
  • Ari Green
    UCSF Medical Center, University of California, San Francisco, CA
  • Pablo Villoslada
    Center of Neuroimmunology and Department of Neurology, Institute of Biomedical Research August Pi Sunyer (IDIBAPS), Barcelona, Spain
  • Footnotes
    Commercial Relationships Jonathan Oakley, Voxeleron LLC (E), Voxeleron LLC (P); Iñigo Gabilondo, None; Christopher Songster, None; Daniel Russakoff, Voxeleron LLC (E), Voxeleron LLC (P); Ari Green, None; Pablo Villoslada, None
  • Footnotes
    Support None
Investigative Ophthalmology & Visual Science April 2014, Vol.55, 4790. doi:
      Jonathan D Oakley, Iñigo Gabilondo, Christopher Songster, Daniel Russakoff, Ari Green, Pablo Villoslada; Assessing Manual versus Automated Segmentation of the Macula using Optical Coherence Tomography. Invest. Ophthalmol. Vis. Sci. 2014;55(13):4790.

Abstract
 
Purpose
 

To investigate the inter-rater reliability of retinal segmentation between two experts and an automated algorithm.

 
Methods
 

Optical coherence tomography (OCT) imaging and segmentation of the macula enable in vivo quantification of retinal tissues, but to be used clinically the segmentation algorithms must be validated. In this study, 24 Spectralis OCT volumes were automatically segmented using custom software (rater “A1”). The resulting seven layer interfaces were independently presented to two experts (raters “R1” and “R2”), who then manually corrected all layers in all B-scans. Within the ETDRS grid, Bland-Altman analysis gauged overall differences between raters, and intra-class correlation coefficients (ICC) ranked rater agreement. The layers analyzed were the retinal nerve fiber layer (RNFL), the ganglion cell and inner plexiform layer complex (GCIPL), the outer plexiform layer (OPL), the inner and outer nuclear layers (INL, ONL), and the photoreceptor complex from the inner segment to Bruch’s membrane (PR).
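The Bland-Altman comparison described above reduces each rater pairing to a mean difference (bias) and 95% limits of agreement over paired thickness measurements. A minimal sketch is below; the function name and the per-sector thickness arrays are illustrative, not the study's actual data or code:

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two raters.

    a, b: paired measurements (e.g. layer thickness in the same
    ETDRS sectors) from rater A and rater B.
    Returns (bias, lower LOA, upper LOA).
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    diff = a - b
    bias = diff.mean()                # mean difference
    sd = diff.std(ddof=1)             # sample SD of the differences
    half_width = 1.96 * sd            # 95% limits assume ~normal differences
    return bias, bias - half_width, bias + half_width
```

In the study's terms, `a` and `b` would be the sector thicknesses from any two of A1, R1, and R2; the reported LOA correspond to the interval half-width.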

 
Results
 

Mean differences and ranges across raters were all within the inherent axial resolution (~5µ) of the device (Table 1). The limits of agreement (LOA) were widest for the GCIPL, both between the algorithm and the experts (16.15µ, 10.65µ) and between the experts (8.82µ), and narrowest for the OPL (3.3µ, 1.1µ and 3.12µ, respectively), as might be expected from average layer thickness. The ONL showed very good LOA (8.0µ, 1.79µ and 3.12µ), especially given its overall thickness. Excellent inter-rater reproducibility was observed across most layers, although overall agreement was low for the INL and the PR (Table 2). For every layer, the algorithm did not differ significantly from at least one expert.
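The ICC values in Table 2 can be computed from a subjects-by-raters matrix. The abstract does not specify which ICC form was used, so the sketch below assumes ICC(2,1) (two-way random effects, absolute agreement, single measures) purely for illustration:

```python
import numpy as np

def icc_2_1(x):
    """ICC(2,1) from an (n subjects, k raters) matrix via two-way ANOVA."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    subj_means = x.mean(axis=1)
    rater_means = x.mean(axis=0)
    # Partition total sum of squares into subject, rater, and error terms.
    ss_total = ((x - grand) ** 2).sum()
    ss_subj = k * ((subj_means - grand) ** 2).sum()
    ss_rater = n * ((rater_means - grand) ** 2).sum()
    ss_err = ss_total - ss_subj - ss_rater
    msr = ss_subj / (n - 1)           # between-subjects mean square
    msc = ss_rater / (k - 1)          # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

A matrix with perfect rater agreement yields an ICC of 1; a constant offset between raters (a systematic bias) pulls the absolute-agreement ICC below 1 even when rankings are identical.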

 
Conclusions
 

Automated segmentation of the macula can accurately identify seven retinal layer interfaces. Observed differences between rater pairings could reflect either limitations of the algorithm or the readers’ differing interpretations of the boundaries in the data. Given how laborious manual interpretation of OCT data is, however, an automated algorithm is preferable and typically more repeatable. Additionally, the algorithm used is device independent and, in this study, could not be differentiated from the manual results. The ability to reliably compare results across different OCT systems would be of great interest to the research community.

 
 
Table 1 - Mean difference (µ) and 95% confidence intervals.
 
 
Table 2 - ICC across reviewers - Highest ICC score pairing*.
 
 
Keywords: 549 image processing • 550 imaging/image analysis: clinical • 552 imaging methods (CT, FA, ICG, MRI, OCT, RTA, SLO, ultrasound)  