ARVO Annual Meeting Abstract  |   May 2006
Optical Coherence Tomography (OCT) Quality Evaluation Revisited With Common Classification Criteria
Author Affiliations & Notes
  • H. Ishikawa
    UPMC Eye Center, Ophthalmology and Visual Science Research Center, Eye and Ear Institute, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  • G. Wollstein
    UPMC Eye Center, Ophthalmology and Visual Science Research Center, Eye and Ear Institute, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  • R.J. Noecker
    UPMC Eye Center, Ophthalmology and Visual Science Research Center, Eye and Ear Institute, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  • D. Greenfield
    Bascom Palmer Eye Institute, Miami, FL
  • C. Mattox
    New England Eye Center, Tufts–New England Medical Center, Boston, MA
  • R. Varma
    Doheny Eye Institute, University of Southern California, Los Angeles, CA
  • R.A. Bilonick
    UPMC Eye Center, Ophthalmology and Visual Science Research Center, Eye and Ear Institute, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  • J.S. Schuman
    UPMC Eye Center, Ophthalmology and Visual Science Research Center, Eye and Ear Institute, Department of Ophthalmology, University of Pittsburgh School of Medicine, Pittsburgh, PA
  • Footnotes
    Commercial Relationships  H. Ishikawa, None; G. Wollstein, None; R.J. Noecker, None; D. Greenfield, None; C. Mattox, None; R. Varma, None; R.A. Bilonick, None; J.S. Schuman, Carl Zeiss Meditec, Inc., C; Carl Zeiss Meditec, Inc., P.
    Support  NIH Grants R01–EY013178–6 and P30–EY008098, Research to Prevent Blindness, and The Eye and Ear Foundation (Pittsburgh)
Investigative Ophthalmology & Visual Science May 2006, Vol.47, 3363.
Abstract

Purpose: Last year we reported that signal strength (SS), the standard image quality parameter for optical coherence tomography (OCT; StratusOCT, Carl Zeiss Meditec, Inc., Dublin, CA) images, performed as well as assessment by human experts. However, one limitation of that study was that the experts did not share a common set of classification criteria, leading to poor agreement among them. We therefore repeated the study with common criteria in order to evaluate the performance of SS more accurately.

Methods: StratusOCT images of the 3 standard fast scan types (macular, nerve fiber layer, and optic nerve head scans) were obtained. SS was calculated for each image using Stratus 4.0 software. An evaluation program was developed so that OCT experts could view StratusOCT images one at a time on a computer monitor while categorizing them with a subjective 3-level grading (excellent, acceptable, or poor). Four OCT experts independently evaluated the images in randomized order using the common criteria, and the majority-opinion scores were compared with SS. The area under the receiver operating characteristic curve (AROC) was computed to assess how well SS discriminated poor images from acceptable and excellent ones, with the experts' majority opinion as the reference standard.
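
A minimal Python sketch (not the authors' software) of the analysis just described: the four experts' grades are reduced to a majority opinion per image, a binary reference standard (poor versus acceptable/excellent) is derived from it, and scikit-learn's roc_auc_score gives the AROC for SS. All data values and variable names are hypothetical.

```python
from collections import Counter
from sklearn.metrics import roc_auc_score

def majority_grade(grades):
    """Most common grade among the experts for one image.
    (Tie handling among 4 raters is unspecified in the abstract;
    Counter breaks ties by first occurrence.)"""
    return Counter(grades).most_common(1)[0][0]

# Hypothetical data: one SS value per image, and each of the four
# experts' 3-level grades for that image.
signal_strength = [9.1, 4.8, 7.3, 3.2]
expert_grades = [
    ["excellent", "excellent", "acceptable", "excellent"],
    ["poor", "poor", "acceptable", "poor"],
    ["acceptable", "excellent", "acceptable", "acceptable"],
    ["poor", "poor", "poor", "acceptable"],
]

majority = [majority_grade(g) for g in expert_grades]
# Binary reference standard: 1 = acceptable/excellent, 0 = poor.
labels = [0 if m == "poor" else 1 for m in majority]

aroc = roc_auc_score(labels, signal_strength)
print(f"AROC for SS discriminating poor images: {aroc:.3f}")
```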

Results: Ninety subjects (30 each with normal eyes, early glaucoma, and advanced glaucoma) were enrolled, yielding 270 images. Agreement among the experts was better when only two categories were considered (poor versus acceptable/excellent, kappa 0.446) than with three (poor versus acceptable versus excellent, kappa 0.223). Although the difference was not statistically significant, agreement improved with the common criteria compared with no criteria (kappa 0.446 vs. 0.369, p>0.05). SS differed significantly among the three categories (p<0.0001, one-way ANOVA) and showed good discrimination (AROC 0.936). The SS cutoff suggested by the likelihood ratio was 6.21, which achieved the best combination of specificity and sensitivity (87.4% and 87.5%, respectively).
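
As an illustration of two analyses from the Results, the sketch below computes multi-rater agreement with Fleiss' kappa (statsmodels) and selects an SS cutoff from the ROC curve. The abstract used a likelihood-ratio criterion; Youden's J is substituted here as a common criterion that likewise balances sensitivity and specificity. All data values are hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_curve
from statsmodels.stats.inter_rater import fleiss_kappa

# Hypothetical rating table: one row per image, one column per category
# (poor, acceptable, excellent); each entry counts how many of the
# four experts chose that category, so every row sums to 4.
ratings = np.array([
    [0, 1, 3],
    [3, 1, 0],
    [0, 3, 1],
    [4, 0, 0],
])
print(f"Fleiss' kappa, 3 categories: {fleiss_kappa(ratings):.3f}")

# Cutoff selection: sweep the ROC thresholds for SS against the
# binary reference standard (1 = acceptable/excellent, 0 = poor).
labels = np.array([1, 0, 1, 0])
ss = np.array([9.1, 4.8, 7.3, 3.2])
fpr, tpr, thresholds = roc_curve(labels, ss)
best = np.argmax(tpr - fpr)  # Youden's J = sensitivity + specificity - 1
print(f"Suggested SS cutoff: {thresholds[best]:.2f} "
      f"(sensitivity {tpr[best]:.1%}, specificity {1 - fpr[best]:.1%})")
```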

Conclusions: The image quality discrimination performance of SS was comparable to that of human expert assessment. Agreement among experts regarding image quality improved when common classification criteria were used. The optimal SS cutoff for clinical use was confirmed to be 6 or greater.
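
The confirmed cutoff translates directly into a quality gate for incoming scans. A trivial sketch with hypothetical scan records:

```python
# Screen scans by the confirmed clinical cutoff: keep SS >= 6.
# Scan IDs and SS values are hypothetical.
scans = [
    {"id": "macular_01", "ss": 8.2},
    {"id": "nfl_02", "ss": 5.4},
    {"id": "onh_03", "ss": 6.0},
]
usable = [s["id"] for s in scans if s["ss"] >= 6]
print(usable)  # -> ['macular_01', 'onh_03']
```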

Keywords: imaging/image analysis: clinical • imaging methods (CT, FA, ICG, MRI, OCT, RTA, SLO, ultrasound) • image processing 