ARVO Annual Meeting Abstract  |   March 2012
A Method For Content Extraction From Retinal Slit-lamp Video Sequences
Author Affiliations & Notes
  • Tobias Rudolph
    ARTORG Center Ophthalmic Technologies,
    Department of Ophthalmology,
    University of Bern, Bern, Switzerland
  • Marcel Menke
    Department of Ophthalmology,
    University of Bern, Bern, Switzerland
  • Sebastian Wolf
    Department of Ophthalmology,
    University of Bern, Bern, Switzerland
  • Jens H. Kowal
    ARTORG Center Ophthalmic Technologies,
    Department of Ophthalmology,
    University of Bern, Bern, Switzerland
  • Footnotes
    Commercial Relationships  Tobias Rudolph, None; Marcel Menke, None; Sebastian Wolf, None; Jens H. Kowal, None
    Support  Haag-Streit Foundation
Tobias Rudolph, Marcel Menke, Sebastian Wolf, Jens H. Kowal; A Method For Content Extraction From Retinal Slit-lamp Video Sequences. Invest. Ophthalmol. Vis. Sci. 2012;53(14):4098.
Abstract

Purpose: Despite recent advances in digital imaging, the slit-lamp biomicroscope is still one of the most frequently used instruments for examining the posterior segment of a patient's eye. Although modern slit-lamps can be equipped with digital video adapters, variable contrast, a narrow field of view, and various kinds of reflections render the storage and visualization of raw fundus video data useless. Extensive pre-processing is therefore necessary to enable more sophisticated image-processing algorithms such as fundus mosaicking or video-based tracking. This work addresses the extraction of the meaningful content from slit-lamp video sequences.

Methods: For every pixel in a video frame, the algorithm decides whether the pixel shows content or background. Here, background comprises all unusable pixels, such as the non-illuminated black parts of the image and specular reflections. This poses a classic two-category problem that can be expressed using Bayesian probability theory. Given the a priori probabilities of selected pixel features, the likelihood of a pixel belonging to content or background can be calculated using Bayes' rule. In this study, the pixel's color and its position in the image were used to calculate the class probabilities. To choose the correct class, the ratio of the two class posteriors is compared to a predefined decision threshold.

The a priori probabilities were estimated from a set of training images, in which the pixels were manually assigned to either class using standard raster-image software. The color probability distributions were determined using three-dimensional color histograms.

The algorithm's performance was then evaluated using leave-one-out cross-validation. To maximize the predictive power of the classifier, the same validation procedure was also used to find the optimal histogram bin width and classification threshold from the receiver-operating-characteristic curves. A sketch of the resulting classifier follows.
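The following is a minimal, illustrative sketch of the classifier described above, not the authors' implementation. The function names, the Laplace smoothing, the treatment of the position feature as a per-position prior map, and the small epsilon guarding against division by zero are all assumptions.

```python
# Illustrative sketch of Bayesian two-class pixel classification with
# 3D color histograms; names and details are assumptions, not the
# authors' code.
import numpy as np

BIN_WIDTH = 32              # color values per histogram bin (see Results)
N_BINS = 256 // BIN_WIDTH   # 8 bins per RGB channel for 24-bit color

def build_color_histogram(pixels):
    """Estimate P(color | class) from labeled training pixels.

    pixels: (N, 3) uint8 RGB values manually assigned to one class.
    Returns a normalized 3D histogram over the quantized color space.
    """
    bins = pixels // BIN_WIDTH                        # quantize each channel
    hist = np.zeros((N_BINS, N_BINS, N_BINS))
    np.add.at(hist, (bins[:, 0], bins[:, 1], bins[:, 2]), 1)
    return (hist + 1.0) / (hist.sum() + hist.size)    # Laplace smoothing

def classify_frame(frame, hist_content, hist_background,
                   prior_content, prior_background, threshold=1.0):
    """Label each pixel of an (H, W, 3) uint8 frame: True = content.

    prior_content / prior_background: (H, W) maps encoding the position
    feature, i.e. how likely each image location is to show either class.
    """
    bins = frame // BIN_WIDTH
    p_color_c = hist_content[bins[..., 0], bins[..., 1], bins[..., 2]]
    p_color_b = hist_background[bins[..., 0], bins[..., 1], bins[..., 2]]
    # Bayes' rule: posterior ratio = likelihood ratio x prior ratio;
    # the pixel is content if the ratio exceeds the decision threshold.
    ratio = (p_color_c * prior_content) / (p_color_b * prior_background + 1e-12)
    return ratio > threshold
```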

Results: In this study, 400 images with 24-bit color depth were used. The validation revealed an accuracy of over 90%. The optimal setting was found at a bin width of 32 color values per channel.

Conclusions: We present an efficient method for classifying the pixels of a slit-lamp video image based on their color and position. The algorithm is particularly well suited for real-time processing, because online classification requires only a few table lookups per pixel; a sketch of this variant follows below. The use of Bayesian decision theory also allows for the straightforward integration of additional features.
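Since the color histograms and the threshold are fixed after training, the per-bin decision can be precomputed once, which is presumably what reduces the online stage to table lookups. Below is a hypothetical sketch of that lookup-table variant; it ignores the position prior for brevity, and all names are assumptions.

```python
# Hypothetical lookup-table variant: precompute the color-only decision
# per histogram bin so that online classification is one lookup per pixel.
# A per-position prior could be folded in via a second (H, W) table.
import numpy as np

def build_decision_lut(hist_content, hist_background, threshold=1.0):
    """Boolean table over all color bins: True where the ratio of the
    class-conditional color probabilities exceeds the threshold."""
    return (hist_content / (hist_background + 1e-12)) > threshold

def classify_fast(frame, lut, bin_width=32):
    """Classify an (H, W, 3) uint8 frame with a single lookup per pixel."""
    bins = frame // bin_width
    return lut[bins[..., 0], bins[..., 1], bins[..., 2]]
```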

Keywords: retina 