Abstract
Purpose:
Despite the recent advances in digital imaging, the slit-lamp biomicroscope is still one of the most frequently used instruments for examining the posterior segment of a patient's eye. Although modern slit-lamps can be equipped with digital video adapters, variable contrast, a narrow field of view, and various kinds of reflections render the storage and visualization of raw fundus video data useless. Therefore, extensive pre-processing is necessary to enable more sophisticated image-processing algorithms such as fundus mosaicking or video-based tracking. This article addresses the issue of extracting the meaningful content from slit-lamp video sequences.
Methods:
For every pixel in a video frame, the algorithm decides whether the pixel displays content or background. Here, background comprises all unusable pixels, such as the non-illuminated black parts of the image or specular reflections. This poses a classic two-category problem that can be expressed using Bayesian probability theory. Given the a priori probabilities of selected pixel features, the likelihood of a pixel belonging to content or background can be calculated using Bayes' rule. In this study, the pixel color and its position in the image were used to calculate the class probabilities. To choose the correct class, the ratio between these two values is compared to a predefined decision threshold. The a priori probabilities were estimated from a set of training images in which the pixels were manually assigned to either class using standard raster-image software. The color probability distributions were determined using three-dimensional color histograms. The algorithm's performance was then evaluated using leave-one-out cross-validation. To maximize the predictive power of the classifier, the validation procedure was also used to find the optimal bin width and classification threshold from receiver-operating-characteristic curves.
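For concreteness, the following minimal Python sketch illustrates the training and classification steps described above using color only; the bin width, the epsilon term, and all function and variable names are illustrative assumptions rather than the exact implementation used in the study, and the position feature is omitted for brevity.

```python
import numpy as np

BIN_WIDTH = 32              # colors per histogram bin (assumed; tuned via cross-validation)
N_BINS = 256 // BIN_WIDTH   # bins per channel for 24-bit RGB input


def build_color_histogram(pixels):
    """Estimate P(color | class) from manually labeled training pixels (N x 3 RGB array)."""
    bins = pixels // BIN_WIDTH                          # quantize each color channel
    hist = np.zeros((N_BINS, N_BINS, N_BINS))
    np.add.at(hist, (bins[:, 0], bins[:, 1], bins[:, 2]), 1)
    return hist / hist.sum()                            # normalize to a probability distribution


def classify_pixel(rgb, hist_content, hist_background,
                   prior_content, prior_background, threshold=1.0):
    """Assign 'content' or 'background' by comparing the posterior ratio to a threshold."""
    r, g, b = (c // BIN_WIDTH for c in rgb)
    p_content = hist_content[r, g, b] * prior_content
    p_background = hist_background[r, g, b] * prior_background
    # Small epsilon avoids division by zero for colors never seen in the training set.
    ratio = p_content / (p_background + 1e-12)
    return "content" if ratio > threshold else "background"
```

In this sketch, `build_color_histogram` would be called once per class on the manually labeled training pixels; a position feature could be incorporated analogously as an additional factor in the class probabilities.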
Results:
In this study, 400 images with 24-bit color depth were used. The validation revealed an accuracy of over 90%. The optimal setting was found at a bin width of 32 colors.
Conclusions:
We present an efficient method for classifying pixels in a slit-lamp video image based on their color and position. The algorithm is particularly well suited for real-time processing because the online classification merely requires a few table lookups per pixel. Moreover, the use of Bayesian decision theory allows for the straightforward integration of additional features.
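To illustrate the table-lookup idea, a possible sketch (under the same assumptions as the earlier code, with the position term again omitted) is to evaluate the Bayesian decision once per color bin offline and then classify whole frames by indexing:

```python
import numpy as np

BIN_WIDTH = 32   # must match the bin width used when building the histograms


def precompute_decision_lut(hist_content, hist_background,
                            prior_content, prior_background, threshold=1.0):
    """Evaluate the Bayesian decision once per color bin, offline."""
    ratio = (hist_content * prior_content) / (hist_background * prior_background + 1e-12)
    return ratio > threshold              # boolean LUT: True means 'content'


def classify_frame(frame, lut):
    """Classify every pixel of an H x W x 3 RGB frame with one table lookup per pixel."""
    bins = frame // BIN_WIDTH
    return lut[bins[..., 0], bins[..., 1], bins[..., 2]]
```

Because the per-bin decisions are precomputed, the online cost per pixel reduces to quantizing the color and indexing the lookup table, which is what makes real-time operation feasible.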