Purpose
Motion artifacts during image acquisition are a major problem in ophthalmic OCT. This study proposes and evaluates the feasibility of a new approach to eye motion correction using texture-based features extracted from slit-lamp-adapted ultrahigh-resolution OCT (SL-UHR-OCT) images.
Methods
A statistical model is used to estimate the misalignment of each A-scan along the x-axis and z-axis separately, without a reference B-scan. For A-scan movement tracking, we first extract texture-based A-scan features and use them in an automatic alignment process to search for the best x-alignment. Specifically, each B-scan is divided into sub-images of 15×15 pixels, from which 16 texture features are extracted and then used to search for the best alignment between two consecutive B-scans using the Geman-McClure (GM) similarity measure. Second, for the z-alignment, the intensity values of equally spaced A-scans in each B-scan are used to search for the best z-shift. Performance is evaluated on two 3D OCT data sets of the optic nerve head (ONH) with isotropic and non-isotropic sampling scan densities (256×1365×256 and 512×1365×128 voxels, respectively), corrupted by simulated eye movement artifacts. Both data sets are obtained from a single healthy subject using two different scan protocols. Simulated artifacts, generated inside and outside the ONH, include only x-axis movements of various amplitudes (2–10 pixels) applied at random locations and in random directions. Algorithm accuracy is evaluated using the GM estimator of the difference between the intensity values of the corrected and the original 3D OCT data. In addition, the performance of the proposed method is compared with that of a recently reported statistical algorithm based on particle filtering.
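As an illustration of the x-alignment step, a minimal sketch of the GM-based shift search is given below (Python/NumPy). The feature layout (one 16-element texture vector per 15×15 sub-image column), the scale parameter sigma, the shift range, and the names geman_mcclure and best_x_shift are illustrative assumptions rather than the authors' implementation; the Geman-McClure measure of a residual r is r^2/(sigma^2 + r^2).

import numpy as np

def geman_mcclure(residual, sigma=1.0):
    # Geman-McClure robust measure: sum of r^2 / (sigma^2 + r^2) over all residuals.
    r2 = residual ** 2
    return np.sum(r2 / (sigma ** 2 + r2))

def best_x_shift(feat_prev, feat_curr, max_shift=10, sigma=1.0):
    # feat_prev, feat_curr: (n_cols, n_features) arrays, one texture-feature
    # vector per sub-image column of two consecutive B-scans (assumed layout).
    # Returns the column shift that minimizes the GM measure over the overlap.
    n = feat_prev.shape[0]
    best_shift, best_cost = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        if s >= 0:
            a, b = feat_prev[s:], feat_curr[:n - s]
        else:
            a, b = feat_prev[:n + s], feat_curr[-s:]
        if a.size == 0:
            continue
        cost = geman_mcclure(a - b, sigma) / a.size  # normalize by overlap size
        if cost < best_cost:
            best_cost, best_shift = cost, s
    return best_shift

In such a scheme, each candidate shift between consecutive B-scans is scored with the GM measure and the minimizer is kept; the same search structure could in principle be reused for the z-alignment, with intensity profiles of equally spaced A-scans in place of the texture features.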
Results
The largest misalignment was obtained for the motion simulated inside the ONH (0.193±0.013 pixels). Our approach and the reported statistical model were quantitatively evaluated for different eye movement amplitudes; examples are shown in Figures 1 and 2. Note that the algorithm-generated artifacts observed with the reported method are reduced or eliminated with our approach.
Conclusions
These preliminary results show that our algorithm is robust and may be appropriate for eye motion correction in 3D SL-UHR-OCT data. Based on both subjective and objective evaluations, our algorithm outperforms the particle filtering based eye motion correction algorithm.
Keywords: 602 motion-3D • 552 imaging methods (CT, FA, ICG, MRI, OCT, RTA, SLO, ultrasound) • 549 image processing