Abstract
Purpose:
Our research focuses on developing a robust software-based algorithm to overcome motion artifacts caused by patients' involuntary eye movements in optical coherence tomography (OCT) and OCT angiography (OCTA) retinal images. Acquiring multiple volumes from the same retinal location, followed by precise registration and averaging, reduces motion artifacts and improves contrast, resulting in better visualization of distinctive features and facilitating the diagnosis of retinal diseases in the clinic.
Methods:
In our 3D registration method, motion artifacts are first detected and eliminated. A B-scan-based registration is then applied to align the spatial coordinates of each volume to those of a reference volume with minimal motion artifacts. Finally, an en face affine registration corrects the rotation-angle difference between each volume and the reference volume.
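As a rough illustration of this three-stage pipeline, the sketch below (not the authors' implementation) assumes volumes stored as NumPy arrays of shape (n_bscans, depth, width); the correlation threshold, phase-correlation shift estimation, and brute-force rotation search are all illustrative assumptions standing in for the unspecified detection, B-scan, and affine registration steps.

```python
# Minimal sketch of the three-stage registration pipeline described above.
# All thresholds, helper names, and algorithm choices are assumptions.
import numpy as np
from scipy import ndimage


def detect_motion_bscans(volume, corr_threshold=0.7):
    """Stage 1 (assumed heuristic): flag B-scans whose correlation with the
    previous B-scan drops sharply, a simple proxy for saccadic motion."""
    flags = np.zeros(volume.shape[0], dtype=bool)
    for i in range(1, volume.shape[0]):
        r = np.corrcoef(volume[i - 1].ravel(), volume[i].ravel())[0, 1]
        flags[i] = r < corr_threshold
    return flags


def phase_correlate_shift(fixed, moving):
    """Estimate the 2D translation that aligns `moving` to `fixed`
    via phase correlation."""
    F, M = np.fft.fft2(fixed), np.fft.fft2(moving)
    cross = F * np.conj(M)
    cross /= np.abs(cross) + 1e-12
    peak = np.unravel_index(np.argmax(np.abs(np.fft.ifft2(cross))), fixed.shape)
    # Wrap shifts larger than half the image size back to negative values.
    return [p if p <= s // 2 else p - s for p, s in zip(peak, fixed.shape)]


def register_volume(moving_vol, reference_vol):
    """Stage 2: B-scan-based registration, aligning each B-scan of
    `moving_vol` to the corresponding B-scan of the reference volume
    (assumes equal B-scan counts after artifact removal)."""
    out = np.empty_like(moving_vol)
    for i in range(moving_vol.shape[0]):
        dz, dx = phase_correlate_shift(reference_vol[i], moving_vol[i])
        out[i] = ndimage.shift(moving_vol[i], (dz, dx), order=1, mode="nearest")
    return out


def enface_rotation_correct(volume, reference_vol):
    """Stage 3: estimate and undo the en face rotation via a brute-force
    search, an assumed stand-in for the affine registration."""
    enface_ref = reference_vol.mean(axis=1)  # collapse depth -> en face image
    enface_mov = volume.mean(axis=1)
    angles = np.arange(-5.0, 5.25, 0.25)
    scores = [np.corrcoef(enface_ref.ravel(),
                          ndimage.rotate(enface_mov, a, reshape=False).ravel())[0, 1]
              for a in angles]
    best = angles[int(np.argmax(scores))]
    # Rotate every en face plane (axes 0 and 2) of the 3D volume.
    return ndimage.rotate(volume, best, axes=(0, 2), reshape=False, order=1), best
```

Phase correlation is used here only because it is a standard, fast translation estimator for B-scan alignment; any rigid 2D registration would fit the same slot in the pipeline.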
Results:
The accuracy and robustness of the proposed method are evaluated on datasets with three different fields of view (FOVs) acquired from four custom-built OCT systems. Figure 1 illustrates a small-FOV OCT dataset (>100 volumes) obtained using a 1060 nm swept-source OCT system. The final averaged result is a motion-free volume with an increased signal-to-noise ratio (79.43% improvement), allowing for better visualization of the retinal capillaries, photoreceptor mosaic, and retinal pigment epithelium (RPE) layers. Figure 2 shows a medium-FOV OCTA dataset (>20 volumes) acquired using an 800 nm spectral-domain OCT system. The final OCTA volume shows a more distinct vasculature network without noticeable motion artifacts in the superficial and deep capillary plexuses.
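For concreteness, the snippet below shows the final averaging step and one common way to quantify an SNR gain; the region masks and the SNR definition (mean signal over background standard deviation) are assumptions, as the abstract does not specify how the reported 79.43% improvement was computed.

```python
# Hedged illustration of volume averaging and SNR-improvement reporting.
# The masks and SNR definition are assumptions, not the authors' metric.
import numpy as np


def snr(volume, signal_mask, background_mask):
    """SNR as mean intensity in a signal region divided by the standard
    deviation of a background (noise-only) region."""
    return volume[signal_mask].mean() / volume[background_mask].std()


def average_and_report(registered_volumes, signal_mask, background_mask):
    """Voxel-wise average of the registered volumes and the relative SNR
    gain over a single volume, in percent."""
    averaged = np.mean(registered_volumes, axis=0)
    snr_single = snr(registered_volumes[0], signal_mask, background_mask)
    snr_avg = snr(averaged, signal_mask, background_mask)
    improvement = 100.0 * (snr_avg - snr_single) / snr_single
    return averaged, improvement
```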
Conclusions:
We have developed a novel 3D volume registration algorithm that can be applied to OCT/OCTA volumes with different spatial resolutions and FOVs, enabling motion artifact removal across different OCT platforms.
This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.