Danilo Andrade De Jesus, Luisa Sanchez Brea, Kate Grieve, Michel Paques, Nicolas Lefaudeux, Theo van Walsum, Stefan Klein; Adaptive Optics SLO motion correction assisted by image similarity. Invest. Ophthalmol. Vis. Sci. 2020;61(7):227.
Imaging artefacts caused by eye motion are often observed within and between consecutive AO-SLO frames. In this work, we propose and compare different approaches to quantifying image similarity and, hence, to assisting image registration.
AO-SLO images acquired with the MAORI (Physical Sciences Inc, USA) of 6 healthy subjects and 7 AMD patients, and with the MERLIN prototype (Imagine Eyes, France) of 4 healthy individuals, were used. A pipeline based on ranking the AO-SLO frames according to their similarity to a reference image, followed by strip-wise registration, was developed. Convolutional neural networks (MobileNet, InceptionV3, ResNet50, VGG16, VGG19, and DenseNet121) pre-trained on the ImageNet dataset were used to extract features from the AO-SLO images. Additionally, the values of two histogram-based methods (1–using the image, and 2–combining it with two filtered images) were considered. Eight distance metrics (Correlation, Chi-squared, Intersection, Hellinger, Euclidean, Manhattan, Chebyshev, and KNN) were used. The sum of four normalized image quality metrics (Variance, Contrast, Entropy, and Kurtosis), computed on the first 20 ranked, registered, and averaged images, was calculated to estimate the performance of each combination.
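The ranking step above can be sketched as follows. This is a minimal, hypothetical illustration using only the simpler histogram-based descriptor and the chi-squared distance; the study additionally used CNN feature vectors, filtered-image histograms, and seven other distance metrics, all of which are omitted here. Function names and parameters (`bins`, `n_best`) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def histogram_features(image, bins=64):
    """Normalized grey-level histogram used as a simple frame descriptor.
    Assumes pixel intensities are scaled to [0, 1]."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float)
    return hist / hist.sum()

def chi_squared(h1, h2, eps=1e-10):
    """Chi-squared distance between two histograms (smaller = more similar)."""
    return 0.5 * np.sum((h1 - h2) ** 2 / (h1 + h2 + eps))

def rank_frames(frames, reference, n_best=20, bins=64):
    """Rank AO-SLO frames by histogram similarity to the reference frame
    and return the indices of the n_best most similar frames, which would
    then be passed to strip-wise registration and averaging."""
    ref = histogram_features(reference, bins)
    dists = [chi_squared(histogram_features(f, bins), ref) for f in frames]
    return np.argsort(dists)[:n_best]
```

Selecting only the most similar frames before registration is what reduces the offset the registration must correct; a CNN-based descriptor slots into the same pipeline by replacing `histogram_features` with a forward pass through a pre-trained network.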
The motion correction and the quality of the averaged images improved when a similarity measure was incorporated before registration. Visual inspection of the averaged images shows that the proposed approach resulted in sharper edges and a reduced offset between images compared with motion correction without image similarity. A statistically significant difference in image quality was observed between the feature extraction methods (p<0.001) but not between the distance metrics (p=0.94). The VGG19 network, followed by VGG16, led to the highest image quality.
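The performance score used to compare combinations is the sum of four normalized quality metrics computed on the averaged image. A sketch of plausible per-image definitions is below; the abstract does not give exact formulas, so the Michelson contrast and the bin count are assumptions, and the cross-combination normalization applied before summing is omitted.

```python
import numpy as np

def image_quality(image):
    """Compute the four quality metrics (variance, contrast, entropy,
    kurtosis) for an averaged AO-SLO image with intensities in [0, 1].
    In the study, each metric would be normalized across all pipeline
    combinations before the four values are summed into a single score."""
    img = np.asarray(image, dtype=float)
    variance = img.var()
    # Michelson contrast (assumed definition).
    contrast = (img.max() - img.min()) / (img.max() + img.min() + 1e-10)
    # Shannon entropy of the grey-level histogram, in bits.
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = -np.sum(p * np.log2(p))
    # Standardized fourth moment of the intensity distribution.
    mu, sigma = img.mean(), img.std()
    kurtosis = np.mean(((img - mu) / (sigma + 1e-10)) ** 4)
    return {"variance": variance, "contrast": contrast,
            "entropy": entropy, "kurtosis": kurtosis}
```

Summing complementary metrics like these rewards averages that are both sharp (high variance and contrast) and information-rich (high entropy), rather than optimizing a single property.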
We show that standard pre-trained networks can be used to assist motion correction, reducing the complexity required for registration and, hence, improving the quality of the registered and averaged AO-SLO images.
This is a 2020 ARVO Annual Meeting abstract.
Schematic of the pipeline developed in this work.
Normalized image quality for all feature extraction methods and distance metrics (left), averaged image quality for each feature extraction method over all distance metrics (center), and averaged image quality for each distance metric over all feature extraction methods (right). The black line denotes the standard deviation.