Riadh Fezzani, Philippe Cornic, Erika Boyenga Ödlund, Aurélien Plyer, Guy Le Besnerais, Caroline Kulcsàr; Unsupervised registration of Adaptive Optics retinal images in SLO fundus images. Invest. Ophthalmol. Vis. Sci. 2014;55(13):4814.
Assist the practitioner by providing, in an unsupervised way, the localization of high-resolution retinal images captured with adaptive optics (AO) retinal cameras within classical scanning laser ophthalmoscope (SLO) eye fundus images.
We consider eye fundus images from two types of cameras: AO retinal cameras with a 4°×4° field of view, and an SLO with a 30°×30° field of view. Infrared (IR) imaging is used. Mutual information (MI) maximization is used to register the AO images in the SLO images. MI is computed between gray levels, and also between image edges to take advantage of vessel structures. Since the solution lies at a local maximum, we apply a Laplace-type filter to enhance local maxima and avoid detecting a false position. We moreover propose a selection criterion and a confidence index for the result of our approach.
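The core of this approach, MI maximization between a small template and a larger reference image, can be sketched as follows. This is a minimal illustration only, using gray-level MI and an exhaustive translation search; the function names and brute-force search are our own assumptions, not the authors' implementation, which also uses edge-based MI and a Laplace-type filter to enhance the correct peak.

```python
import numpy as np

def mutual_information(a, b, bins=32):
    """Mutual information (in nats) between the gray levels of two
    same-size image patches, estimated from a joint histogram."""
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = hist / hist.sum()              # joint distribution p(x, y)
    px = pxy.sum(axis=1)                 # marginal p(x)
    py = pxy.sum(axis=0)                 # marginal p(y)
    nz = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

def register_by_mi(template, search, step=1):
    """Exhaustive translation search: return the (row, col) offset in
    `search` that maximizes MI with `template`, plus the MI score."""
    th, tw = template.shape
    sh, sw = search.shape
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(0, sh - th + 1, step):
        for c in range(0, sw - tw + 1, step):
            mi = mutual_information(template, search[r:r + th, c:c + tw])
            if mi > best_score:
                best_score, best_pos = mi, (r, c)
    return best_pos, best_score
```

Because MI depends only on the statistical dependence between intensities, not their absolute values, this criterion tolerates the contrast and illumination differences between AO and SLO imagery better than plain correlation would.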
The Python/C++ multi-threaded implementation of our approach was tested on 154 AO images and 17 corresponding infrared SLO images from pathological and non-pathological subjects. The rate of successful registration is 98.7%. Even in areas with very few vessel structures, including the foveal avascular zone (FAZ), our algorithm was able to find the correct solution. We also note that our implementation is reasonably fast: computing one position on a 764×764-pixel SLO image takes less than 3 seconds.
We have developed, implemented, and tested on many pathological and non-pathological subjects a fast, robust, and unsupervised approach for the localization of adaptive optics retinal images in infrared SLO fundus images. Color, red-free (RF), and autofluorescence (AF) imaging will be considered in future work. Finally, the developed algorithm is well suited to a GPU implementation, so we can expect real-time positioning of the AO output in the SLO fundus image. This would be a valuable tool for helping the practitioner decide where to capture AO images.