Abstract
Purpose:
To introduce an automated system that aligns and mosaics fluorescein angiographic images with color fundus images by detecting novel key features for image registration.
Methods:
In this work, an automated system was developed to extract, align, and register retinal images from multiple modalities. The proposed system uses novel low-dimensional step pattern analysis (LoSPA) features, which are rotation invariant and scale insensitive. Unlike prior methods that require vessel detection, LoSPA features can be applied directly to retinal images without detecting the vasculature; instead, LoSPA uses step patterns from corner features derived directly from edge maps. These LoSPA features are extracted from both color fundus images and fluorescein angiogram images. Matches between the two sets of LoSPA feature vectors are found by Euclidean distance, using a k-dimensional (k-d) tree data structure and nearest-neighbor search, with k = 3. Once matched, an affine transformation is estimated and the matched images are overlaid.
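The matching-and-alignment step above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the descriptor arrays and their dimensions are made up for the example, brute-force nearest-neighbor search stands in for the k-d tree search, and the affine transform is estimated from synthetic matched points by least squares.

```python
import numpy as np

# Hypothetical LoSPA descriptors (sizes are illustrative only).
rng = np.random.default_rng(0)
desc_fundus = rng.normal(size=(50, 58))                            # descriptors from the color fundus image
desc_angio = desc_fundus[:20] + 0.01 * rng.normal(size=(20, 58))   # noisy counterparts from the angiogram

# Match each angiogram descriptor to its nearest fundus descriptor by
# Euclidean distance (the paper uses a k-d tree; brute force shown for clarity).
dists = np.linalg.norm(desc_angio[:, None, :] - desc_fundus[None, :, :], axis=2)
matches = dists.argmin(axis=1)

# From matched keypoint coordinates, estimate a 2-D affine transform
# p' = A @ p + t by least squares on synthetic point pairs.
src = rng.uniform(0, 100, size=(20, 2))
A_true = np.array([[0.9, 0.1], [-0.1, 1.1]])
t_true = np.array([5.0, -3.0])
dst = src @ A_true.T + t_true

X = np.hstack([src, np.ones((len(src), 1))])       # homogeneous coordinates
params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # 3x2: rows are [A^T; t]
A_est, t_est = params[:2].T, params[2]
```

With the affine parameters recovered, the angiogram can be warped into the fundus image's coordinate frame and overlaid.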
Results:
The system was tested on anonymized data from a local hospital comprising color fundus images and corresponding fluorescein angiographic images of 120 patients with symptoms of macular edema and staphyloma. Each pair of images was marked with 10 corresponding points, which serve as the ground truth. In this study, two variants of LoSPA were used, comprising two scales (LoSPA-58) and three scales (LoSPA-86). We ran comparative experiments between seven algorithms: SIFT, GDB-ICP, ED-DB-ICP, UR-SIFT-PIIFD, Harris-PIIFD, LoSPA-58, and LoSPA-86. Successful registration is defined as a mean squared error of less than 5 pixels. For patients with mild-to-moderate retinal diseases, the best success rates were achieved by LoSPA-86 (93.33%), LoSPA-58 (90%), and Harris-PIIFD (90%). For patients with severe retinal diseases, the best success rates were achieved by LoSPA-86 (79.17%), LoSPA-58 (66.67%), and Harris-PIIFD (41.67%).
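The success criterion above can be made concrete with a short check. This is an illustrative sketch, assuming the criterion is the mean of squared distances between the 10 hand-marked ground-truth points and their transformed counterparts; the abstract's exact normalization is not specified, so the function name and threshold handling here are assumptions.

```python
import numpy as np

def is_successful(gt_points, mapped_points, threshold=5.0):
    """Assumed success criterion: mean squared error between ground-truth
    points and their registered counterparts below the threshold."""
    gt = np.asarray(gt_points, dtype=float)
    mapped = np.asarray(mapped_points, dtype=float)
    mse = np.mean(np.sum((gt - mapped) ** 2, axis=1))
    return mse < threshold

# Two toy ground-truth points, perturbed by small vs. large residuals.
gt = np.array([[10.0, 20.0], [30.0, 40.0]])
print(is_successful(gt, gt + 0.5))  # small residual: registration counts as successful
print(is_successful(gt, gt + 5.0))  # large residual: registration counts as failed
```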
Conclusions:
An automated system that detects key features from multi-modal retinal images was developed and tested. Experimental results are promising, indicating good potential for the system to mosaic multi-modal images. The system can be used for surgical planning in the ARTEMIS surgery platform.
This abstract was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.