Wenxiang Deng, Qiao Hu, Mohammad Saleh Miri, Bhavna Josephine Antony, Michael David Abramoff, Mona K Garvin; Use of a GPU-Based k-NN Implementation Can Provide Substantial Speedup for Classification of Retinal Imaging Data. Invest. Ophthalmol. Vis. Sci. 2014;55(13):4835.
The k-nearest-neighbor (k-NN) classification approach is commonly used for retinal image segmentation. Because a brute-force implementation of k-NN (on a PC) is often prohibitively time-consuming, approaches such as the Approximate Nearest Neighbor (ANN) implementation by Mount and Arya are commonly used. However, with higher-dimensional data, even the ANN approach becomes prohibitively slow. With the low-cost availability of graphics processing units (GPUs), which can perform many computations in parallel, GPU implementations of k-NN pixel classification provide a promising alternative. Thus, the purpose of this work was to compare the running time of the commonly used ANN implementation with that of a GPU-based k-NN implementation (Garcia et al., CVPRW 2008) on retinal feature sets.
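The GPU implementation compared here (Garcia et al.) is a brute-force k-NN in which all query-to-training distances are computed in parallel as a matrix operation. The sketch below illustrates that style of computation in NumPy; it is not the authors' code, and the function name and the use of squared Euclidean distance are assumptions for illustration.

```python
import numpy as np

def knn_brute_force(train, query, k):
    """Brute-force k-NN: compute the full query-to-training distance
    matrix, then take the k smallest distances per query point. This
    matrix-style formulation is what maps well to a GPU, where the
    distance evaluations can run in parallel."""
    # Squared Euclidean distances via ||q - t||^2 = ||q||^2 - 2 q.t + ||t||^2,
    # giving an (n_query x n_train) matrix in one shot.
    d2 = (np.sum(query**2, axis=1)[:, None]
          - 2.0 * query @ train.T
          + np.sum(train**2, axis=1)[None, :])
    # Indices of the k nearest training points for each query point
    # (argpartition avoids a full sort of every row).
    return np.argpartition(d2, k, axis=1)[:, :k]

# Toy usage: 1,000 training points, 10 queries, 5 dimensions, k = 21
# as in the study.
rng = np.random.default_rng(0)
train = rng.standard_normal((1000, 5))
query = rng.standard_normal((10, 5))
neighbors = knn_brute_force(train, query, k=21)
print(neighbors.shape)  # (10, 21)
```

A k-NN classifier would then take a majority vote over the training labels at the returned indices.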
We compare the time required to classify 10,000 query points using the ANN implementation and the GPU-based k-NN implementation as a function of the number of training data points and the number of feature dimensions, using three existing retinal feature sets: bifurcation features (Hu et al., SPIE 2013), cup&rim features (Miri et al., SPIE 2013), and OCT surface segmentation features (Antony et al., SPIE 2012). For each dataset, k is fixed at 21; the tested number of feature dimensions (dim) ranges from 5 to the maximum number of features in the dataset (in increments of 10); and the tested number of training data points (n) ranges from 4,000 to the maximum number available (up to 250,000). Tests are performed on a computer with an Intel i7-2600k processor, 16 GB of memory, and an NVIDIA GeForce GTX 570 GPU.
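The experimental sweep over (n, dim) can be sketched as a simple timing harness; this is a hypothetical CPU/NumPy stand-in for the two implementations actually timed in the study, shown only to make the experimental design concrete (the grid here is smaller, and n_query is reduced from the study's 10,000, to keep the sketch fast).

```python
import time
import numpy as np

def time_knn(n_train, dim, n_query=1000, k=21):
    """Time one brute-force k-NN query pass of n_query points against
    n_train training points in `dim` dimensions. Stand-in for timing
    the ANN and GPU implementations; the study used n_query = 10,000."""
    rng = np.random.default_rng(0)
    train = rng.standard_normal((n_train, dim)).astype(np.float32)
    query = rng.standard_normal((n_query, dim)).astype(np.float32)
    t0 = time.perf_counter()
    # Full distance matrix, then k smallest per query row.
    d2 = (np.sum(query**2, axis=1)[:, None]
          - 2.0 * query @ train.T
          + np.sum(train**2, axis=1)[None, :])
    _ = np.argpartition(d2, k, axis=1)[:, :k]
    return time.perf_counter() - t0

# Sweep training-set size and dimensionality, as in the experiment.
for n in (4000, 16000):
    for dim in (5, 15, 25):
        print(f"n={n:6d} dim={dim:3d} time={time_knn(n, dim):.3f}s")
```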
Fig. 1 illustrates the running time for both approaches as a function of dimension and training data size, and Table 1 provides the speedup (ratio of ANN time to GPU k-NN time) results. The GPU-based approach was faster for the bifurcation and cup&rim features (up to ~100x), but was slower for the OCT surface features.
A GPU-based k-NN implementation can provide considerably faster running times than the ANN approach. The performance improvements are most noticeable for higher-dimensional data.