April 2014
Volume 55, Issue 13
ARVO Annual Meeting Abstract  |   April 2014
Use of a GPU-Based k-NN Implementation Can Provide Substantial Speedup for Classification of Retinal Imaging Data
Author Affiliations & Notes
  • Wenxiang Deng
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA
    Department of Biomedical Engineering, The University of Iowa, Iowa City, IA
  • Qiao Hu
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA
  • Mohammad Saleh Miri
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA
  • Bhavna Josephine Antony
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA
  • Michael David Abramoff
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA
    Department of Ophthalmology and Visual Sciences, The University of Iowa, Iowa City, IA
  • Mona K Garvin
    Department of Electrical and Computer Engineering, The University of Iowa, Iowa City, IA
    VA Center for the Prevention and Treatment of Visual Loss, Iowa City VA Health Care System, Iowa City, IA
Investigative Ophthalmology & Visual Science April 2014, Vol.55, 4835. doi:
Wenxiang Deng, Qiao Hu, Mohammad Saleh Miri, Bhavna Josephine Antony, Michael David Abramoff, Mona K Garvin; Use of a GPU-Based k-NN Implementation Can Provide Substantial Speedup for Classification of Retinal Imaging Data. Invest. Ophthalmol. Vis. Sci. 2014;55(13):4835.
Abstract
 
Purpose
 

The k-nearest-neighbor (k-NN) classification approach is commonly used for retinal image segmentation. Because a brute-force implementation of k-NN (on a PC) is often prohibitively time-consuming, approaches such as the Approximate Nearest Neighbor (ANN) implementation by Mount and Arya are commonly used. However, with higher-dimensional data, even the ANN approach becomes prohibitively slow. With the low-cost availability of graphics processing units (GPUs), which can perform many computations in parallel, GPU implementations of k-NN pixel classification offer a promising alternative. Thus, the purpose of this work was to compare the running time of the commonly used ANN implementation with that of a GPU-based k-NN implementation (Garcia et al., CVPRW 2008) on retinal feature sets.
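The brute-force computation that a GPU accelerates is naturally data-parallel: every query-to-training-point distance can be evaluated independently. As a rough illustration only (a vectorized NumPy sketch, not the Garcia et al. CUDA code; the function name is ours), the core step looks like:

```python
import numpy as np

def knn_bruteforce(train, query, k):
    """Brute-force k-NN: for each query point, return the indices of the
    k nearest training points under Euclidean distance. The dense
    distance-matrix computation is the step a GPU parallelizes."""
    # Squared distances via ||q - t||^2 = ||q||^2 - 2 q.t + ||t||^2
    d2 = (np.sum(query ** 2, axis=1)[:, None]
          - 2.0 * query @ train.T
          + np.sum(train ** 2, axis=1)[None, :])
    # Indices of the k smallest distances in each row (unordered)
    return np.argpartition(d2, k, axis=1)[:, :k]
```

In a brute-force GPU kernel, each thread (or thread block) typically owns one query row of this distance matrix, which is why the approach scales well with both training-set size and feature dimension.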

 
Methods
 

We compare the time required to classify 10,000 query points using the ANN implementation and the GPU-based k-NN implementation as a function of the number of training data points and the number of feature dimensions, using three existing retinal feature sets: bifurcation features (Hu et al., SPIE 2013), cup&rim features (Miri et al., SPIE 2013), and OCT surface segmentation features (Antony et al., SPIE 2012). For each dataset, k is fixed at 21; the tested number of feature dimensions (dim) ranges from 5 to the maximum number of features in the dataset (in increments of 10); and the tested number of training data points (n) ranges from 4,000 to the maximum number available (up to 250,000). Tests are performed on a computer with an Intel i7-2600k processor, 16 GB of memory, and an NVIDIA GeForce GTX 570 GPU.
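The benchmark sweep described above can be sketched as a loop over (n, dim) grid cells, timing each implementation on the same data and reporting the ratio of the two times. This is an illustrative harness on synthetic data, not the authors' benchmarking code; `knn_fn` stands in for either implementation, and all names and defaults here are ours:

```python
import time
import numpy as np

def time_knn(knn_fn, n_train, dim, n_query=10_000, k=21, seed=0):
    """Time one k-NN routine on random data for a given training-set
    size and feature dimension. Speedup for a grid cell would be
    time_knn(ann_fn, n, d) / time_knn(gpu_fn, n, d)."""
    rng = np.random.default_rng(seed)
    train = rng.standard_normal((n_train, dim)).astype(np.float32)
    query = rng.standard_normal((n_query, dim)).astype(np.float32)
    t0 = time.perf_counter()
    knn_fn(train, query, k)          # classify all query points
    return time.perf_counter() - t0
```

Using the same random seed for both implementations in a cell keeps the comparison paired, so the reported ratio reflects implementation speed rather than data variation.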

 
Results
 

Fig. 1 illustrates the running time of both approaches as a function of dimension and training data size, and Table 1 provides the speedup results (ratio of ANN time to GPU k-NN time). The GPU-based approach was faster for the bifurcation and cup&rim features (up to ~100x), but slower for the OCT surface features.

 
Conclusions
 

Considerably faster running times can be obtained with a GPU-based k-NN implementation than with the ANN approach. The performance improvements are most noticeable for higher-dimensional data.

 
 
Table 1. Speedup (ratio of ANN time to GPU k-NN time) for different dimensions and training set sizes.

Fig. 1. Running time of ANN and GPU k-NN approach for different training data sizes (colored lines) and dimensions.
Keywords: 549 image processing  