ARVO Annual Meeting Abstract  |   June 2015
Image processing using a saliency map for a 49-channel retinal prosthesis
Author Affiliations & Notes
  • Hiroyuki Kanda
    Applied Visual Science, Osaka Univ Graduate Sch of Med, Suita, Japan
    Vision Institute, NIDEK, Gamagori, Japan
  • Takaomi Kanda
    Adaptive Machine Systems, Osaka Univ Graduate Sch of Eng, Suita, Japan
  • Yukie Nagai
    Adaptive Machine Systems, Osaka Univ Graduate Sch of Eng, Suita, Japan
  • Takashi Fujikado
    Applied Visual Science, Osaka Univ Graduate Sch of Med, Suita, Japan
  • Footnotes
    Commercial Relationships Hiroyuki Kanda, NIDEK (E); Takaomi Kanda, None; Yukie Nagai, None; Takashi Fujikado, NIDEK (P)
  • Footnotes
    Support None
Investigative Ophthalmology & Visual Science June 2015, Vol.56, 4783. doi:
Hiroyuki Kanda, Takaomi Kanda, Yukie Nagai, Takashi Fujikado; Image processing using a saliency map for a 49-channel retinal prosthesis. Invest. Ophthalmol. Vis. Sci. 2015;56(7):4783.
Abstract

Purpose: Our second-generation suprachoroidal-transretinal stimulation (STS) device comprises a 49-channel active electrode array. To achieve effective prosthetic vision with a limited number of electrodes, it is necessary to develop an optimal image processing strategy. In recent years, a computational saliency-map model (Itti et al., 1998) that detects visually conspicuous locations from primitive image features has been proposed. The purpose of this study is to evaluate the feasibility of a saliency map for image processing in a 49-pixel retinal prosthesis.
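The abstract does not specify how the saliency map was computed; the sketch below illustrates the general Itti-style approach it cites, using center-surround contrast on an intensity channel and a red-green opponency channel (a minimal, assumed simplification; the full model also uses blue-yellow opponency and orientation features, and the sigma values here are arbitrary).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_map(rgb):
    """Minimal Itti-style saliency sketch: center-surround contrast on
    intensity and red-green opponency channels (illustrative only)."""
    rgb = rgb.astype(float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    intensity = (r + g + b) / 3.0
    red_green = r - g                         # red-green opponency channel
    sal = np.zeros(intensity.shape)
    for feat in (intensity, red_green):
        center = gaussian_filter(feat, sigma=1)    # fine scale
        surround = gaussian_filter(feat, sigma=4)  # coarse scale
        cs = np.abs(center - surround)             # center-surround difference
        if cs.max() > 0:
            cs /= cs.max()                         # normalize each feature map
        sal += cs
    return sal / sal.max() if sal.max() > 0 else sal
```

Because the opponency channel responds to color contrast independently of luminance, an equiluminant red target on a green background still produces a saliency peak, whereas a brightness map of the same scene is flat — the situation the Results below describe.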

Methods: We constructed a retinal prosthesis simulator system and compared eye-hand coordination under two different algorithms: a saliency map and a conventional model (brightness map). The simulator system comprised a head-mounted camera, a laptop computer, and a head-mounted display (HMD). Images were captured using the head-mounted camera and converted into 7 × 7-pixel images for projection onto the HMD. We recruited five healthy volunteers aged 21-38 years. All subjects wore the simulator and performed the localization test, in which they were instructed to point at the center of a visual target on a touch panel display placed 40 cm from the subject. The distance from the target center to the position where the subject pointed was recorded as the error distance. The target and background color combinations were randomly changed (black/white, white/black, green/red, and red/green). The localization test was repeated 20 times for each condition.
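The conversion of a camera frame into a 7 × 7-pixel image can be sketched as block averaging followed by quantization to a few brightness levels. This is a minimal assumed implementation — the abstract gives neither the block-pooling rule nor the number of levels, so the `levels=4` choice and mean pooling here are illustrative only.

```python
import numpy as np

def to_phosphene_image(frame, grid=7, levels=4):
    """Downsample a grayscale camera frame to a grid x grid 'phosphene'
    image by block averaging, then quantize to discrete brightness levels.
    (Illustrative sketch; actual simulator parameters are not given.)"""
    h, w = frame.shape
    bh, bw = h // grid, w // grid                    # block size per electrode
    blocks = frame[:bh * grid, :bw * grid].reshape(grid, bh, grid, bw)
    means = blocks.mean(axis=(1, 3))                 # one value per electrode
    quant = np.round(means / 255.0 * (levels - 1))   # discrete stimulation levels
    return quant.astype(int)
```

In the saliency-map condition, the same downsampling would be applied to the saliency values rather than to raw brightness, so each of the 49 electrodes encodes local conspicuity instead of local luminance.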

Results: Significant differences were observed in the mean error distances between the color combinations for the brightness map (p < 0.01, ANOVA). In contrast, no significant differences were observed for the saliency map (p = 0.91, ANOVA). For the green/red and red/green color combinations, the mean error distances for the saliency map were significantly smaller than those for the brightness map (p < 0.05, paired t-test). Notably, when the background and target were of similar brightness, recognition accuracy with the saliency map was better than with the brightness map.

Conclusions: These results suggest that the accuracy of visual object recognition can be improved using a saliency map. These findings demonstrate the feasibility of applying the saliency map for image processing in a 49-channel retinal prosthesis.
