ARVO Annual Meeting Abstract  |  September 2016
Volume 57, Issue 12
Open Access
Improvement of visually-guided reach for retinal prosthesis wearers using scanning-based camera alignment
Author Affiliations & Notes
  • Michael P Barry
    Biomedical Engineering, Johns Hopkins University, Baltimore, Maryland, United States
  • Gislin Dagnelie
    Ophthalmology, Lions Vision Center, Johns Hopkins University, Baltimore, Maryland, United States
  • Footnotes
    Commercial Relationships: Michael Barry, Second Sight Medical Products (F); Gislin Dagnelie, Second Sight Medical Products (F)
    Support: R01 EY021220, T32 EY007143
Investigative Ophthalmology & Visual Science September 2016, Vol. 57, 1959.

      Michael P Barry, Gislin Dagnelie; Improvement of visually-guided reach for retinal prosthesis wearers using scanning-based camera alignment. Invest. Ophthalmol. Vis. Sci. 2016;57(12):1959.

© ARVO (1962-2015); The Authors (2016-present)

Purpose : To determine whether scanning-based camera alignment can offer retinal prosthesis wearers improved visually-guided reach over that provided by direct-stimulation-based camera alignment.

Methods : Three end-stage retinitis pigmentosa (RP) patients implanted with the Argus® II epiretinal prosthesis participated in this study. The prosthesis stimulates the retina based on a fixed 18° × 11° area selected within the camera's 49° × 38° field of view; the center of this processed area is referred to as the camera alignment position (CAP). CAPs were estimated and tested over a 5-month period in up to 13 testing sessions. All tests used a touchscreen placed in front of the participant, and the camera-screen distance was set before every trial run. For direct-stimulation alignment, the central 4 of the 60 electrodes in a wearer's array were stimulated 8 times, and the CAP was set to the centroid of the positions where the wearer pointed. The participant then touched a white square that appeared 40 times at random locations on a black background on the monitor; typical targets had sides spanning 4° of visual field. The locations of each target and each response were used to calculate the participant's touch accuracy and precision. For scanning-based alignment, 1-4 trial runs of the same localization procedure were used to separately calculate a CAP that reduced these errors. A final test run of the localization task measured the errors associated with the calculated CAP.
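The two alignment computations described above can be sketched in a few lines. This is an illustrative reconstruction, not the study's actual code: the function names are invented, positions are assumed to be in degrees of visual field, and the sign convention for the scanning-based CAP shift (adding the mean touch-error vector to the current CAP) is an assumption.

```python
import numpy as np

def centroid_cap(pointing_responses):
    """Direct-stimulation alignment: set the CAP to the centroid (mean
    position) of repeated pointing responses to stimulation of the
    central electrodes. Positions are (x, y) in degrees."""
    return np.mean(np.asarray(pointing_responses, dtype=float), axis=0)

def localization_errors(targets, touches):
    """Per-trial localization error: Euclidean distance (deg) between
    each target location and the corresponding touch response."""
    t = np.asarray(targets, dtype=float)
    r = np.asarray(touches, dtype=float)
    return np.linalg.norm(r - t, axis=1)

def scanning_cap_correction(targets, touches, current_cap):
    """Scanning-based alignment sketch: estimate the systematic pointing
    bias as the mean touch-error vector over one or more localization
    runs, and shift the CAP to cancel it (sign convention assumed)."""
    bias = np.mean(np.asarray(touches, dtype=float)
                   - np.asarray(targets, dtype=float), axis=0)
    return np.asarray(current_cap, dtype=float) + bias
```

For example, four pointing responses at the corners of a rectangle yield a CAP at its center, and a constant touch offset across a run shifts the CAP by exactly that offset.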

Results : The scanning-based method of alignment reduced localization error for all three subjects by 1-2° on average. The significance of error reductions between the two alignment methods was assessed using heteroscedastic (Welch's) t-tests. One subject's average error fell by 1.6° (31%, n = 22, p < 0.001). The two other subjects participated in fewer tests and did not demonstrate statistically significant improvement, but their average errors were reduced by 1.2° (15%, n = 2, p > 0.05) and 1.1° (10%, n = 5, p > 0.05).
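The heteroscedastic t-test used here corresponds to Welch's t-test, which does not assume equal variances between the two error samples. A minimal sketch of such a comparison follows; the error values below are illustrative placeholders, not the study's data.

```python
from scipy import stats

def compare_alignment_errors(errors_direct, errors_scanning):
    """Welch's (heteroscedastic) two-sample t-test on per-run
    localization errors from the two alignment methods."""
    t, p = stats.ttest_ind(errors_direct, errors_scanning,
                           equal_var=False)
    return t, p
```

A p-value below the chosen threshold (e.g. 0.05) would indicate a statistically significant difference in mean localization error between the two alignment methods.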

Conclusions : Direct-stimulation camera alignment may be satisfactory as a quick method of programming prostheses with external cameras if visually-guided reach is not a priority. For wearers who would notice up to 2° of additional pointing and reaching error, alignment based on camera input and head scanning may be preferable. Data collection and analysis will continue for all subjects.

This is an abstract that was submitted for the 2016 ARVO Annual Meeting, held in Seattle, Wash., May 1-5, 2016.

