ARVO Annual Meeting Abstract  |   April 2014
Assessing improvements in perception afforded by retinal prostheses in multisensory tasks
Author Affiliations & Notes
  • Sara Garcia
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
  • Karin Petrini
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
  • Lyndon Da Cruz
    Vitreoretinal Surgery, Moorfields Eye Hospital, London, United Kingdom
  • Gary S Rubin
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
  • Marko Nardini
    Visual Neuroscience, UCL Institute of Ophthalmology, London, United Kingdom
    Psychology, Durham University, Durham, United Kingdom
  • Footnotes
    Commercial Relationships: Sara Garcia, None; Karin Petrini, None; Lyndon Da Cruz, None; Gary Rubin, None; Marko Nardini, None
    Support: None
Investigative Ophthalmology & Visual Science, April 2014, Vol. 55, Issue 13, 5962.
Abstract

Purpose: To assess whether prosthetic vision can be combined with other sensory information to enhance performance on multisensory tasks.

Methods: Five participants (four with retinitis pigmentosa, one with choroideremia; aged 49-76 years) who had been implanted with Second Sight’s Argus II retinal prosthesis in 2008-09 took part in this study. All had some bare light perception, but no measurable visual acuity, before surgery. Following implantation, two scored reliably on the visual acuity scale with the system on (2.8 and 2.9 logMAR). Participants completed three tasks: (1) a size discrimination task using vision, touch, or both; (2) a reaction time task with visual, auditory, or visual-auditory (flash-beep) stimuli; (3) a navigation task using visual and/or self-motion information to reproduce a walked path or to complete a walked triangle.
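
For reference (the abstract does not state the criterion used to define sensory combination): the standard benchmark in discrimination tasks is the maximum-likelihood prediction, under which an observer optimally combining visual and haptic size estimates with variances \sigma_V^2 and \sigma_H^2 should achieve a bimodal variance

\[ \sigma_{VH}^{2} \;=\; \frac{\sigma_{V}^{2}\,\sigma_{H}^{2}}{\sigma_{V}^{2} + \sigma_{H}^{2}} \;\le\; \min\!\left(\sigma_{V}^{2},\, \sigma_{H}^{2}\right), \]

i.e. a visual-haptic threshold at or below the better unisensory threshold.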

Results: (1) No participant showed sensory combination: all five had significantly lower size discrimination thresholds for haptic-only than for visual-only judgments, and visual-haptic (VH) performance did not differ significantly from haptic-only (H) performance [mean correct: VH = 87%; H = 85.5%]. (2) Two of the five participants showed sensory combination, responding faster to visual-auditory (VA) than to unisensory (A or V) stimuli [A − VA > 28 ms; V − VA > 94 ms]. Two of the three participants who did not show this multisensory advantage judged flash-beep stimuli as simultaneous even when flashes preceded beeps by more than 144 ms (compared with 34-36 ms for the other participants). (3) Three of the four participants tested showed sensory combination on the navigation tasks, but this worsened performance. During path reproduction, two ended significantly further from the correct final position with vision (V) than without (NV) [mean error: V = 0.91 m; NV = 0.44 m]. For triangle completion, one participant's end position was significantly further away with vision [mean error: V = 1.01 m; NV = 0.49 m].
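
For the reaction time comparison, a common criterion for genuine multisensory combination (an assumption here; the abstract reports only unisensory-bimodal RT differences) is violation of the race-model inequality, i.e. at some response time t,

\[ P(\mathrm{RT}_{VA} \le t) \;>\; P(\mathrm{RT}_{V} \le t) + P(\mathrm{RT}_{A} \le t), \]

meaning bimodal responses are faster than two independent unisensory processes racing to trigger the response could produce.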

Conclusions: These tests indicate that prosthetic vision can be combined with other sensory information to enable faster responses in reaction time tasks. However, prosthetic vision did not improve accuracy in spatial tasks. This may be due to differences between the device's spatial and temporal resolution, and to a need to learn new spatial mappings. Training participants to understand how prosthetic vision is informative about the external world may lead to perceptual improvements in the longer term. We are currently assessing the effects of training on navigation task performance.

Keywords: 584 low vision • 650 plasticity • 641 perception  