June 2017
Volume 58, Issue 8
Open Access
ARVO Annual Meeting Abstract
Using Sensory Augmentation to Optimize Training Outcomes with Vision Prostheses
Author Affiliations & Notes
  • Lauren N Ayton
    Centre for Eye Research Australia, East Melbourne, Victoria, Australia
    Department of Surgery (Ophthalmology), University of Melbourne, Parkville, Victoria, Australia
  • Lachlan Hamilton
    Bionics Institute, East Melbourne, Victoria, Australia
  • Chris D McCarthy
    Department of Computer Science and Software Engineering, Swinburne University, Hawthorn, Victoria, Australia
  • Matthew A Petoe
    Bionics Institute, East Melbourne, Victoria, Australia
    Department of Medical Bionics, University of Melbourne, Parkville, Victoria, Australia
  • Footnotes
    Commercial Relationships: Lauren Ayton, None; Lachlan Hamilton, None; Chris McCarthy, None; Matthew Petoe, None
    Support: Australian Research Council Special Research Initiative (SRI) in Bionic Vision Science and Technology grant to Bionic Vision Australia (BVA)
Investigative Ophthalmology & Visual Science June 2017, Vol.58, 4763.
      Lauren N Ayton, Lachlan Hamilton, Chris D McCarthy, Matthew A Petoe; Using Sensory Augmentation to Optimize Training Outcomes with Vision Prostheses. Invest. Ophthalmol. Vis. Sci. 2017;58(8):4763.

      © ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Retinal prostheses are currently the only regulatory-approved treatment option for patients with profound vision loss from retinitis pigmentosa. While these devices have shown promising results, user training can be challenging due to the non-intuitive nature of phosphene perception. The aim of this study was to investigate whether the addition of auditory cues, using the Seeing With Sound (SWS) software program, would improve the interpretability of simulated phosphene vision (SPV) similar to that provided by retinal prostheses.

Methods : Two computer algorithms were used to process the input camera image. In the first, the SWS program scanned the image from left to right, converting brightness to loudness, vertical image position to frequency (pitch), and horizontal position to stereo panning, with the result presented audibly via stereo headphones. In the second, the SPV program converted the image to phosphene-like dots for display on head-mounted virtual reality goggles. This SPV program incorporated the retinotopic map of a patient implanted with the Bionic Vision Australia prototype suprachoroidal retinal implant between 2012 and 2014. Forty normally sighted subjects completed two visual tasks: a light localization task from the Basic Assessment of Light and Motion (BaLM) and an optotype recognition task from the Freiburg Acuity and Contrast Test (FrACT), under three conditions presented in random order: 1) SPV alone, 2) SWS alone, or 3) SPV + SWS.
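The image-to-sound mapping described above can be sketched in a few lines. This is a minimal illustrative sketch of a Seeing-With-Sound-style sonification, not the actual SWS implementation: the scan duration, pitch range, and event representation are assumptions chosen for clarity.

```python
import math

# Illustrative sketch of the SWS-style mapping described in the Methods:
# the image is scanned column by column (left to right), pixel brightness
# maps to loudness, vertical position maps to pitch (top = high), and
# horizontal position maps to stereo pan. The frequency range and scan
# duration below are assumed values, not those of the real SWS software.

F_MIN, F_MAX = 500.0, 5000.0   # assumed pitch range in Hz
SCAN_SECONDS = 1.0             # assumed time to sweep the whole image

def sonify(image):
    """Convert a grayscale image (rows of floats in [0, 1], row 0 = top)
    into a list of (onset_s, freq_hz, pan, amplitude) tone events."""
    n_rows = len(image)
    n_cols = len(image[0])
    events = []
    for col in range(n_cols):
        onset = SCAN_SECONDS * col / n_cols                      # left-to-right sweep
        pan = 2.0 * col / (n_cols - 1) - 1.0 if n_cols > 1 else 0.0  # -1 = left, +1 = right
        for row in range(n_rows):
            amp = image[row][col]                                # brightness -> loudness
            if amp <= 0.0:
                continue                                         # dark pixels stay silent
            # Top rows map to high pitch; log spacing keeps intervals musical.
            frac = 1.0 - row / max(n_rows - 1, 1)
            freq = F_MIN * math.exp(frac * math.log(F_MAX / F_MIN))
            events.append((onset, freq, pan, amp))
    return events
```

The resulting event list could be rendered with any additive synthesizer; the key point for training studies is that the mapping is deterministic, so listeners can learn to invert it.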

Results : Subjects reported SPV to be more intuitive than SWS and were able to complete both tasks more quickly in conditions that included SPV. Accuracy on the light localization task was highest in the combined SPV + SWS condition (94.7 ± 5.8%) compared to SPV alone (91.7 ± 9.0%, p = 0.002) and SWS alone (89.0 ± 13.0%, p = 0.001). Response times were significantly faster for both SPV (6.6 ± 3.4 s, p < 0.001) and SPV + SWS (6.7 ± 3.1 s, p < 0.001) than for SWS alone (9.3 ± 4.3 s). Visual acuity was best in the SWS condition (1.95 ± 0.24 logMAR), followed by SPV + SWS (2.04 ± 0.26 logMAR) and SPV alone (2.54 ± 0.07 logMAR).

Conclusions : Results for the combined SPV (visual) + SWS (auditory) condition demonstrate that the addition of auditory cues improved performance with simulated prosthetic vision without significantly slowing response times. Hence, the use of auditory cues may be beneficial in training visual prosthesis recipients.

This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.
