Abstract
Purpose:
Retinal prostheses are currently the only regulatory-approved treatment option for patients with profound vision loss from retinitis pigmentosa. While these devices have shown exciting results, user training can be challenging due to the non-intuitive nature of phosphene perception. The aim of this study was to investigate whether the addition of auditory cues, provided by the Seeing With Sound (SWS) software program, would improve the interpretability of simulated phosphene vision (SPV) similar to that provided by retinal prostheses.
Methods:
Two computer algorithms were used to process the input camera image. In the first, the SWS program scanned the image from left to right, converting brightness to loudness, vertical image position to frequency (pitch), and horizontal position to stereo panning, with the output presented audibly via stereo headphones. In the second, the SPV program converted the image to phosphene-like dots for display on head-mounted virtual reality goggles. This SPV program incorporated the retinotopic map of a patient implanted with the Bionic Vision Australia prototype suprachoroidal retinal implant between 2012 and 2014. Forty normally-sighted subjects completed two visual tasks: a light localization task from the Basic Assessment of Light and Motion (BaLM) and an optotype recognition task from the Freiburg Acuity and Contrast Test (FrACT), with 1) SPV alone, 2) SWS alone, or 3) SPV + SWS, presented in random order.
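To make the two image-processing steps concrete, the sketch below shows one plausible Python implementation of each mapping, assuming a grayscale image stored as a NumPy array with values in [0, 1]. The function names, the 500-5000 Hz frequency range, the one-second scan duration, and the coarse phosphene grid are illustrative assumptions only; they are not taken from the SWS software or from the Bionic Vision Australia implant's retinotopic map.

import numpy as np

def sonify_image(image, duration_s=1.0, sample_rate=44100,
                 f_min=500.0, f_max=5000.0):
    # Left-to-right image sonification: brightness -> loudness,
    # vertical position -> pitch (top of image = high pitch),
    # horizontal position -> stereo pan (left of image = left ear).
    # Returns a (samples, 2) stereo buffer. Parameter values are assumptions.
    rows, cols = image.shape
    samples_per_col = int(duration_s * sample_rate / cols)
    freqs = np.linspace(f_max, f_min, rows)        # one sine oscillator per row
    t = np.arange(samples_per_col) / sample_rate
    out = np.zeros((samples_per_col * cols, 2))
    for c in range(cols):                          # scan columns left to right
        pan = c / max(cols - 1, 1)                 # 0 = full left, 1 = full right
        column = image[:, c]
        # Sum of sinusoids, each weighted by the brightness of its row.
        chord = (column[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        chord /= max(rows, 1)                      # keep amplitude bounded
        s = c * samples_per_col
        out[s:s + samples_per_col, 0] = chord * (1.0 - pan)   # left channel
        out[s:s + samples_per_col, 1] = chord * pan           # right channel
    return out

def phosphene_map(image, grid=(7, 5)):
    # Simulated phosphene vision: reduce the image to a coarse grid of
    # brightness values, one per simulated phosphene. The 7 x 5 grid is an
    # illustrative assumption, not the implanted electrode layout.
    rows, cols = image.shape
    r = np.linspace(0, rows, grid[0] + 1).astype(int)
    c = np.linspace(0, cols, grid[1] + 1).astype(int)
    return np.array([[image[r[i]:r[i + 1], c[j]:c[j + 1]].mean()
                      for j in range(grid[1])]
                     for i in range(grid[0])])

In the study itself, phosphene positions followed the implanted patient's retinotopic map rather than a regular grid, and the audio was generated by the SWS program; the sketch is intended only to illustrate the brightness-to-loudness, height-to-pitch, and left-right-to-pan mapping described above.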
Results:
Subjects reported SPV to be more intuitive than SWS and were able to complete both tasks more quickly in conditions that included SPV. Accuracy on the light localization task was highest in the combined SPV + SWS condition (94.7 ± 5.8%) compared to SPV alone (91.7 ± 9.0%, p = 0.002) and SWS alone (89.0 ± 13.0%, p = 0.001). Response times were significantly faster for both SPV alone (6.6 ± 3.4 s, p < 0.001) and SPV + SWS (6.7 ± 3.1 s, p < 0.001) than for SWS alone (9.3 ± 4.3 s). Visual acuity was best in the SWS alone condition (1.95 ± 0.24 logMAR), followed by the SPV + SWS condition (2.04 ± 0.26 logMAR) and SPV alone (2.54 ± 0.07 logMAR).
Conclusions:
Results for the combined SPV (visual) + SWS (auditory) condition demonstrate that the addition of auditory cues improved performance with simulated prosthetic vision without significantly slowing response times. Hence, the use of auditory cues may be beneficial in training visual prosthesis recipients.
This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.