Abstract
Purpose:
Rehabilitative training for individuals with ultra-low vision (ULV; VA worse than 20/1600) relies heavily on teaching these individuals to make greater use of information from their unimpaired senses. The purpose of this study was to assess the feasibility of using virtual reality (VR) to investigate multisensory cue combination in people with simulated ULV. People with no sensory loss are known to integrate information from multiple senses optimally to maximize perceptual precision, but little is known about the efficiency of cue combination in individuals with late-stage sensory loss.
Methods:
Two normally sighted subjects (S1 and S2) wore Bangerter foils to simulate ULV and completed a spatial localization task (finding a phone on a table) in VR using visual (V; HTC VIVE headset), auditory (A; Valve Steam Spatial Audio delivered via headphones), and/or haptic (H; vibrations from the VIVE controller) cues. In each trial, a stimulus was presented sequentially in two different locations spanning 0 to 36 degrees (500 ms per presentation, with a 500 ms inter-presentation gap for V and A and a 2000 ms gap for H). Subjects reported whether the second stimulus was to the right of, to the left of, or coincident with the first (3AFC). Both subjects completed unimodal trials with V, A, or H cues, with a total of 105 trials per condition. Cumulative Gaussian functions were fitted to each condition to estimate the point of subjective equality (μ) and the uncertainty of the localization estimates (σ).
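As a rough illustration of this fitting step (not the authors' analysis code), a minimal Python sketch using hypothetical response data might look like the following, where the cumulative Gaussian is fitted to the proportion of "right" responses as a function of the offset between the two stimulus locations:

    import numpy as np
    from scipy.stats import norm
    from scipy.optimize import curve_fit

    # Hypothetical data: signed offset (deg) of the second stimulus relative to
    # the first, and the proportion of "right" responses at each offset.
    offsets = np.array([-18.0, -12.0, -6.0, 0.0, 6.0, 12.0, 18.0])
    p_right = np.array([0.05, 0.10, 0.30, 0.55, 0.75, 0.90, 0.95])

    # Cumulative Gaussian psychometric function:
    # mu = point of subjective equality (PSE), sigma = localization uncertainty.
    def cum_gauss(x, mu, sigma):
        return norm.cdf(x, loc=mu, scale=sigma)

    (mu_hat, sigma_hat), _ = curve_fit(cum_gauss, offsets, p_right, p0=[0.0, 5.0])
    print(f"PSE (mu) = {mu_hat:.2f} deg, sigma = {sigma_hat:.2f} deg")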
Results:
Both subjects were able to perform the VR task with V, A, and H cues using off-the-shelf VR equipment. The estimated σ values were V: 2.3 & 4.7; A: 2.6 & 6.6; H: 11.1 & 7.04 degrees, for S1 and S2 respectively. Using maximum likelihood estimation, the predicted optimal relative cue weightings, if all three cues were present, were V: 0.55 & 0.51; A: 0.43 & 0.26; H: 0.02 & 0.23, for S1 and S2 respectively.
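These predicted weightings are consistent with the standard maximum-likelihood (inverse-variance) cue-combination rule, under which each cue is weighted in proportion to its reliability:

\[ w_i = \frac{1/\sigma_i^{2}}{\sum_{j \in \{V,A,H\}} 1/\sigma_j^{2}}, \qquad \text{e.g., for S1:}\quad w_V = \frac{1/2.3^{2}}{1/2.3^{2} + 1/2.6^{2} + 1/11.1^{2}} \approx 0.55. \]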
Conclusions:
These preliminary results confirmed the feasibility of testing cue combination in a VR setting in people with simulated ULV. We found that perceptual uncertainty was lower for visual cues than for auditory or haptic cues in simulated ULV. Further testing is currently being done in bimodal and trimodal conditions to compare predicted cue weights with estimated cue weights and to investigate whether subjects optimally integrate information across multiple senses.
This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.