June 2023
Volume 64, Issue 8
Open Access
ARVO Annual Meeting Abstract  |   June 2023
Multisensory Cues in Simulated Ultra Low Vision
Author Affiliations & Notes
  • Arathy Kartha
    Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States
  • Roksana Sadeghi
    Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States
  • Ravnit Kaur Singh
    Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States
  • Jennifer Campos
    Toronto Rehabilitation Institute, Toronto, Ontario, Canada
    Department of Psychology, University of Toronto, Toronto, Ontario, Canada
  • Ione Fine
    Department of Psychology, University of Washington, Seattle, Washington, United States
  • Gislin Dagnelie
    Ophthalmology, Johns Hopkins University School of Medicine, Baltimore, Maryland, United States
  • Footnotes
    Commercial Relationships: Arathy Kartha, None; Roksana Sadeghi, None; Ravnit Singh, None; Jennifer Campos, None; Ione Fine, None; Gislin Dagnelie, None
  • Footnotes
    Support: K99EY033031 to AK; Research to Prevent Blindness to GD
Investigative Ophthalmology & Visual Science June 2023, Vol.64, 2807.
Abstract

Purpose : Rehabilitative training for individuals with ultra-low vision (ULV; visual acuity worse than 20/1600) relies heavily on teaching these individuals to draw more on information from their unimpaired senses. The purpose of this study was to assess the feasibility of using virtual reality (VR) to investigate multisensory cue combination in people with simulated ULV. People with no sensory loss are known to integrate information from multiple senses optimally to maximize perceptual precision, but little is known about the efficiency of cue combination in individuals with late-stage sensory loss.

Methods : Two normally sighted subjects (S1 & S2) wore Bangerter foils to simulate ULV and completed a spatial localization task (finding a phone on a table) in VR using visual (V; HTC VIVE headset), auditory (A; Valve Steam Spatial Audio delivered via headphones), and/or haptic (H; vibrations from the VIVE controller) cues. In each trial a stimulus was presented sequentially (500 ms per presentation, with a 500 ms inter-presentation gap for V and A and a 2000 ms gap for H) in two different locations ranging between 0 and 36 degrees. Subjects were asked to report whether the second stimulus was to the right of, to the left of, or coincident with the first one (3AFC). Both subjects completed unimodal trials with V, A, or H cues, with 105 trials per condition. A cumulative Gaussian function was fitted to the data from each condition to estimate the point of subjective equality (μ) and the uncertainty of the localization estimates (σ).
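
As an illustration of the psychometric analysis described above, the short Python sketch below fits a cumulative Gaussian to the proportion of "right" responses as a function of the signed offset between the two presentations, yielding the point of subjective equality (μ) and the localization uncertainty (σ). The offsets, response proportions, and fitting routine are hypothetical stand-ins, not the authors' actual data or analysis code.

import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def cum_gauss(offset_deg, mu, sigma):
    # Probability of reporting "second stimulus to the right" at a signed offset (deg)
    return norm.cdf(offset_deg, loc=mu, scale=sigma)

# Hypothetical signed offsets (deg) and proportions of "right" responses at each offset
offsets = np.array([-18.0, -9.0, -4.5, 0.0, 4.5, 9.0, 18.0])
p_right = np.array([0.02, 0.10, 0.30, 0.50, 0.72, 0.90, 0.98])

(mu_hat, sigma_hat), _ = curve_fit(cum_gauss, offsets, p_right, p0=[0.0, 5.0])
print(f"PSE (mu) = {mu_hat:.2f} deg, uncertainty (sigma) = {sigma_hat:.2f} deg")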

Results : Both subjects were able to perform the VR task with V, A, and H cues using off-the-shelf VR equipment. The estimated σ values were V: 2.3 & 4.7; A: 2.6 & 6.6; H: 11.1 & 7.04 degrees, for S1 and S2 respectively. Using maximum likelihood estimation, the predicted optimal relative cue weightings, if all three cues were present, were V: 0.55 & 0.51; A: 0.43 & 0.26; H: 0.02 & 0.23, for S1 and S2 respectively.
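
Under the standard maximum-likelihood cue-combination model, the optimal weight assigned to each cue is proportional to its inverse variance (1/σ²), normalized across cues. The minimal Python sketch below recovers the relative weightings reported above from the per-cue σ values; the code is purely illustrative and not part of the study.

def optimal_weights(sigmas):
    # MLE-optimal weights: inverse variance of each cue, normalized to sum to 1
    inv_var = {cue: 1.0 / sd ** 2 for cue, sd in sigmas.items()}
    total = sum(inv_var.values())
    return {cue: iv / total for cue, iv in inv_var.items()}

# σ values (deg) reported above for S1 and S2
for subj, s in {"S1": {"V": 2.3, "A": 2.6, "H": 11.1},
                "S2": {"V": 4.7, "A": 6.6, "H": 7.04}}.items():
    print(subj, {cue: round(w, 2) for cue, w in optimal_weights(s).items()})
# Prints S1: V 0.55, A 0.43, H 0.02 and S2: V 0.51, A 0.26, H 0.23, matching the values above.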

Conclusions : These preliminary results confirmed the feasibility of testing cue combination in a VR setting in people with simulated ULV. Perceptual uncertainty was lower for visual cues than for auditory or haptic cues in simulated ULV. Further testing is currently underway in bimodal and trimodal conditions to compare predicted cue weights with empirically estimated cue weights and to investigate whether subjects integrate information optimally across multiple senses.

This abstract was presented at the 2023 ARVO Annual Meeting, held in New Orleans, LA, April 23-27, 2023.
