July 2019
Volume 60, Issue 9
Open Access
ARVO Annual Meeting Abstract | July 2019
The Impact of Vision Status on Spatial Localization
Author Affiliations & Notes
  • Yingzi Xiong
    University of Minnesota, Minneapolis, Minnesota, United States
  • Douglas Addleman
    University of Minnesota, Minneapolis, Minnesota, United States
  • Peggy Nelson
    University of Minnesota, Minneapolis, Minnesota, United States
  • Gordon E Legge
    University of Minnesota, Minneapolis, Minnesota, United States
  • Footnotes
    Commercial Relationships: Yingzi Xiong, None; Douglas Addleman, None; Peggy Nelson, None; Gordon Legge, None
    Support: UMN CATSS Grant UMF0021212, NIH Grant EY002934, NSF Grant DGE-1734815
Investigative Ophthalmology & Visual Science, July 2019, Vol. 60, 1050.

Yingzi Xiong, Douglas Addleman, Peggy Nelson, Gordon E Legge; The Impact of Vision Status on Spatial Localization. Invest. Ophthalmol. Vis. Sci. 2019;60(9):1050.

© ARVO (1962-2015); The Authors (2016-present)

Abstract

Purpose : Vision typically dominates when localizing targets, such as cars or people, that provide both auditory and visual cues. However, the role that impaired vision plays in spatial localization for people with low vision is largely unknown. We investigated this question by comparing spatial localization of auditory targets in three conditions with different extents of visual involvement: no vision (blindfolded), visual context only (i.e., eyes open but no visual cue at the location of the auditory target), and multimodal, with both auditory and visual cues at the location of the target. We included low-vision subjects with both normal and impaired hearing because of our broader interest in dual-sensory loss.

Methods : Three groups completed a spatial localization task: a central vision loss group (CVL, N = 5), a dual sensory loss group with combined CVL and hearing loss (DSL, N = 8), and age-matched healthy controls (N = 12). Targets were auditory pink noise (0.2 kHz to 8 kHz) and/or visual white disks (subtending 3°). Targets were presented for 200 ms from one of 17 possible locations along the horizontal plane (10° steps, from 90° left to 90° right). Subjects verbally estimated target direction (e.g., “left, 25 degrees”) in four conditions: blindfolded auditory cue; auditory cue with visual context (but with no specific visual cue for the target); visual cue but no auditory cue; and multimodal, with both visual and auditory cues. Accuracy was calculated as the unsigned difference between reported and actual target locations. Variability was calculated as the standard deviation of responses for stimuli at each location.
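
A minimal sketch of how these two measures could be computed, assuming trial data stored as paired lists of actual and reported directions in degrees (the function and variable names below are hypothetical illustrations, not the authors' analysis code): accuracy is the mean unsigned difference between reported and actual locations, and variability is the standard deviation of responses at each target location.

    import numpy as np

    def localization_metrics(actual_deg, reported_deg):
        # One entry per trial, in degrees (negative = left, positive = right).
        actual = np.asarray(actual_deg, dtype=float)
        reported = np.asarray(reported_deg, dtype=float)

        # Accuracy: unsigned difference between reported and actual
        # target locations, averaged over trials.
        accuracy = np.mean(np.abs(reported - actual))

        # Variability: standard deviation of responses for stimuli
        # presented at each target location.
        variability = {loc: np.std(reported[actual == loc], ddof=1)
                       for loc in np.unique(actual)}
        return accuracy, variability

    # Hypothetical example: three trials at -30 degrees, three at +50 degrees.
    accuracy, variability = localization_metrics(
        [-30, -30, -30, 50, 50, 50],
        [-25, -35, -20, 55, 45, 60])
    print(accuracy)     # mean unsigned error in degrees
    print(variability)  # per-location standard deviation of responses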

Results : Performance in the blindfolded condition served as the baseline. Visual context improved accuracy and reduced variability in controls; it also reduced variability in the CVL group and improved accuracy in the DSL group. When localizing visual or multimodal stimuli, all subjects performed significantly better than baseline, with no significant differences in accuracy or variability across groups.

Conclusions : When localizing an auditory target, both visual context and visual cues at the location of the target can improve spatial localization abilities in people with visual impairment. These results indicate the value of residual vision in supplementing auditory cues in spatial localization.

This abstract was presented at the 2019 ARVO Annual Meeting, held in Vancouver, Canada, April 28 - May 2, 2019.
