ARVO Annual Meeting Abstract  |   March 2012
Evaluating Depth-based Visual Representations For Mobility In Simulated Prosthetic Vision
Author Affiliations & Notes
  • Nick M. Barnes
    Canberra Research Lab, NICTA, Canberra, Australia
    Engineering, The Australian National University, Canberra, Australia
  • Janine G. Walker
    Canberra Research Lab, NICTA, Canberra, Australia
    Centre for Mental Health Research, The Australian National University, Canberra, Australia
  • Chris McCarthy
    Canberra Research Lab, NICTA, Canberra, Australia
  • Viorica Botea
    Canberra Research Lab, NICTA, Canberra, Australia
  • Adele F. Scott
    Canberra Research Lab, NICTA, Canberra, Australia
  • Hugh Dennet
    Canberra Research Lab, NICTA, Canberra, Australia
    Psychology, The Australian National University, Canberra, Australia
  • Paulette Lieby
    Canberra Research Lab, NICTA, Canberra, Australia
    Engineering, The Australian National University, Canberra, Australia
  • Footnotes
    Commercial Relationships  Nick M. Barnes, WO/2011/041842 (P); Janine G. Walker, None; Chris McCarthy, None; Viorica Botea, None; Adele F. Scott, None; Hugh Dennet, None; Paulette Lieby, None
    Support  Bionic Vision Australia (ARC SRI on Bionic Vision Science and Technology), NICTA (DBCDE, Australian Govt, and ARC), NH&MRC Capacity Building Grant #418020
Investigative Ophthalmology & Visual Science, March 2012, Vol. 53, 5550.
Nick M. Barnes, Janine G. Walker, Chris McCarthy, Viorica Botea, Adele F. Scott, Hugh Dennet, Paulette Lieby; Evaluating Depth-based Visual Representations For Mobility In Simulated Prosthetic Vision. Invest. Ophthalmol. Vis. Sci. 2012;53(14):5550.

Abstract

Purpose: Safe and efficient mobility is an important challenge for prosthetic vision devices. To date, research into possible visual representations has largely employed intensity-based approaches that represent luminance. However, depth-based representations may offer advantages: robustness to low contrast, and perception of the distance to unfamiliar overhanging obstacles from a single view. We investigate whether a depth-based representation for mobility has advantages over intensity-based representations in a controlled high-contrast environment, a setting that favors intensity-based visual representations.

Methods: Eleven normally sighted participants (20/20 acuity, Pelli-Robson >= 1.95) traversed a controlled high-contrast indoor mobility course wearing a mobile prosthetic vision simulator, with phosphenes displayed centrally and identically to both eyes. We compared a depth-based representation (Depth; 35x30 phosphenes, 16 distinguishable levels of phosphene brightness) with two intensity-based representations: high dynamic range Intensity (35x30 phosphenes, 64 levels) and Degraded (6x4 phosphenes, 2 levels). The study used a double-blind randomized factorial design, with visual representation (VR), obstacle placement, and session as factors.
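As a rough illustration (not the study's implementation), rendering an intensity image or a depth map into such a phosphene representation amounts to downsampling to the phosphene grid and quantizing brightness; a minimal sketch, assuming block-average downsampling and uniform quantization:

    import numpy as np

    def phosphene_render(image, grid=(30, 35), levels=16):
        # Block-average an input map (intensity or depth, scaled to [0, 1])
        # down to one value per phosphene, then quantize to the available
        # brightness levels (e.g. 16 for Depth, 64 for Intensity).
        gh, gw = grid
        h, w = image.shape
        trimmed = image[: h - h % gh, : w - w % gw]  # make dims divisible
        th, tw = trimmed.shape
        blocks = trimmed.reshape(gh, th // gh, gw, tw // gw).mean(axis=(1, 3))
        return np.round(blocks * (levels - 1)) / (levels - 1)

    # Hypothetical usage: a 480x640 depth map rendered at 35x30 phosphenes.
    phosphenes = phosphene_render(np.random.rand(480, 640), grid=(30, 35), levels=16)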

Results: In the test phase, mean percentage of preferred walking speed (PPWS) exceeded 43.5% for both Intensity and Depth. Across all sessions, Intensity (n=335) had significantly higher PPWS than Depth (n=331; p<0.001). Both Depth and Intensity were faster than Degraded (n=55; mean 20.25% PPWS; p<0.001). There was an interaction effect between VR and obstacle placement (p=0.005): for Intensity, PPWS dropped significantly when obstacles were introduced (p<0.001), whereas PPWS did not change significantly for Depth. PPWS improved significantly over sessions (p<0.001) for both Depth and Intensity.
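PPWS expresses speed through the course as a percentage of each participant's baseline preferred walking speed, so lower values indicate greater hesitation. A minimal computation, with made-up speeds for illustration:

    def ppws(course_speed, preferred_speed):
        # Percentage of preferred walking speed: course speed relative
        # to the participant's unobstructed baseline, as a percentage.
        return 100.0 * course_speed / preferred_speed

    # Hypothetical example: 0.52 m/s on the course against a 1.20 m/s
    # preferred speed gives ~43.3% PPWS.
    print(ppws(0.52, 1.20))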

Conclusions: Both Depth and Intensity were effective for navigation. With Depth, participants were not significantly slowed by overhanging obstacles, whereas with Intensity they were, despite the high contrast and greater dynamic range. Depth shows promise as an effective representation for mobility with prosthetic vision devices.

Keywords: image processing • space and scene perception 