May 2006
Volume 47, Issue 13
ARVO Annual Meeting Abstract  |   May 2006
Conquering the Maze: Virtual Mobility Performance with Simulated Prosthetic Vision
Author Affiliations & Notes
  • L. Wang
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • L. Yang
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • S. Sahajwani
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • G. Dagnelie
    Ophthalmology, Johns Hopkins University, Baltimore, MD
  • Footnotes
    Commercial Relationships  L. Wang, None; L. Yang, None; S. Sahajwani, None; G. Dagnelie, None.
    Support  NIH Grant EY07143-11
Investigative Ophthalmology & Visual Science May 2006, Vol.47, 3687.
L. Wang, L. Yang, S. Sahajwani, G. Dagnelie; Conquering the Maze: Virtual Mobility Performance with Simulated Prosthetic Vision. Invest. Ophthalmol. Vis. Sci. 2006;47(13):3687.

Retinal prostheses in the near future will likely have a small number of electrodes, e.g., fewer than 100. Mobility is a major goal for wearers of such devices. We explored the potential for mobility guided by low-resolution prosthetic vision in a virtual maze.


Prosthetic vision was simulated in a video visor with built-in eye tracking as a grid of 10x6 blurred dots, visible only to the left eye. Ten 10-room mazes were simulated in virtual reality and viewed through the visor; each maze had a different room plan. The rooms were numbered from 1 (entrance) to 10 (exit). Subjects (Ss) were instructed to use a game controller to traverse rooms 1 through 10 in order, which was the only correct route, in minimum time. Three normally sighted Ss participated in the study. Ss performed the task first in the "free-viewing" condition until reaching stable performance, and then in the "gaze-locked" condition. In the free-viewing condition, the dot grid was stationary and Ss could freely scan it using eye movements. In the gaze-locked condition, the dot grid was fixed to the central retina. Ss were tested with several mazes (trials) in one-hour sessions each week; no two adjacent trials used the same maze. Performance was measured with two variables: the Time Ss took to traverse a maze and the Errors Ss made, an error being defined as returning to the previous room.
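The simulation described above can be sketched in code. The abstract specifies only a 10x6 grid of blurred dots, not the exact mapping from the camera image to phosphene brightness, so the patch-averaging and Gaussian-dot rendering below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def simulate_phosphene_grid(image, grid_shape=(6, 10)):
    """Reduce a grayscale image (values in [0, 1]) to a coarse grid of
    phosphene brightness values, one per simulated electrode.

    The patch-averaging rule is an assumption; the abstract does not
    specify how image intensity maps to dot brightness.
    """
    rows, cols = grid_shape
    h, w = image.shape
    grid = np.zeros(grid_shape)
    for r in range(rows):
        for c in range(cols):
            # Average the pixels falling under this electrode's patch.
            patch = image[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols]
            grid[r, c] = patch.mean()
    return grid

def render_phosphenes(grid, dot_size=20, sigma=5.0):
    """Render each grid value as a blurred (Gaussian-profile) dot,
    approximating the 'blurred dots' seen in the visor."""
    rows, cols = grid.shape
    out = np.zeros((rows * dot_size, cols * dot_size))
    y, x = np.mgrid[:dot_size, :dot_size]
    center = (dot_size - 1) / 2
    # One Gaussian dot template, scaled by each electrode's brightness.
    dot = np.exp(-((x - center) ** 2 + (y - center) ** 2) / (2 * sigma ** 2))
    for r in range(rows):
        for c in range(cols):
            out[r * dot_size:(r + 1) * dot_size,
                c * dot_size:(c + 1) * dot_size] = grid[r, c] * dot
    return out
```

In a full simulator, the input image would be the virtual maze rendered from the subject's viewpoint, updated each frame; in the gaze-locked condition the grid would simply be drawn at the tracked gaze position instead of a fixed screen location.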


Ss completed most trials within 30 min. In the free-viewing condition, comparing the last 3 trials to the first 3 (columns 2 & 3, Table), all Ss reduced Time, Errors, and trial-to-trial variability. Time and Errors were positively correlated (column 4, Table). Ss differed in both performance and performance improvement (Table). Two Ss were also tested in the gaze-locked condition. Compared with the free-viewing condition, mean Time was slightly increased in S1 (1048±275 vs. 933±356 sec) but decreased in S2 (304±193 vs. 409±158 sec). Mean Errors decreased in both Ss (S1: 6.3±3 vs. 9.8±8.6; S2: 2±2.4 vs. 7±5.5).


The results suggest that low-resolution prosthetic vision may provide mobility assistance and that such mobility can be improved with practice. Gaze-locking does not appear to be a significant hindrance.

Keywords: low vision • learning • vision and action 
