May 2007
Volume 48, Issue 13
ARVO Annual Meeting Abstract
Meander Mazes: Eye-Hand Coordination in Simulated Prosthetic Vision
Author Affiliations & Notes
  • V. J. Mueller
    Department of Physics, Ruprecht Karls University, Heidelberg, Germany
  • L. Wang
    Ophthalmology, Johns Hopkins University, Baltimore, Maryland
  • L. A. Ostrin
    Ophthalmology, Johns Hopkins University, Baltimore, Maryland
  • G. D. Barnett
    Ophthalmology, Johns Hopkins University, Baltimore, Maryland
  • G. Dagnelie
    Ophthalmology, Johns Hopkins University, Baltimore, Maryland
  • Footnotes
    Commercial Relationships V.J. Mueller, None; L. Wang, None; L.A. Ostrin, None; G.D. Barnett, None; G. Dagnelie, None.
  • Footnotes
    Support NIH Grant EY 12843
Investigative Ophthalmology & Visual Science May 2007, Vol.48, 2548.

      V. J. Mueller, L. Wang, L. A. Ostrin, G. D. Barnett, G. Dagnelie; Meander Mazes: Eye-Hand Coordination in Simulated Prosthetic Vision. Invest. Ophthalmol. Vis. Sci. 2007;48(13):2548.

      © ARVO (1962-2015); The Authors (2016-present)


Purpose: Retinal prostheses with 50-100 electrodes are being implanted in blind retinitis pigmentosa patients in the US and Germany, and assessment of visually guided performance will be an important aspect of efficacy evaluation. Eye-hand coordination (EHC) is one of the most important forms of visually guided behavior. We sought to develop a test of EHC performance and learning without tactile feedback, through simulation in sighted subjects, and to predict its applicability by manipulating parameters that model the as-yet-unknown properties of prosthetic vision.

Methods: We use a maze-tracing task on a tablet computer to assess EHC performance. The task is processed through a pixelizing filter representing the retinal prosthesis and presented to the sighted subject in a video headset under gaze-locked conditions; manipulating parameters such as phosphene luminance distribution and contrast, phosphene dropout, and dynamic noise should allow prediction of task performance by retinal implant wearers.
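The pixelizing filter described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the grid size, contrast levels, and dropout fractions come from the abstract, while the cell-averaging scheme, noise magnitude, and function signature are assumptions.

```python
import numpy as np

def pixelize(image, grid=(6, 10), contrast=0.85, dropout=0.3,
             noise=False, rng=None):
    """Render an image as a coarse phosphene raster (hypothetical sketch).

    grid      -- phosphene raster size (rows, cols); 6x10 per the abstract
    contrast  -- phosphene contrast scaling (10%, 25%, or 85% in the study)
    dropout   -- fraction of phosphenes disabled (10%, 30%, or 50%)
    noise     -- add dynamic (per-frame) luminance noise when True
    """
    rng = rng or np.random.default_rng()
    rows, cols = grid
    h, w = image.shape
    # Mean luminance of each grid cell -> one phosphene sample per cell.
    crop = image[:h // rows * rows, :w // cols * cols]
    samples = crop.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    samples = samples * contrast                 # contrast attenuation
    alive = rng.random(grid) >= dropout          # random phosphene dropout
    samples = np.where(alive, samples, 0.0)
    if noise:                                    # dynamic per-frame noise
        samples = np.clip(samples + rng.normal(0, 0.05, grid), 0.0, 1.0)
    return samples

# Example: a vertical bright "maze floor" band on a dark background.
img = np.zeros((60, 100))
img[:, 40:60] = 1.0
frame = pixelize(img, contrast=0.25, dropout=0.1)
print(frame.shape)   # (6, 10)
```

In a real simulator each camera frame would pass through such a filter before display in the headset; phosphene blurring (the 2° dots) would be applied at the rendering stage.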

Results: We combined existing prosthetic vision simulation hardware and software with a revision of maze-tracing software originally designed for low vision patients. The maze is presented through a 6x10 raster of 2° blurred dots, with varying contrast (10%, 25%, 85%), dropout (10%, 30%, 50%), and dynamic noise (off, on). The maze is a light band (floor) on a dark background field, connecting circular dots that mark the start and end points. Task difficulty is manipulated by varying floor width and length, the number and sharpness of turns, and floor-to-background contrast. The subject views the maze in pixelized form through a camera on the head-mounted display and traces it with a fingertip touching the screen; the tablet computer records position and time, and the trace is made visible to the subject by inverting the touched screen elements. Tracing accuracy (total area enveloped by traces outside the floor) and speed are used as outcome measures.
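The accuracy outcome measure, total trace area falling outside the maze floor, can be computed from the recorded touch samples roughly as below. This is a hedged sketch: the touch-footprint size, sampling, and function names are assumptions, not details from the abstract.

```python
import numpy as np

def tracing_error(trace, floor_mask, pen_radius=2):
    """Hypothetical outcome measure: area (in screen cells) swept by the
    finger trace that falls outside the maze floor.

    trace      -- sequence of (row, col) touch samples from the tablet
    floor_mask -- boolean array, True where the light maze floor lies
    pen_radius -- half-width of the stamped touch footprint (assumption)
    """
    touched = np.zeros_like(floor_mask, dtype=bool)
    h, w = floor_mask.shape
    for r, c in trace:
        r0, r1 = max(r - pen_radius, 0), min(r + pen_radius + 1, h)
        c0, c1 = max(c - pen_radius, 0), min(c + pen_radius + 1, w)
        touched[r0:r1, c0:c1] = True   # stamp a square footprint per sample
    # Error area: touched cells that lie off the floor.
    return int(np.count_nonzero(touched & ~floor_mask))

# Example: a straight horizontal floor band and a trace that stays on it.
floor = np.zeros((20, 20), dtype=bool)
floor[8:12, :] = True
on_floor = [(10, c) for c in range(20)]
print(tracing_error(on_floor, floor, pen_radius=1))   # 0
```

Speed, the second outcome measure, would follow directly from the timestamps the tablet records with each position sample.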

Conclusions: We are testing the newly completed system in subjects highly practiced in simulated prosthetic vision. Our initial evaluation shows that the design meets the intended requirements for EHC assessment with severely degraded visual input similar to that anticipated for retinal implant wearers. The integration of prosthetic vision simulation and maze tracing provides a novel approach to low vision rehabilitation and performance evaluation for retinal implant wearers.

Keywords: retina • shape, form, contour, object perception • low vision 
