April 2011
Volume 52, Issue 14
ARVO Annual Meeting Abstract
Obstacle Avoidance and Wayfinding using a Computer Vision based Mobility Aid for the Visually Impaired
Author Affiliations & Notes
  • Vivek Pradeep
    Biomedical Engineering,
    University of Southern California, Los Angeles, California
  • Gerard Medioni
    Computer Science,
    University of Southern California, Los Angeles, California
  • James Weiland
    Biomedical Engineering,
    University of Southern California, Los Angeles, California
  • Footnotes
    Commercial Relationships  Vivek Pradeep, University of Southern California (P); Gerard Medioni, University of Southern California (P); James Weiland, University of Southern California (P)
  • Footnotes
    Support  Research to Prevent Blindness, W.M. Keck Foundation; NSF EEC-0310723; USAMRMC-W81XWH-10-2-0076
Investigative Ophthalmology & Visual Science April 2011, Vol.52, 388.

Purpose: A mobility assistive device has been developed that guides visually impaired users along safe paths using tactile cuing. Experiments were conducted with visually impaired subjects to evaluate the feasibility of this device.

Methods: A head-mounted stereo camera acquired 3D data that was processed by a suite of algorithms to locate obstacles and compute a traversable path. The algorithms detected the user's deviation from this safe path, and a tactile interface provided cues to guide the user back onto it. To test the system, 9 subjects with varying visual impairment conditions were recruited at the Braille Institute of Los Angeles. An obstacle course was set up, which the subjects navigated 10 times. Subjects were randomly divided into 3 groups: group 1 used the white cane, group 2 used the device, and group 3 used the cane and device simultaneously. After a month, the subjects in groups 1 and 2 were swapped to ensure that the results were not biased by group composition.
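The processing chain described above (stereo depth → obstacle map → traversable path → corrective tactile cue) can be sketched in miniature as follows. This is an illustrative assumption, not the authors' actual implementation: the 2D occupancy-grid representation, the widest-corridor heuristic, and the left/right/straight cue logic are all placeholders standing in for the unpublished algorithms.

```python
# Illustrative sketch (NOT the authors' implementation): pick the widest
# obstacle-free corridor in a 2D occupancy grid assumed to be derived from
# stereo depth, then emit a left/right/straight cue from the user's heading.

def find_clear_corridor(grid):
    """Return the center column of the widest run of obstacle-free columns.

    grid: list of rows; grid[r][c] == 1 marks an obstacle cell.
    A column counts as clear only if no cell in it contains an obstacle.
    """
    clear = [all(row[c] == 0 for row in grid) for c in range(len(grid[0]))]
    best_start, best_len, start = 0, 0, None
    for c, ok in enumerate(clear + [False]):  # sentinel closes the final run
        if ok and start is None:
            start = c
        elif not ok and start is not None:
            if c - start > best_len:
                best_start, best_len = start, c - start
            start = None
    if best_len == 0:
        return None  # no traversable path found
    return best_start + best_len // 2

def tactile_cue(grid, heading_col, tolerance=1):
    """Map deviation of the user's heading from the clear corridor to a cue."""
    target = find_clear_corridor(grid)
    if target is None:
        return "stop"
    if heading_col < target - tolerance:
        return "right"
    if heading_col > target + tolerance:
        return "left"
    return "straight"

# Toy map: an obstacle occupies columns 0-2; columns 3-6 form the corridor.
grid = [[1, 1, 1, 0, 0, 0, 0],
        [1, 1, 0, 0, 0, 0, 0],
        [0, 1, 1, 0, 0, 0, 0]]
```

In a real system the grid would be refreshed continuously from the stereo depth map and the cue routed to the tactile interface; here it simply demonstrates the path-deviation logic described in the Methods.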

Results: Group 3 (cane and device) had no collisions in any trial; group 1 had 17 collisions in total and group 2 had 7. Thus, fewer collisions occurred when the device was used, and the difference persisted after groups 1 and 2 were swapped. Video recordings were used to create 'heatmaps' visualizing the trajectories adopted by the subjects. These heatmaps showed that white cane users contacted the sides of obstacles and got stuck behind longer objects because of the cane's limited sensing range. However, the average Percentage Preferred Walking Speed (PPWS) measured for the 3 groups across all trials showed that group 1 was fastest, followed by group 3, with group 2 slowest. Performance with the device alone was slow due to limitations of the prototype system and because subjects overcompensated when cues were received. In some cases these delays led to disorientation, which group 3 subjects overcame by also using the cane to maintain a sense of direction.
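PPWS is conventionally defined as a subject's walking speed on the task expressed as a percentage of their preferred (self-selected, unobstructed) walking speed. A minimal computation, assuming both speeds are obtained from timed traversals (the function name and units are illustrative; the abstract does not specify how the measurements were taken):

```python
def ppws(trial_speed_m_s, preferred_speed_m_s):
    """Percentage Preferred Walking Speed: trial speed as a percentage of
    the subject's preferred (self-selected, unobstructed) walking speed."""
    if preferred_speed_m_s <= 0:
        raise ValueError("preferred speed must be positive")
    return 100.0 * trial_speed_m_s / preferred_speed_m_s

# e.g. a subject who prefers 1.2 m/s but covers the course at 0.6 m/s
# scores a PPWS of 50.0
```

A PPWS near 100 indicates the aid imposes little speed penalty, which is why the metric complements the raw collision counts reported above.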

Conclusions: A camera-based device for sensing obstacles in the environment decreased the number of collisions compared with a white cane when used by blind subjects in a mobility task. The current prototype has operating-speed constraints that require the user to move slowly through an environment to allow reliable obstacle detection. Improved technology and additional training may enhance performance and, when the device is used in conjunction with the white cane, potentially assist the visually impaired in navigating safely.

Keywords: image processing • low vision 
