ARVO Annual Meeting Abstract  |   July 2018
Motion parallax improves recognition of fixated object with cluttered background in simulated prosthetic vision
Author Affiliations & Notes
  • Kassandra R. Lee
    Ophthalmology, Schepens Eye Research Institute, Boston, Massachusetts, United States
  • Cheng Qiu
    Psychology, University of Pennsylvania, Philadelphia, Pennsylvania, United States
  • Jae-Hyun Jung
    Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States
    Ophthalmology, Schepens Eye Research Institute, Boston, Massachusetts, United States
  • Eli Peli
    Ophthalmology, Harvard Medical School, Boston, Massachusetts, United States
    Ophthalmology, Schepens Eye Research Institute, Boston, Massachusetts, United States
Investigative Ophthalmology & Visual Science July 2018, Vol.59, 3893. doi:
      Kassandra R. Lee, Cheng Qiu, Jae-Hyun Jung, Eli Peli; Motion parallax improves recognition of fixated object with cluttered background in simulated prosthetic vision. Invest. Ophthalmol. Vis. Sci. 2018;59(9):3893.
Abstract

Purpose : Video-based visual prostheses are being developed to assist blind individuals. Due to their low resolution, low dynamic range, and small field of view (FoV), their efficacy for object recognition is limited, especially in the presence of background clutter. We propose an imaging approach that enhances motion parallax cues, which may help distinguish an object of interest (OI) from the background via de-cluttering. While users move their head laterally to induce motion parallax, the OI remains fixed at the center of the FoV. This mimics fixation on the OI in normal vision via the vestibulo-ocular reflex, which is otherwise lacking in visual prostheses with head-mounted cameras. We used simulated prosthetic vision of a fixated OI to test recognition performance when the background motion induced by head movements was either correlated or uncorrelated with those movements.

Methods : Images (20×20 resolution, 24° FoV) were captured from 9 viewpoints using a prosthetic simulation of the BrainPort. All viewpoints were centered on the OI to mimic fixation on the OI in natural vision. Object recognition was tested with 60 normally sighted subjects. Experimental conditions (2×3) crossed background (with or without clutter) with object viewing condition (static single viewpoint; 9 viewpoints displayed corresponding to subjects' lateral head positions; and 9 viewpoints displayed randomly). Subjects were instructed to use lateral head movements to view the OI from different viewpoints. The 35 objects were each displayed once per subject in the Oculus Rift head-mounted display, which tracked subjects' head movements and displayed the corresponding images. Recognition performance was evaluated with a three-way ANOVA (2 background conditions × 3 viewing conditions × 35 objects).
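The head-tracked viewing condition can be sketched roughly as follows: the tracked lateral head position is quantized into one of the 9 pre-captured viewpoint images centered on the OI. This is a minimal illustration, not the authors' implementation; the head-position range and function names are assumptions.

```python
# Sketch: map a tracked lateral head position to one of 9 pre-captured
# viewpoint images centered on the object of interest (OI).

N_VIEWPOINTS = 9
HEAD_RANGE_MM = (-40.0, 40.0)  # hypothetical lateral travel of the head

def viewpoint_index(head_x_mm: float) -> int:
    """Quantize a lateral head position (mm) into a viewpoint index 0..8."""
    lo, hi = HEAD_RANGE_MM
    # Clamp to the tracked range, then scale to [0, 1).
    t = (min(max(head_x_mm, lo), hi) - lo) / (hi - lo)
    return min(int(t * N_VIEWPOINTS), N_VIEWPOINTS - 1)

# A centered head position selects the middle viewpoint (index 4);
# the extremes select indices 0 and 8.
```

In the random-viewpoint control condition, the index would instead be drawn at random on each frame, decoupling the displayed viewpoint from head position while preserving the same set of views.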

Results : Without background clutter, the average recognition rate was about 50% across all viewing conditions. With background clutter, performance dropped to 14% in the static condition but improved in both motion parallax conditions: 26% when viewpoints corresponded to head movement (p=0.007) and 24% with random, non-corresponding viewpoints (p=0.043); both, however, remained below clutter-free performance.

Conclusions : Background de-cluttering by motion parallax cues, rather than the coherent multiple viewpoints per se, improved object recognition. Additional de-cluttering by the imaging system may improve recognition further.

This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.
