April 2014
Volume 55, Issue 13
ARVO Annual Meeting Abstract  |   April 2014
The Effectiveness of Prosthetic Fixation for Recognizing Faces in Natural Scenes
Author Affiliations & Notes
  • Janine Walker
    Computer Vision, NICTA, Canberra, ACT, Australia
    Centre for Mental Health Research, Australian National University, Canberra, ACT, Australia
  • Xuming He
    Computer Vision, NICTA, Canberra, ACT, Australia
    College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia
  • Hanxi Li
    Computer Vision, NICTA, Canberra, ACT, Australia
    College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia
  • Nick Barnes
    Computer Vision, NICTA, Canberra, ACT, Australia
    College of Engineering and Computer Science, Australian National University, Canberra, ACT, Australia
  • Footnotes
    Commercial Relationships: Janine Walker, None; Xuming He, None; Hanxi Li, None; Nick Barnes, NICTA (P)
    Support: None
Investigative Ophthalmology & Visual Science April 2014, Vol.55, 1842. doi:
Janine Walker, Xuming He, Hanxi Li, Nick Barnes; The Effectiveness of Prosthetic Fixation for Recognizing Faces in Natural Scenes. Invest. Ophthalmol. Vis. Sci. 2014;55(13):1842.
Abstract

Purpose: Human face recognition may be impaired in low vision, particularly in conditions affecting the fovea (e.g., macular degeneration), which is critical to high-acuity tasks. Computer vision processing may enable effective face recognition under the low-resolution conditions experienced by recipients of retinal implants and other vision-assistive devices. We evaluated the effectiveness of a novel automated Prosthetic Fixation (PF) vision processing method, based on an efficient face detection, localization, and tracking algorithm, compared with Conventional Object Detection + Zooming (CDZ) and No Fixation (NF; viewing the whole scene) for recognizing faces in natural scenes.

Methods: Ten normally sighted adults each completed ≥ 68 trials in a single experimental session. Participants were shown a reference set of 4 gray-scale face images, followed by one of the faces in an altered pose and background on a computer screen. Test images were randomly selected from video sequences (≥ 50 frames) showing an individual talking in the foreground of a natural environment. Participants selected the matching reference face using a keypad. Raw images were processed to simulate prosthetic vision using a 16x16 phosphene grid with 8 discernible brightness levels. The study had a randomized design, with participants masked to the presentation order of the vision processing methods (PF, CDZ, NF) and the face reference sets. Outcome measures were mean recognition accuracy (% correct), response time, and task ease [(%correct-25%)/tcorrect].
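
The two quantitative elements of the Methods can be sketched in code. This is a minimal illustration, not the authors' implementation: the abstract does not specify how raw pixels were mapped to phosphenes, so block-averaging is an assumption, and the function names are hypothetical. The task-ease formula follows the Methods directly, with 25% as chance performance on a 4-alternative choice.

```python
import numpy as np

def simulate_phosphenes(image, grid=(16, 16), levels=8):
    """Crude simulated prosthetic vision: block-average a grayscale image
    (values in [0, 1]) onto a 16x16 phosphene grid, then quantize each
    phosphene to one of 8 discrete brightness levels.
    NOTE: block-averaging is an assumption; the abstract does not state
    the exact rendering used."""
    h, w = image.shape
    gh, gw = grid
    # Crop so the image tiles evenly, then average within each cell.
    cells = image[: h - h % gh, : w - w % gw]
    cells = cells.reshape(gh, h // gh, gw, w // gw).mean(axis=(1, 3))
    # Snap each cell to the nearest of `levels` brightness steps.
    return np.round(cells * (levels - 1)) / (levels - 1)

def task_ease(percent_correct, t_correct):
    """Task ease as defined in the Methods: (%correct - 25%) / tcorrect,
    where 25% is chance on a 4-alternative face-matching task and
    t_correct is the response time for correct trials."""
    return (percent_correct - 25.0) / t_correct
```

For example, a participant at 75% accuracy with a 10 s mean correct response time would score a task ease of 5.0 under this formula.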

Results: For the face recognition task, PF achieved significantly greater mean accuracy (56.86%) than CDZ (44.61%) and NF (34.09%; p=0.001). Further, participants recognized faces with significantly greater ease using PF (mean=8.34) than CDZ (mean=4.97) or NF (mean=2.87; p<0.0001).

Conclusions: Prosthetic Fixation incorporating face detection, localization, and tracking provided significantly greater face recognition accuracy than both the conventional approach and whole-scene viewing under simulated prosthetic vision. The proposed vision processing method robustly detects faces in natural scenes with background clutter. Prosthetic Fixation shows promise for providing automatic, stable images for face recognition in low-resolution vision-assistive devices.

Keywords: 688 retina  