ARVO Annual Meeting Abstract  |   June 2013
A bottom-up visual saliency-based image processing strategy for object recognition under simulated prosthetic vision
Author Affiliations & Notes
  • Xinyu Chai
    School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  • Yanyu Lu
    School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  • Weizhen Fu
    School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  • Yao Chen
    School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  • Chuanqing Zhou
    School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  • Jing Wang
    School of Biomedical Engineering, Shanghai Jiao Tong University, Shanghai, China
  • Footnotes
    Commercial Relationships: Xinyu Chai, None; Yanyu Lu, None; Weizhen Fu, None; Yao Chen, None; Chuanqing Zhou, None; Jing Wang, None
    Support: None
Investigative Ophthalmology & Visual Science June 2013, Vol. 54, 1033.
Abstract

Purpose: Object recognition is among the most important visual tasks for patients with visual prostheses. This study proposes an image processing strategy for visual prostheses, based on a bottom-up saliency-based visual attention model, to detect, extract, and enhance objects in daily scenes.

Methods: Eighteen subjects with normal or corrected-to-normal vision participated in this study. Seventy object images, taken in indoor scenes and subtending a 20° × 20° visual angle, were chosen as experimental materials. The images were processed with a visual attention model (Itti et al., 1998) to produce salient points. These points were clustered into regions with the fuzzy c-means (FCM) algorithm, and the closest-to-center region was chosen as the region of interest (ROI). The object in the ROI was separated from the scene using the GrabCut image segmentation method. We adopted two foreground/background contrast-enhancing strategies to present images under simulated prosthetic vision, and compared them against a direct pixelization (DP) strategy: the foreground rendered with 8 gray levels and the background with 4 lower gray levels (8-4 separated pixelization, 8-4 SP), and the foreground rendered with 8 gray levels and the background reduced to extracted edge information (background edge extraction, BEE).
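As a concrete illustration, here is a minimal Python sketch of the saliency-to-segmentation pipeline described above. The abstract specifies the Itti et al. (1998) attention model; since no implementation is named, OpenCV's spectral-residual saliency detector (from opencv-contrib) is substituted as a stand-in, and skfuzzy supplies the FCM clustering. The number of salient points, the cluster count, and the GrabCut iteration count are illustrative assumptions, not the study's settings.

```python
import cv2
import numpy as np
import skfuzzy as fuzz

def extract_object_mask(img_bgr, n_clusters=3, n_points=200):
    """Saliency map -> salient points -> FCM clustering -> ROI -> GrabCut mask."""
    # Stand-in saliency stage (the study used the Itti et al. 1998 model).
    detector = cv2.saliency.StaticSaliencySpectralResidual_create()
    _, sal = detector.computeSaliency(img_bgr)

    # Coordinates of the n_points most salient pixels.
    idx = np.argsort(sal, axis=None)[-n_points:]
    ys, xs = np.unravel_index(idx, sal.shape)
    pts = np.vstack([xs, ys]).astype(np.float64)      # (2, N), as skfuzzy expects

    # Fuzzy c-means clustering of the salient points.
    centers, u, *_ = fuzz.cmeans(pts, c=n_clusters, m=2.0, error=1e-4, maxiter=200)

    # Pick the cluster whose center lies closest to the image center (the ROI).
    img_center = np.array([img_bgr.shape[1] / 2, img_bgr.shape[0] / 2])
    roi = np.argmin(np.linalg.norm(centers - img_center, axis=1))
    members = pts[:, np.argmax(u, axis=0) == roi]

    # The chosen cluster's bounding rectangle initializes GrabCut.
    x0, y0 = members.min(axis=1)
    x1, y1 = members.max(axis=1)
    rect = (int(x0), int(y0), int(max(x1 - x0, 1)), int(max(y1 - y0, 1)))

    mask = np.zeros(img_bgr.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)
    fgd = np.zeros((1, 65), np.float64)
    cv2.grabCut(img_bgr, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)
    return np.isin(mask, (cv2.GC_FGD, cv2.GC_PR_FGD)).astype(np.uint8)
```

The two presentation strategies can then be sketched as quantization of a downsampled phosphene grid; the grid size, the placement of the gray levels, and the Canny edge thresholds below are arbitrary choices for illustration, not parameters reported in the abstract.

```python
def render_8_4_sp(img_gray, fg_mask, grid=32):
    """8-4 SP: foreground at 8 gray levels, background at 4 lower gray levels."""
    small = cv2.resize(img_gray, (grid, grid), interpolation=cv2.INTER_AREA)
    fg_small = cv2.resize(fg_mask, (grid, grid), interpolation=cv2.INTER_NEAREST)
    fg = (small // 32) * 32 + 16          # 8 levels spanning the full range
    bg = (small // 64) * 32 + 16          # 4 levels confined to the darker half
    return np.where(fg_small > 0, fg, bg).astype(np.uint8)

def render_bee(img_gray, fg_mask, grid=32):
    """BEE: foreground at 8 gray levels, background reduced to Canny edges."""
    edges = cv2.Canny(img_gray, 50, 150)
    small = cv2.resize(img_gray, (grid, grid), interpolation=cv2.INTER_AREA)
    fg_small = cv2.resize(fg_mask, (grid, grid), interpolation=cv2.INTER_NEAREST)
    edges_small = cv2.resize(edges, (grid, grid), interpolation=cv2.INTER_NEAREST)
    fg = (small // 32) * 32 + 16
    return np.where(fg_small > 0, fg, edges_small).astype(np.uint8)
```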

Results: Subjects achieved above-chance (1/70 = 1.43%) recognition accuracy (26.41 ± 8.83%) under the DP condition. The two foreground/background separation strategies, 8-4 SP (41.73 ± 7.29%) and BEE (44.44 ± 7.70%), significantly increased object recognition accuracy compared with DP (P < 0.05); however, there was no significant difference between the two strategies (P > 0.05). The 70 objects were classified into three categories (perfect, good, and bad) according to the Jaccard coefficient (JC), which evaluated the effectiveness of object extraction from the background. As JC (segmentation quality) increased, object recognition accuracy significantly increased under the 8-4 SP and BEE strategies (P < 0.05), while there was no significant difference among the categories under the DP strategy.
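The Jaccard coefficient used to grade segmentation quality is the intersection-over-union of the extracted object mask and a reference object mask (the abstract implies a ground-truth mask but does not describe it). A minimal sketch, assuming both are binary NumPy arrays of equal shape:

```python
import numpy as np

def jaccard(seg_mask, gt_mask):
    """JC = |A intersect B| / |A union B| for two binary masks."""
    inter = np.logical_and(seg_mask, gt_mask).sum()
    union = np.logical_or(seg_mask, gt_mask).sum()
    return float(inter) / union if union else 0.0
```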

Conclusions: The results showed that foreground/background contrast-enhancing strategies based on a saliency-based visual attention model can significantly improve the recognition accuracy of objects in daily scenes. We hope that our study of image processing strategies will aid the future design and development of visual prostheses to restore functional vision to the blind.

Keywords: image processing • perception • scene perception