Xinyu Chai, Yanyu Lu, Weizhen Fu, Yao Chen, Chuanqing Zhou, Jing Wang; A bottom-up visual saliency-based image processing strategy for object recognition under simulated prosthetic vision. Invest. Ophthalmol. Vis. Sci. 2013;54(15):1033.
Object recognition is among the most important visual tasks for patients with visual prostheses. This study proposes an image processing strategy for visual prostheses, based on a bottom-up saliency-based visual attention model, to detect, extract, and enhance the object in a daily scene.
Eighteen subjects with normal or corrected-to-normal vision participated in this study. Seventy object images, taken in indoor scenes and subtending a 20° × 20° visual angle, were chosen as experimental materials. The images were processed with a visual attention model (Itti et al., 1998) to produce salient points. These points were clustered into regions with the fuzzy c-means (FCM) algorithm, and the closest-to-center region was chosen as the region of interest (ROI). The object in the ROI was separated from the scene using the GrabCut image segmentation method. We compared two foreground/background contrast-enhancing presentation strategies against a direct pixelization strategy (DP) under simulated prosthetic vision: 8-4 separated pixelization (8-4 SP), which renders the foreground with 8 gray levels and the background with 4 lower gray levels, and background edge extraction (BEE), which renders the foreground with 8 gray levels and the background with extracted edge information.
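The rendering step of the 8-4 SP strategy could be sketched as below. This is a minimal illustration, not the paper's implementation: the function name, the even spacing of the 8 foreground levels, and the confinement of the 4 background levels to the darker half of the gray range are our assumptions.

```python
import numpy as np

def separated_pixelization(gray, fg_mask, fg_levels=8, bg_levels=4):
    """Hypothetical sketch of 8-4 separated pixelization (8-4 SP).

    Foreground pixels (fg_mask True, i.e., the GrabCut-extracted object)
    are quantized to fg_levels gray levels spanning the full 0-255 range;
    background pixels are quantized to bg_levels lower gray levels
    (here assumed to lie in 0-127), enhancing foreground/background contrast.
    """
    out = np.empty_like(gray)
    # Foreground: 8 evenly spaced levels across 0..255 (step = 32).
    fg_step = 256 // fg_levels
    out[fg_mask] = (gray[fg_mask] // fg_step) * fg_step
    # Background: 4 levels confined to the darker half, 0..127 (step = 32).
    bg_step = 128 // bg_levels
    out[~fg_mask] = (np.minimum(gray[~fg_mask], 127) // bg_step) * bg_step
    return out
```

A foreground pixel of intensity 200 maps to level 192, while the same intensity in the background is clipped into the dark range and maps to 96, which is the intended contrast boost.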
Subjects achieved above-chance (1/70 = 1.43%) recognition accuracy (26.41 ± 8.83%) under the DP condition. The two foreground/background separated strategies, 8-4 SP (41.73 ± 7.29%) and BEE (44.44 ± 7.70%), significantly increased object recognition accuracy compared with DP (P < 0.05); however, there was no significant difference between the two strategies (P > 0.05). The 70 objects were classified into three categories (perfect, good, and bad) according to the Jaccard coefficient (JC), which evaluated the effectiveness of object extraction from the background. As JC (segmentation quality) increased, object recognition accuracy significantly increased under the 8-4 SP and BEE strategies (P < 0.05), while there was no significant difference among the categories under the DP strategy.
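The Jaccard coefficient used above to grade segmentation quality is the ratio of the intersection to the union of the extracted mask and the ground-truth object mask. A minimal sketch (the function name and the empty-mask convention are our assumptions; the paper's category thresholds are not specified here):

```python
import numpy as np

def jaccard_coefficient(seg, gt):
    """Jaccard coefficient between a binary segmentation mask and a
    ground-truth object mask: |seg AND gt| / |seg OR gt|.
    Values near 1 indicate the extracted region closely matches the object."""
    seg = np.asarray(seg).astype(bool)
    gt = np.asarray(gt).astype(bool)
    union = np.logical_or(seg, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(seg, gt).sum() / union
```

For example, a mask that covers the object plus an equal-sized spurious region scores 0.5, while a perfect extraction scores 1.0.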
The results showed that foreground/background contrast-enhancing strategies based on a bottom-up saliency-based visual attention model can significantly improve recognition accuracy for objects in daily scenes. We hope our study of image processing strategies will inform the future design and development of visual prostheses that restore functional vision to the blind.