Xinyu Chai, Yajie Zeng, Heng Li, Yao Chen, Chuanqing Zhou; Perceived visual field expanding with content-aware image retargeting for prosthetic vision. Invest. Ophthalmol. Vis. Sci. 2017;58(8):4206.
Although great progress has been made in the development of retinal prostheses, a small visual field and low resolution remain major obstacles to achieving satisfying prosthetic vision. We therefore focus on expanding the small visual field with image retargeting methods, aiming to fit a wider scene into the prosthetic visual field while preserving as much important information as possible.
We built on the content-aware seam-assisted shrinkability (SAS) method (Al-Atabany et al., 2010) and improved it by weighting the image importance with a color-based saliency map. We compared our method with cropping, scaling, and SAS under simulated prosthetic vision (SPV) in object detection and recognition experiments. Twenty-eight subjects with normal or corrected-to-normal vision participated in the study. Eighty-five images (48 indoor and 37 outdoor) spanning a 40° visual angle were compressed to 20° by the four strategies. After pixelization, this yielded four image groups: cropping pixelization (C-P), scaling pixelization (S-P), SAS pixelization (SAS-P), and our method's pixelization (OURS-P).
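The core retargeting idea can be illustrated with a minimal seam-carving sketch: gradient energy is weighted by a saliency map so that seams (low-energy pixel paths) are removed from unimportant regions first. This is a simplified sketch, not the authors' implementation; the grayscale input, the linear saliency weighting with parameter `alpha`, and all function names are illustrative assumptions.

```python
import numpy as np

def energy_map(gray, saliency, alpha=0.5):
    """Gradient-magnitude energy weighted by a saliency map.

    gray, saliency: HxW float arrays; alpha is an assumed weighting factor.
    """
    gx = np.abs(np.gradient(gray, axis=1))
    gy = np.abs(np.gradient(gray, axis=0))
    return (gx + gy) * (1.0 + alpha * saliency)

def find_vertical_seam(energy):
    """Dynamic programming: minimum-energy 8-connected path, top to bottom."""
    h, w = energy.shape
    cost = energy.copy()
    for i in range(1, h):
        for j in range(w):
            lo, hi = max(j - 1, 0), min(j + 2, w)
            cost[i, j] += cost[i - 1, lo:hi].min()
    # Backtrack from the cheapest bottom-row pixel.
    seam = np.empty(h, dtype=int)
    seam[-1] = int(np.argmin(cost[-1]))
    for i in range(h - 2, -1, -1):
        j = seam[i + 1]
        lo, hi = max(j - 1, 0), min(j + 2, w)
        seam[i] = lo + int(np.argmin(cost[i, lo:hi]))
    return seam

def remove_seam(img, seam):
    """Delete one pixel per row (grayscale HxW image)."""
    h, w = img.shape
    mask = np.ones((h, w), dtype=bool)
    mask[np.arange(h), seam] = False
    return img[mask].reshape(h, w - 1)

def retarget_width(gray, saliency, target_w):
    """Remove vertical seams until the image reaches target_w columns."""
    while gray.shape[1] > target_w:
        seam = find_vertical_seam(energy_map(gray, saliency))
        gray = remove_seam(gray, seam)
        saliency = remove_seam(saliency, seam)
    return gray
```

Because high-saliency pixels inflate the energy term, seams preferentially pass through low-saliency regions, which is how important content is retained when a 40° scene is compressed into a 20° field.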
Regarding the method improvement, our method preserved more of an image's important regions than SAS across different compression ratios on the MSRA100 database. Regarding the experimental results, for indoor images the percentage of detected objects was 45.47% ± 1.31% for C-P, 92.71% ± 1.11% for S-P, 97.48% ± 0.64% for SAS-P, and 98.78% ± 0.42% for OURS-P; the latter three strategies differed significantly from C-P, and SAS-P and OURS-P both differed significantly from S-P. Recognition accuracy was 23.61% ± 1.26% for C-P and 35.30% ± 3.47% for S-P, while SAS-P (54.08% ± 2.63%) and OURS-P (64.11% ± 2.28%) showed a significant increase (P &lt; 0.001) over C-P and S-P. The average recognition time was 7.69 ± 0.83 s for C-P, 10.62 ± 1.22 s for S-P, 9.52 ± 0.53 s for SAS-P, and 9.94 ± 1.21 s for OURS-P. Results were similar for outdoor images.
We demonstrated that the color-based saliency map helped preserve more important information. The experimental results indicated that our method conveyed more useful visual information while maintaining recognition performance under low-resolution SPV. This image processing strategy may therefore benefit prosthesis recipients in the future.
This is an abstract that was submitted for the 2017 ARVO Annual Meeting, held in Baltimore, MD, May 7-11, 2017.