Noelle Stiles, Ben McIntosh, Armand Tanguay, Mark Humayun; Retinal Prostheses: Functional Use of Monocular Depth Perception in the Low Resolution Limit. Invest. Ophthalmol. Vis. Sci. 2013;54(15):1042.
Monocular depth perception (using cues such as perspective and relative size) persists to very low resolution in both static and dynamic images (at video frame rates), with potential implications for limited-resolution retinal prostheses, which are currently implanted in only one eye. An image depth-rating task previously demonstrated a significant improvement in perceived depth when Gaussian filtering was applied to pixellated images that otherwise exhibit false depth cues (N=11). The depth-rating task also showed that depth could be perceived even at very low resolutions (e.g., 8 x 10 pixels or electrodes) when dynamic depth cues (such as motion parallax) were present. This study investigates the degree to which image representation and resolution affect performance on functional tasks. Furthermore, the role of foveation in improving the performance of such tasks at low resolution with a restricted field of view was studied. Foveation may improve the functionality of retinal prostheses that currently use a head-mounted camera without eye-tracking capability, and can be implemented either with an intraocular camera or with eye tracking and a wide field-of-view scene camera.
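The pixellation-plus-blur representation described above can be sketched as follows. This is an illustrative reconstruction in Python, assuming block-averaging pixellation and a post-pixellation Gaussian blur via `scipy.ndimage.gaussian_filter`; the grid size and blur width are hypothetical parameters, not the authors' reported settings.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def pixellate(img, rows=8, cols=10):
    """Simulate an electrode grid: block-average the image down to
    rows x cols, then upsample with flat blocks (nearest-neighbour)
    back to the original size. Assumes img dimensions divide evenly."""
    h, w = img.shape
    small = img.reshape(rows, h // rows, cols, w // cols).mean(axis=(1, 3))
    return np.kron(small, np.ones((h // rows, w // cols)))

def pixellate_and_blur(img, rows=8, cols=10, sigma=4.0):
    """Post-pixellation Gaussian blur: smooths the hard block edges
    that can act as false depth cues in a purely pixellated image.
    sigma (in output pixels) is an assumed parameter."""
    return gaussian_filter(pixellate(img, rows, cols), sigma=sigma)
```

Applying the blur after pixellation (rather than before) is the key point: it removes the spurious high-frequency edges introduced by the blocks themselves.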
A functional reaching task with a head-mounted display (HMD) was used to determine monocular depth perception capabilities at low resolution with different image representations. Images were acquired from a head-mounted wide-field-of-view (FOV) camera, processed to low resolution (pixellation with or without blur), and then displayed in real time within the HMD. In the eye-pointed mode, the subject's gaze directed which subregion of the wide-FOV image was displayed to the user; in the head-pointed mode, head position alone determined the displayed subregion.
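Operationally, the eye-pointed mode amounts to cropping a gaze-centred subregion from each wide-FOV frame before low-resolution processing. A minimal sketch, assuming pixel-coordinate gaze input and a fixed output window size (both hypothetical parameters):

```python
import numpy as np

def foveated_crop(frame, gaze_xy, out_h=240, out_w=320):
    """Extract the subregion of a wide-FOV frame centred on the current
    gaze point (eye-pointed mode), clamped so the crop stays inside the
    frame. Head-pointed mode would instead use the frame centre, since
    head orientation alone selects the view."""
    h, w = frame.shape[:2]
    cx, cy = gaze_xy  # gaze point in frame pixel coordinates
    x0 = int(np.clip(cx - out_w // 2, 0, w - out_w))
    y0 = int(np.clip(cy - out_h // 2, 0, h - out_h))
    return frame[y0:y0 + out_h, x0:x0 + out_w]
```

In a real system the gaze coordinates would come from an eye tracker (or the crop would be implicit in an intraocular camera's optics); here they are simply passed in.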
Subjects (N=3) reached for and grasped a bottle while avoiding obstacles faster with pixellated and blurred images in the eye-pointed mode than with pixellated images in either mode. With 32 x 32 pixels, Gaussian post-pixellation blur, and eye-pointed presentation, the task took 5.3 seconds on average, compared with 2.4 seconds on average with normal vision.
Results show that appropriate presentation of images, as well as the implementation of foveation in retinal prostheses, improves the efficiency of depth task performance. The functional task also affirms that subjects with a retinal prosthesis may be able to perceive depth monocularly.