Ranran L French, Gregory C DeAngelis; Scene-relative object motion biases depth percepts based on motion parallax. Invest. Ophthalmol. Vis. Sci. 2020;61(7):1721.
Motion parallax (MP) – the relative image motion between stationary objects at different distances due to observer translation – is a potent monocular depth cue. However, if an object moves relative to the scene, computing depth from MP becomes more complicated, because the object's retinal image motion then contains an additional component due to its own movement. Previous work on depth perception from MP has assumed that objects are stationary in the world; how the brain perceives the depth of moving objects based on motion parallax has not been examined.
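For a stationary point, one standard simplification relates relative depth to the ratio of retinal image velocity to pursuit eye velocity (the motion/pursuit ratio). The sketch below illustrates that relation with hypothetical values; it is not necessarily the computation assumed in this study, and the sign convention is arbitrary.

```python
# Minimal sketch of the motion/pursuit ratio for depth from motion
# parallax. Values are hypothetical illustrations, not data from the
# experiment described above.
def relative_depth(retinal_velocity, pursuit_velocity):
    """Estimate relative depth (d/f) as retinal velocity / pursuit velocity."""
    return retinal_velocity / pursuit_velocity

# Image motion in the same direction as the pursuit signals one depth
# sign; motion in the opposite direction signals the other.
near = relative_depth(0.5, 2.0)   # same sign as pursuit  -> 0.25
far = relative_depth(-0.5, 2.0)   # opposite sign         -> -0.25
```

The key point for what follows is that any extra retinal motion not caused by self-motion feeds directly into this ratio and therefore into the depth estimate.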
Human subjects viewed a virtual 3D scene consisting of a ground plane and stationary background objects, while lateral self-motion was simulated by optic flow. A target object, lying above the ground plane, could be either stationary or moving laterally at different velocities. Subjects were asked to judge the depth of the target object relative to the plane of fixation.
To correctly compute the depth of a moving object from MP, the brain needs to accurately parse retinal image motion into components related to self-motion and object motion. However, previous literature has shown that flow parsing is often incomplete. Therefore, we hypothesized that depth estimates for moving objects would be biased, with the bias depending on object velocity. Consistent with this hypothesis, subjects showed systematic biases in perceived depth, and the biases were larger during monocular presentation of the target object.
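The predicted effect of incomplete flow parsing can be sketched as follows: if the brain subtracts only a fraction (a parsing gain below 1) of the object-motion component from the retinal motion before computing the motion/pursuit ratio, the residual object motion biases the depth estimate. This is a hypothetical toy model with made-up parameter values, not the analysis used in the study.

```python
# Toy model of how incomplete flow parsing could bias depth from MP.
# All parameter values are hypothetical.
def depth_estimate(parallax_vel, object_vel, parsing_gain, pursuit_vel):
    # Retinal motion is the sum of the parallax component (due to
    # self-motion and the object's depth) and the object's own motion.
    retinal_vel = parallax_vel + object_vel
    # Flow parsing removes only a fraction of the object-motion component.
    corrected_vel = retinal_vel - parsing_gain * object_vel
    # Motion/pursuit ratio serves as the depth signal.
    return corrected_vel / pursuit_vel

unbiased = depth_estimate(1.0, 0.0, 0.8, 2.0)  # stationary object -> 0.5
biased = depth_estimate(1.0, 0.5, 0.8, 2.0)    # moving object     -> 0.55

# The bias equals (1 - gain) * object_vel / pursuit_vel, so it grows
# with object velocity and reverses sign with object direction.
```

Under this toy model, a parsing gain of 1 (complete flow parsing) would eliminate the bias entirely, which is one way to frame the hypothesis tested here.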
Our findings show that human subjects are not able to simply ignore the component of retinal image motion related to object motion when making judgements of depth. As a result, the perceived depth based on MP is strongly biased by object velocity. Critically, the observed biases are not explained by existing theories for computation of depth based on MP.
This is a 2020 ARVO Annual Meeting abstract.
Visual stimuli used in the experiment. Lower right: 3D layout of the scene. Upper left: a sequence of actual frames of the display.
Behavioral data from subject 204. Depth psychometric functions are color coded according to whether self-motion and object motion have the same direction (pink) or opposite directions (cyan). The left and right panels show data for monocular and binocular viewing respectively (note that the ranges of the x-axis are different). Depth percepts are clearly biased in opposite directions by object motion, relative to the condition in which there is no object motion (blue).