The ability to move the eyes independently of each other in different directions generally is restricted to specialized lateral-eyed animal species, for example chameleons. In contrast, humans with frontally placed eyes are considered to have a tight coupling between the two eyes, although there are exceptions.1 Normally, humans coordinate their eye movements in such a way that each eye is aimed at the same point at a given distance in visual space.2 Association of visual inputs derived from corresponding retinal locations provides the brain with a unified binocular image of the visual world.3 From the retinal images of the two eyes a single binocular representation of the visual world is constructed on the basis of binocular retinal correspondence.4 To achieve this, the brain must use the visual information from each eye to stabilize the two retinal images relative to each other. Imagine holding two laser pointers, one in each hand, and having to project the two beams precisely on top of each other on a wall.
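The geometry behind this analogy can be made concrete with a small numerical sketch (illustrative only; the interocular distance, wall distance, and angular error below are hypothetical and are not parameters of the present study):

```python
import math

# Illustrative sketch of the laser-pointer analogy: each eye's line of sight
# is projected onto a wall, and the separation between the two projection
# points is what binocular coordination must keep at (near) zero.
# All values are hypothetical, chosen only to make the geometry concrete.

def projection_on_wall(eye_x_m: float, gaze_angle_deg: float, wall_distance_m: float) -> float:
    """Horizontal position (m) where a line of sight from an eye hits the wall."""
    return eye_x_m + wall_distance_m * math.tan(math.radians(gaze_angle_deg))

interocular = 0.065                       # assumed interocular distance (m)
wall = 1.5                                # assumed wall distance (m)

# To hit the same point straight ahead, each eye must converge slightly.
required = math.degrees(math.atan((interocular / 2) / wall))

left_spot = projection_on_wall(-interocular / 2, +required, wall)
right_spot = projection_on_wall(+interocular / 2, -required, wall)
print(f"aligned: separation = {abs(left_spot - right_spot) * 1000:.2f} mm")

# A 0.5 deg error in one eye already separates the two "beams" by over a centimetre.
right_spot_err = projection_on_wall(+interocular / 2, -required + 0.5, wall)
print(f"0.5 deg error: separation = {abs(left_spot - right_spot_err) * 1000:.2f} mm")
```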
Eye movements contribute to binocular vision by using visual feedback from the image motion of each eye and from binocular correspondence. It generally is believed that the problem of binocular correspondence is solved in the primary visual cortex (V1).5,6 Neurons in V1 process monocular and binocular visual information.7 At subsequent levels of processing, eye-of-origin information is lost, and the perception of the binocular world is invariant to eye movements and self-motion.8–11
An important question is to what extent binocular and/or monocular visual information is used. Binocular vision relies heavily on disparity, which works only within the limits of fusion.12,13 Neurons sensitive to binocular disparity were described first in V1 of the cat,14,15 and later were found in many other visual cortical areas, V1 to V5 (area MT), and in area MST.6,10,11,16–19 Although most is known about the neural substrate of horizontal disparities, there also is evidence for neurons sensitive to vertical disparity in the visual cortex.7,20
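For orientation, the following sketch illustrates the standard small-angle approximation of horizontal disparity and compares it with a nominal foveal fusion limit; the interocular distance and the fusion limit used here are order-of-magnitude assumptions, not values from the cited studies:

```python
import math

# Illustrative geometry sketch (not from the cited studies): horizontal
# disparity of a point at distance d, relative to a fixation distance d_f,
# using the small-angle approximation  delta ≈ I * (1/d - 1/d_f),
# where I is the interocular distance. Distances are in metres.

def disparity_arcmin(d: float, d_f: float, interocular: float = 0.065) -> float:
    """Approximate horizontal disparity (arcmin) of a point at distance d."""
    delta_rad = interocular * (1.0 / d - 1.0 / d_f)
    return math.degrees(delta_rad) * 60.0

fusion_limit_arcmin = 15.0   # nominal foveal fusion limit, order of magnitude only
for d in (0.95, 1.0, 1.05, 1.5):
    delta = disparity_arcmin(d, d_f=1.0)
    fusable = abs(delta) <= fusion_limit_arcmin
    print(f"d = {d:4.2f} m  disparity = {delta:7.2f} arcmin  within fusion limit: {fusable}")
```

The loop shows that even modest departures from the fixation distance produce disparities well outside a foveal fusion limit, which is the sense in which disparity "works only within the limits of fusion."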
The variation in sensitivity of cortical areas to specific stimulus attributes also suggests a hierarchical structure for motion processing. First-order motion energy detectors in striate areas form the basis for initial ocular following responses.21 In area MT, cortical neurons are tuned not only to binocular disparity, but also to orientation, motion direction, and speed.16,22 Perception of depth and of motion in depth occurs outside area V1.5,9,10,23,24
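As an illustration of the computation referred to above, the sketch below implements a first-order motion energy detector in the spirit of the classic motion energy model, using space-time oriented quadrature filter pairs; the filter parameters are arbitrary and the code is not taken from the cited work:

```python
import numpy as np

# Illustrative sketch of a first-order motion energy detector: quadrature
# pairs of space-time oriented filters whose squared, summed outputs give a
# direction-selective "motion energy" response. Parameters are arbitrary.

nx, nt = 64, 64                       # space and time samples
x = np.linspace(-1, 1, nx)            # space (arbitrary units)
t = np.linspace(0, 1, nt)             # time (arbitrary units)
X, T = np.meshgrid(x, t, indexing="ij")

sf, tf = 4.0, 6.0                     # spatial / temporal frequency of the filters
env = np.exp(-(X**2) / 0.2 - ((T - 0.5)**2) / 0.1)   # space-time envelope

# Quadrature pair tuned to rightward motion, and its leftward counterpart.
right_even = env * np.cos(2 * np.pi * (sf * X - tf * T))
right_odd  = env * np.sin(2 * np.pi * (sf * X - tf * T))
left_even  = env * np.cos(2 * np.pi * (sf * X + tf * T))
left_odd   = env * np.sin(2 * np.pi * (sf * X + tf * T))

def motion_energy(stimulus: np.ndarray) -> float:
    """Opponent motion energy: positive for rightward, negative for leftward motion."""
    e_right = np.sum(stimulus * right_even)**2 + np.sum(stimulus * right_odd)**2
    e_left  = np.sum(stimulus * left_even)**2  + np.sum(stimulus * left_odd)**2
    return e_right - e_left

# A drifting grating excites the filter pair matched to its direction.
rightward_grating = np.cos(2 * np.pi * (sf * X - tf * T))
leftward_grating  = np.cos(2 * np.pi * (sf * X + tf * T))
print("rightward stimulus ->", motion_energy(rightward_grating) > 0)   # True
print("leftward  stimulus ->", motion_energy(leftward_grating) < 0)    # True
```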
Although it has been suggested that V1 is responsible for generating the input signals for the control of vergence during binocular vision,6,25 and there is evidence for disparity energy sensing,26 it is unknown how visual disparity signals from V1 are connected to the oculomotor command centers in the brainstem.
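The disparity energy computation referred to above can be illustrated with a minimal sketch of the binocular energy model, in which a complex cell sums the squared outputs of quadrature binocular simple cells whose right-eye receptive field is position-shifted by the preferred disparity; the receptive field parameters and the stimulus below are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

# Illustrative sketch of the binocular "disparity energy" model: a complex
# cell sums the squared outputs of quadrature binocular simple cells whose
# right-eye receptive field is shifted by the preferred disparity.

n = 256
x = np.arange(n)

def gabor(center: float, sigma: float = 8.0, freq: float = 0.1, phase: float = 0.0):
    """One-dimensional Gabor receptive field profile (illustrative parameters)."""
    return np.exp(-((x - center)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * (x - center) + phase)

def disparity_energy(left_img, right_img, preferred_disparity: int, center: float = 128.0):
    """Response of a complex cell tuned to the given horizontal disparity (pixels)."""
    le, lo = gabor(center), gabor(center, phase=np.pi / 2)            # left-eye RFs
    re = gabor(center + preferred_disparity)                          # right-eye RFs,
    ro = gabor(center + preferred_disparity, phase=np.pi / 2)         # position-shifted
    s1 = left_img @ le + right_img @ re        # binocular simple cell (even phase)
    s2 = left_img @ lo + right_img @ ro        # binocular simple cell (odd phase)
    return s1**2 + s2**2                       # quadrature pair, squared and summed

# A random pattern seen by the left eye, shifted by 4 pixels in the right eye.
rng = np.random.default_rng(0)
left = rng.standard_normal(n)
right = np.roll(left, 4)                       # stimulus disparity of +4 pixels

responses = {d: disparity_energy(left, right, d) for d in range(-8, 9, 2)}
best = max(responses, key=responses.get)
print("preferred disparity of the most active cell:", best)   # typically 4
```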
Also at the brainstem level, the monocular or binocular organization of oculomotor signals still is controversial (for a review, see the report of King and Zhou27). On one hand, there is strong support for conjugate control using separate version and vergence centers, such as the mesencephalic reticular formation.28 On the other hand, there also are examples of a more independent control.1
Several lines of evidence suggest that, at the premotor level, abducens burst neurons can be divided into left- and right-eye bursters, and thus have a monocular component.29–32
In humans, there is behavioral evidence for asymmetrical vergence.8,33 Recently, a dual visual-local feedback model of the vergence movement system was proposed that can account for binocularly as well as monocularly driven vergence responses.34
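To make the notion of feedback-driven vergence concrete, the following sketch simulates a generic delayed negative-feedback controller for vergence; it is deliberately simplified and is not the dual visual-local feedback model proposed in the cited work, and all parameters are hypothetical:

```python
# Minimal illustrative sketch of disparity-driven vergence as a feedback loop.
# This is a generic first-order controller with a visual delay, NOT the dual
# visual-local feedback model of the cited work. The disparity error (target
# vergence minus the delayed vergence state) is integrated with a fixed gain.

def simulate_vergence(target_deg: float, gain: float = 0.1, steps: int = 60,
                      delay_steps: int = 5) -> list[float]:
    """Discrete-time vergence response to a step change in required vergence."""
    vergence = [0.0]
    for k in range(1, steps):
        # Visual feedback is delayed; before the delay elapses there is no error signal.
        seen = vergence[k - delay_steps] if k >= delay_steps else None
        error = (target_deg - seen) if seen is not None else 0.0
        vergence.append(vergence[-1] + gain * error)
    return vergence

trace = simulate_vergence(target_deg=4.0)
print(f"vergence after 60 steps: {trace[-1]:.2f} deg (target 4.0)")
```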
To investigate to what extent humans have independent binocular control, and what conditions are required for this behavior, we used a two-dimensional dichoptic visual stimulation paradigm. With this paradigm we demonstrated that, to sustain binocular vision, humans can generate slow-phase eye movements with independent motion directions in the two eyes, and that the perceived direction of binocular motion can be dissociated from the control of the eye movements.
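The logic of such a dichoptic condition can be sketched as follows: each eye is presented a pattern drifting in a different direction, and the slow-phase velocities required if each eye tracked its own image separate into conjugate (version) and disconjugate (vergence) components. The directions and speed below are hypothetical and are not the stimulus parameters of the study:

```python
import math

# Illustrative sketch of a two-dimensional dichoptic stimulation condition:
# each eye is shown a pattern drifting in a different direction. If each eye
# tracked its own image, the required slow-phase velocities would have the
# conjugate (version) and disconjugate (vergence) components computed below.

def drift_vector(direction_deg: float, speed_deg_s: float) -> tuple[float, float]:
    """Horizontal/vertical velocity components of a drifting pattern (deg/s)."""
    rad = math.radians(direction_deg)
    return (speed_deg_s * math.cos(rad), speed_deg_s * math.sin(rad))

left_stim = drift_vector(direction_deg=135.0, speed_deg_s=2.0)    # up-left for the left eye
right_stim = drift_vector(direction_deg=45.0, speed_deg_s=2.0)    # up-right for the right eye

version = tuple((l + r) / 2 for l, r in zip(left_stim, right_stim))   # conjugate component
vergence = tuple(l - r for l, r in zip(left_stim, right_stim))        # disconjugate component

print(f"version velocity  (x, y): ({version[0]:.2f}, {version[1]:.2f}) deg/s")
print(f"vergence velocity (x, y): ({vergence[0]:.2f}, {vergence[1]:.2f}) deg/s")
```

With these example directions the conjugate component is purely vertical while the disconjugate component is purely horizontal, which illustrates how tracking dichoptic motion can require slow-phase movements with different directions in the two eyes.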