March 2013, Volume 54, Issue 3
Eye Movements, Strabismus, Amblyopia and Neuro-ophthalmology
Version–Vergence Interactions during Memory-Guided Binocular Gaze Shifts
Author Notes
  • From the Department of Neuroscience, Erasmus University Medical Center, Rotterdam, The Netherlands. 
  • Corresponding author: Johannes van der Steen, Department of Neuroscience, Room Ee1453, Erasmus University Medical Center, PO Box 2040, NL-3000 CA Rotterdam, The Netherlands; j.vandersteen@erasmusmc.nl
Investigative Ophthalmology & Visual Science March 2013, Vol.54, 1656-1664. doi:10.1167/iovs.12-10680
Joyce Dits, Johan J. M. Pel, Angelique Remmers, Johannes van der Steen; Version–Vergence Interactions during Memory-Guided Binocular Gaze Shifts. Invest. Ophthalmol. Vis. Sci. 2013;54(3):1656-1664. doi: 10.1167/iovs.12-10680.
Abstract

Purpose.: Visual orientation toward remembered or visible targets requires binocular gaze shifts that are accurate in direction (version) and ocular distance (vergence). We determined the accuracy of combined version and vergence movements and the contribution of the abducting and adducting eye during gaze shifts toward memorized and visual targets in three-dimensional space.

Methods.: Subjects fixated either a “far” (94 cm) or “near” (31 cm) fixation light-emitting diode (LED) placed in front of the left eye. Next, in the memory-guided experiment, a target LED was lit for 80 ms (13 cm to the left or right and at 45 cm viewing distance). Subjects were instructed to make a saccade to the (remembered) target LED location. In the visually guided experiment, the target LED remained illuminated during the task. In both conditions, gaze shifts consisted of version and vergence movements.

Results.: Visually guided gaze shifts had both a fast intrasaccadic and a slow postsaccadic vergence component and were most accurate. During memory-guided gaze shifts, the abducting eye was more accurate than the adducting eye. Distance correction was achieved by slow postsaccadic vergence of the adducting eye. Memory-guided gaze shifts that required convergence lacked an intrasaccadic vergence component and were less accurate than memory-guided gaze shifts that required divergence.

Conclusions.: Visually guided binocular gaze shifts are faster and more accurate than memory-guided binocular gaze shifts. During memory-guided gaze shifts, the abducting eye has a leading role, and an intrasaccadic vergence enhancement during convergence is reduced.

Introduction
In our daily environment, visual targets are located in different directions and at different viewing distances relative to our body. To binocularly view an object of interest, two different types of eye movements are used: saccades and vergence. Saccades are the rapid conjugate component of eye movements used to orient the eyes toward a new direction by rotating the two eyes with similar angles. Vergence eye movements are used to bring the focal point of binocular gaze to a different viewing point in depth by disjunctively rotating the eyes. 
Saccades and vergence eye movements have different dynamic properties, and they are assumed to be controlled by independent subsystems. 1,2 Lesion and physiological data have shown that different neuronal populations exist for conjugate gaze and vergence. Motor commands for conjugate gaze shifts are assembled in the paramedian pontine reticular formation (PPRF), 3–5 whereas vergence command signals are generated in the midbrain. 6,7 Despite functional and anatomical differences between the two subsystems, there is also a close interaction. In natural gaze shifts, saccadic and vergence movements are seldom made in isolation, but combinations are made continuously, altering the dynamic properties of the eye movements. Saccades are slowed down when they are combined with vergence, while vergence movements during the intrasaccadic period become faster in combination with saccades. 8,9 There are different explanations for saccade–vergence interactions. One mechanism is an interaction between conjugate saccade command centers and vergence burst neurons, 10 whereas others suggest that combined saccade–vergence eye movements are generated via monocular saccadic burst neurons. 11–13 
Binocular disparity, which refers to the difference in image location of an object seen by the left and right eye resulting from the eyes' horizontal separation, is one of the most effective stimuli for vergence movements. 14,15 For this reason, visual information about the target location in three-dimensional (3-D) space was continuously available in most experimental studies. Most modeling studies also assume that vergence is controlled by continuous disparity feedback. 16 Visual feedback is also implicitly assumed in more recent models that have embedded both attentive (target vergence) and inattentive mechanisms (global disparity) driving version–vergence voluntary gaze shifts. 17 However, saccade–vergence interactions may be different when binocular gaze shifts toward remembered visual targets are made. In this situation, eye movement command signals lack visual feedback information on where a target is located in 3-D space. Both direction and distance information to drive version and vergence eye movements have to be retrieved entirely from memory. The characteristics of memory-guided saccades without vergence have been studied extensively. Memory-guided saccades are less accurate and have lower peak velocities and longer duration than visually guided saccades. 18–20 However, little is known about the dynamics of memory-guided eye movements with a combination of saccades and vergence. Kumar et al. 21 found differences in version/vergence peak velocity ratios between memory-guided and visually guided gaze shifts. The memory-guided eye movements showed a larger slowing of the convergence components than of the corresponding saccadic components, resulting in a smaller saccade–vergence peak velocity ratio. Taken together, the findings suggest that different interactions exist between version and vergence under visually and memory-guided conditions. 
In most experiments on spatial memory (updating) during binocular vision, monocular movements of each eye have been combined and results have been described in terms of vergence (left eye − right eye) and conjugate gaze or version ([left eye + right eye]/2) components. 22,23 The aim of the present study was to determine the accuracy and contribution of the abducting and adducting eye during binocular gaze shifts toward memorized and visible targets presented in three-dimensional space. To our knowledge this is the first study to quantify the contribution of vergence and of each eye separately during combined saccade–vergence gaze shifts from memory. 
Methods
Subjects
Eye movements were recorded from eight subjects (five male, three female, age 40 ± 15 years) who were naïve with respect to the goal of the experiment. Three of the eight subjects also participated in the control trials. Written informed consent was obtained from all subjects prior to the experiments. None of the subjects had a history of ocular pathology or impaired stereoscopic vision, and all subjects had normal or corrected-to-normal vision. The mean interocular distance of the subjects was 6.5 ± 0.2 cm. Phoria was measured before the experiment with the Maddox rod test at 6 and 1 m distances. As an additional measure, we determined in three subjects the change in vergence over a period of 4 seconds after the fixation target was switched off. Eye dominance was determined with a variant of the hole-in-the-card test, that is, looking through a hole made with the hands. An individually molded bite board and a vacuum cushion that was placed around the back of the subject's neck were used to restrain the head. Experimental procedures were approved by the Medical Ethical Committee of Erasmus Medical Center and adhered to the Declaration of Helsinki for research involving human subjects. 
Experimental Setup
Eye Movement Recordings.
Eye positions were recorded with scleral search coils. We used a standard 25 kHz two-field coil system (model EMP3020; Skalar Medical, Delft, The Netherlands) based on the amplitude detection method published by Robinson. 24 Standard dual coils embedded in a silicone annulus (Skalar Medical) were inserted in each eye. Before insertion, the eyes were anesthetized with a few drops of oxybuprocaine (0.4%) in HCl (pH 4.0). Both coils were calibrated in vitro by mounting them on a gimbal system placed in the center of the magnetic fields. Search coil data were sampled at 1000 Hz on a CED system (Cambridge Electronic Design, Cambridge, UK). At the beginning of each experiment, an in vivo calibration was performed. Subjects were instructed to fixate a series of five targets (central target and a target at 10° left, right, up, and down). Targets were back projected onto a translucent screen at 186 cm distance. The subjects fixated each target for 10 seconds. Data were stored on a hard disk for further offline analysis. 
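The in vivo calibration step amounts to fitting a voltage-to-angle mapping from the five fixation targets. A minimal sketch, assuming a linear coil response; the voltages below are hypothetical (the paper does not report raw values), and a real session would use the averaged coil signal during each 10-second fixation:

```python
import numpy as np

# Known horizontal target angles (deg) for the five-point pattern
# (center, 10 deg left, 10 deg right, up, down) and hypothetical
# measured horizontal-channel voltages (V) during fixation.
target_h = np.array([0.0, -10.0, 10.0, 0.0, 0.0])
voltage_h = np.array([0.02, -0.49, 0.53, 0.02, 0.02])

# A first-order least-squares fit maps voltage to angle; five points
# only support a simple model, so higher-order terms are omitted.
gain, offset = np.polyfit(voltage_h, target_h, 1)

def volts_to_degrees(v):
    """Convert a raw coil voltage to a horizontal Fick angle (deg)."""
    return gain * v + offset

print(round(volts_to_degrees(0.53), 1))  # ~10 deg at the rightward target
```

In practice one such fit is made per eye and per channel (horizontal and vertical) before converting the 1000 Hz recordings to Fick angles.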
Visual Targets.
During the experiment, the subject was seated in a chair that was placed in a completely darkened room. Three light-emitting diodes (LEDs) placed at eye level at different viewing distances from the subject served as visual stimuli (Fig. 1). One of two red LEDs, placed in front of the subject and aligned with the subject's left eye at 94 cm (far) or 31 cm (near) viewing distance, functioned as fixation point (fixation LED). The third (white) LED served as visual target (target LED). The target LED was suspended from the ceiling 45 cm in front and 13 cm to the left relative to the subject's left eye. 
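The required monocular gaze angles and the vergence demand at each LED follow from this geometry. A sketch (the function and variable names are ours, not the authors'): the leftward rotation of the left eye toward a target at depth d and lateral offset x from the left eye is atan(x/d), and the right eye must additionally cover the interocular distance:

```python
import math

def gaze_and_vergence(depth_cm, lateral_cm, iod_cm=6.5):
    """Return (left_angle, right_angle, vergence) in degrees for a target
    depth_cm in front of and lateral_cm to the left of the LEFT eye.
    Angles are unsigned magnitudes of each eye's leftward rotation;
    iod_cm is the mean interocular distance reported in the Methods."""
    left = math.degrees(math.atan2(lateral_cm, depth_cm))
    # The right eye sits iod_cm farther right, so its lateral offset is larger.
    right = math.degrees(math.atan2(lateral_cm + iod_cm, depth_cm))
    return left, right, right - left

# Target LED: 45 cm in front and 13 cm to the left of the left eye.
left, right, vergence = gaze_and_vergence(45.0, 13.0)
print(round(left, 1), round(right, 1), round(vergence, 1))  # prints: 16.1 23.4 7.3
```

These predicted angles agree with the fixation-trial averages reported in the Results (−16.30°, −23.30°, and 7.02° vergence) to within measurement noise. Applied to the fixation LEDs (lateral offset 0), the same function gives roughly 4.0° of vergence at the far LED and 11.9° at the near LED, so the task demanded a few degrees of convergence or divergence, respectively.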
Figure 1
Schematic top view of visual targets. The fixation LEDs (black closed circles) were placed in line with the subject's left eye: One target was at a distance of 94 cm (“far”) and one at a distance of 31 cm (“near”) in depth. The target LED (gray closed circles) was located at a distance of 45 cm in front and 13 cm to the left relative to the subject's left eye.
Experimental Procedures
Memory-Guided Gaze Shifts.
In the memory-guided experiment, subjects were instructed to make eye movements guided from memory (see Fig. 2B). The subjects were asked to fixate one of the two fixation targets. At a random interval during this fixation period, the target LED was flashed for 80 ms. Subjects had to maintain fixation while remembering the location of the target LED. The fixation LED was switched off 1800 ms after the target LED was extinguished, at which time the subject had to make a saccade to the remembered location of the target in 3-D space. Twenty trials were recorded per experimental session, and all conditions were presented in randomized order. This resulted in 10 repetitions of each condition. No visual feedback of the location of the flashed target was provided after a trial. Data were analyzed post hoc for a possible learning effect. The trials were preceded by two fixation trials (Fig. 2A). In these two trials the subjects had to fixate the target LED that was presented for 5 seconds. Fixation trials were used to calculate the errors in horizontal gaze and vergence toward the flashed target (see Data Analysis section). 
Figure 2
Setup of trials. Trials are illustrated for the “near” condition. (A) Fixation trials: The subjects were instructed to fixate the target LED that was presented for 5 seconds. Fixation trials were used to calculate the errors in horizontal gaze and vergence toward the target LED. (B) Memory-guided trials: The subjects were asked to fixate the fixation LED while the target LED was turned on for 80 ms. Subjects had to maintain fixation on the fixation LED, and at the same time remember the location of the peripheral flashed target LED. 1800 ms after the target LED was extinguished, the fixation LED was turned off and the subject had to make a saccade to the remembered location of the target LED.
Control Experiments.
Three control experiments were performed in which three subjects participated: 
  1. Visually guided gaze shifts: This experiment was similar to the memory-guided experiment, except that the target LED was turned on again at the same time the fixation LED was turned off. The subjects were asked to make an eye movement toward the target LED after the fixation target was switched off.
  2. Lateralization effects: To exclude possible left–right differences in performance due to eye dominance, we presented the flash target on the right side of the subject. Test conditions and trial numbers were identical to those for the memory-guided gaze shift trial except that the flash target was placed 45 cm in depth and 13 cm to the right relative to the right eye.
  3. Peripheral visual cueing effects: To test the influence of depth cues on localization of the flash target, a light source consisting of eight bright white LEDs was placed on a frame above the subject's head. The light source illuminated the environment before and during the period when the flash target was turned on.
Data Analysis
Data were analyzed offline using custom-written MATLAB (MathWorks, Natick, MA) routines. Eye movements to the 5-point calibration pattern, retrieved during the calibration task, were used to transform voltage into Fick angles. Trials were excluded when the subject did not keep his or her eyes aimed at the fixation target, did not make a saccade after the fixation target was extinguished, or made a saccade before the fixation target was extinguished. We computed the horizontal gaze position of the left and right eye separately. We determined horizontal gaze and vergence at fixed points during each task (Fig. 3): the initial gaze position when the eyes were fixated on the fixation target (t1) and after the memory-guided eye movement (t2). In the case of a single memory-guided saccade, t2 was selected after the saccade but before slow (drifting) eye movements occurred. In cases in which multiple saccades were made to reach the memorized location of the target, we defined t2 as the first moment after the last of the series of saccades within a time frame of 1 second when gaze reached a stable value. Saccades were detected by calculating the horizontal angular eye velocity signal from the position signal. Peak velocities were calculated from the selected saccade signals. Vergence angles were calculated as the difference between left and right horizontal gaze (L − R). Version was calculated as the average of left and right horizontal gaze ([L + R]/2). 
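The version/vergence decomposition and the velocity-based saccade detection described here can be sketched as follows. This is a minimal illustration: the 50 deg/s threshold and the toy step traces are our assumptions, not values from the paper.

```python
import numpy as np

FS = 1000.0  # sampling rate in Hz, as stated in the Methods

def decompose(left_deg, right_deg):
    """Split binocular horizontal gaze into version ([L + R]/2)
    and vergence (L - R) components, as defined in the Methods."""
    left, right = np.asarray(left_deg), np.asarray(right_deg)
    return (left + right) / 2.0, left - right

def detect_saccades(position_deg, threshold_deg_s=50.0):
    """Mark samples whose angular velocity exceeds a threshold
    (50 deg/s is an illustrative value, not reported in the paper)."""
    velocity = np.gradient(np.asarray(position_deg)) * FS  # deg/s
    return np.abs(velocity) > threshold_deg_s

# Toy traces: both eyes fixate, then jump leftward with unequal amplitudes.
t = np.arange(0, 0.5, 1 / FS)
left = np.where(t < 0.25, -16.0, -32.0)   # 16 deg leftward shift
right = np.where(t < 0.25, -23.0, -36.0)  # 13 deg leftward shift
version, vergence = decompose(left, right)
print(round(vergence[0], 1), round(vergence[-1], 1))  # prints: 7.0 4.0
```

The unequal amplitudes make vergence fall from 7.0° to 4.0°, i.e., a divergent gaze shift like the “near” condition; `detect_saccades` flags the step itself.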
Figure 3
Points in time used for error calculation. Simulation of horizontal gaze (top panel) and vergence (bottom panel) against time. We determined horizontal gaze and vergence at fixed points during each task: the initial position when the eyes were fixated on the fixation LED (t1) and after the memory-guided eye movement, but before slow drifting eye movements (t2).
Errors in Horizontal Gaze (Direction).
After retrieving horizontal gaze at times t1 and t2, we calculated the errors for all trials. Absolute errors were calculated for each eye by subtracting the correct gaze obtained during the fixation trials from the actual horizontal gaze at t2:

gaze error = gaze(t2) − correct gaze

We also determined gaze errors for version using the following equation:

version error = version(t2) − correct version

Subject-specific correct gaze values were obtained during the fixation trials. A negative error for gaze or version indicates that the leftward memory-guided eye movement was too large, whereas a positive error indicates that the eye movement was too small. 
Errors in Vergence (Depth).
Absolute vergence errors were calculated using the following equation:

vergence error = vergence(t2) − correct vergence

Subject-specific correct vergence angles were obtained during the fixation trials. Negative vergence errors indicate that the vergence angle is too small, while positive errors indicate that the vergence angle is too large. 
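The gaze, version, and vergence error definitions share one form: the measured value at t2 minus the subject-specific correct value from the fixation trials. A sketch with hypothetical numbers (the helper name and the sample values are ours; signs follow the conventions stated in the text):

```python
def error(actual_t2, correct_fixation):
    """Error = measured gaze/version/vergence at t2 minus the
    subject-specific correct value from the fixation trials."""
    return actual_t2 - correct_fixation

# Hypothetical subject: correct values from the fixation trials (deg).
correct = {"left": -16.2, "right": -23.6, "vergence": 7.3}

# A memory-guided trial landing short of the leftward target,
# with too small a vergence angle.
measured = {"left": -14.8, "right": -22.0, "vergence": 6.1}

errors = {k: error(measured[k], correct[k]) for k in correct}
# Positive gaze errors: the leftward movement was too small.
# Negative vergence error: the vergence angle is too small.
print({k: round(v, 1) for k, v in errors.items()})
```

Here both gaze errors come out positive (undershoot of the leftward movement) and the vergence error negative (vergence undershoot), matching the sign conventions above.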
Statistics.
The mean and standard deviation of gaze and vergence errors for each specific spatial task were calculated group-wise and tested for normality. To test for a possible learning effect during the experiment, regression lines were fitted to vergence errors as a function of trial number for each subject. The significance of version, gaze, and vergence errors was tested using one-sample t-tests against a test value of zero. Differences between the errors of each eye were tested for significance using trial-based paired-samples t-tests. To test the influence of fixation distance and of each eye on the errors, we performed univariate analysis of variance (ANOVA). Left and right eye peak velocities were tested for significant differences with a trial-based paired-samples t-test. We compared the memory-guided with the visually guided peak eye velocities using independent-samples t-tests. The vergence errors with and without illumination of the environment were compared using independent-samples t-tests. All statistical analyses (memory-guided trials and control trials) were performed on trial-based aggregated data of the subjects. Statistical analyses were performed in SPSS (IBM SPSS, New York, NY). 
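The three t-test types used here map directly onto standard routines. The sketch below reproduces them on synthetic trial data (all numbers are invented for illustration; the authors ran their analyses in SPSS, not Python):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical trial-based gaze errors (deg) for one condition.
left_errors = rng.normal(0.6, 3.0, 60)    # left-eye errors
right_errors = rng.normal(1.2, 3.4, 60)   # right-eye errors, same trials

# One-sample t-test against zero: is the mean error significantly biased?
t_one, p_one = stats.ttest_1samp(right_errors, popmean=0.0)

# Paired-samples t-test: do the two eyes differ trial by trial?
t_pair, p_pair = stats.ttest_rel(left_errors, right_errors)

# Independent-samples t-test, e.g., memory- vs. visually guided peak velocities.
memory = rng.normal(-150, 30, 40)   # deg/s
visual = rng.normal(-250, 30, 40)   # deg/s
t_ind, p_ind = stats.ttest_ind(memory, visual)
print(p_ind < 0.001)  # prints: True
```

With a 100 deg/s difference in means and 30 deg/s spread, the independent-samples test is decisive, mirroring the P < 0.001 contrasts reported in the Results.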
Results
Memory-Guided Gaze Shifts
All subjects had phoria values, measured with the Maddox rod, of less than 1°. All subjects successfully fixated the flash target with high precision in the fixation trials immediately preceding the memory-guided experiment. When we switched off the fixation target, vergence remained fairly stable over a period of at least 4 seconds (mean vergence change 0.52 ± 0.8° for the far fixation target and 1.5 ± 1.2° for the near fixation target). Approximately 15% of the memory-guided gaze shift trials were discarded, mostly because subjects started a saccade toward the target LED too early. The average horizontal gaze in the fixation trials was −16.30 ± 0.45° and −23.30 ± 0.89° for the left and right eye, respectively, while vergence was 7.02 ± 0.51°. Parameters obtained from the fixation trials were subject specific and were used to calculate the performance of memory-guided eye movements. Subject-specific average errors for horizontal gaze of each eye and for version and vergence, including the number of analyzed trials, are listed in Table 1, together with subject-specific gaze, version, and vergence from the fixation trials and eye dominance. To test for a possible learning effect during the experiment, we also calculated vergence errors for each subject as a function of trial number. The fitted regression lines had an average slope of 0.00 ± 0.07° per trial, indicating that there was no learning effect that could have influenced the measurement series. 
Table 1. 
Average Errors and Standard Deviation per Subject

Subject  Measure    Error Near, deg ± SD   Error Far, deg ± SD   Fixation Trial, deg   Trials Near/Far   Eye Dominance
1        Left       −0.86 ± 2.18           −0.94 ± 2.00          −16.24                7/6               Right
         Right      −1.22 ± 2.23            2.26 ± 2.00          −23.57
         Version    −1.04 ± 2.05            0.66 ± 1.94          −19.90
         Vergence    0.20 ± 1.50           −3.30 ± 0.83            7.33
2        Left        2.31 ± 3.69            0.30 ± 2.76          −17.31                8/8               Right
         Right       2.88 ± 2.35            3.31 ± 2.61          −24.79
         Version     2.59 ± 2.96            1.80 ± 2.66          −21.06
         Vergence   −0.71 ± 1.91           −3.15 ± 0.61            7.48
3        Left       −2.11 ± 1.49           −2.97 ± 1.23          −16.04                6/7               Left
         Right      −5.71 ± 2.50           −5.04 ± 1.95          −22.75
         Version    −3.91 ± 1.87           −4.00 ± 1.56          −19.40
         Vergence    3.85 ± 1.63            2.29 ± 1.11            6.71
4        Left        2.22 ± 1.88            0.26 ± 1.38          −16.24                10/10             Right
         Right       3.56 ± 1.51            2.43 ± 1.39          −22.97
         Version     2.89 ± 1.68            1.34 ± 1.36          −19.61
         Vergence   −1.35 ± 0.58           −2.17 ± 0.53            6.73
5        Left       −2.40 ± 1.54           −3.50 ± 2.67          −16.00                7/9               Right
         Right      −0.09 ± 1.35           −0.02 ± 2.10          −22.66
         Version    −1.24 ± 1.42           −1.76 ± 2.37          −19.33
         Vergence   −2.29 ± 0.52           −3.46 ± 0.75            6.67
6        Left        2.81 ± 2.30            2.91 ± 2.53          −15.95                9/8               Right
         Right       4.37 ± 2.43            5.52 ± 2.83          −22.59
         Version     3.59 ± 2.36            4.21 ± 2.67          −19.38
         Vergence   −1.57 ± 0.47           −2.61 ± 0.51            6.85
7        Left        1.23 ± 2.43           −4.21 ± 1.67          −16.52                9/9               Left
         Right       1.69 ± 2.21           −2.03 ± 1.97          −24.46
         Version     1.46 ± 1.46           −3.12 ± 1.81          −20.05
         Vergence   −0.45 ± 0.58           −2.17 ± 0.50            7.94
8        Left       −1.31 ± 2.32            2.08 ± 1.88          −16.08                7/8               Right
         Right      −0.07 ± 2.43            2.40 ± 1.45          −22.55
         Version    −0.69 ± 2.36            2.24 ± 1.59          −19.32
         Vergence   −1.34 ± 0.59           −0.42 ± 1.14            6.47
Horizontal Gaze
Figures 4 and 5 plot horizontal gaze against time for one subject (subject 6) in the “near” and “far” memory-guided conditions, respectively. The top panel shows the horizontal gaze for each eye separately. Figure 4 illustrates the “near” condition, in which the fixation LED was placed in front of the target LED; Figure 5 illustrates the “far” condition, in which the fixation LED was behind the target LED. The bars on top of the horizontal gaze traces indicate the moments in time when the fixation and target LEDs were illuminated. In all conditions shown in our plots, leftward eye movements were made toward the memorized location of the target LED. The correct horizontal gaze values, retrieved from each subject in the fixation trials, are plotted as a reference. 
Figure 4
Example of memory-guided trials for the “near” condition. The examples of eye movements were obtained from subject 6. Horizontal gaze of both eyes is plotted against time in the top panel, and vergence is plotted against time in the bottom panel. The bars on top of the horizontal gaze traces indicate the illumination of the fixation (“near”) and target LED. The dashed lines are the subject-specific correct gaze and vergence values retrieved from the fixation trials. Leftward eye movements are made toward the memorized location of the target LED. Gaze of the left eye (red traces) is more accurate than gaze of the right eye (blue traces). Vergence decreased during the memory-guided eye movement.
Figure 5
Example of memory-guided trials for the “far” condition. The examples of eye movements were obtained from subject 6. Horizontal gaze of both eyes is plotted against time in the top panel, and vergence is plotted against time in the bottom panel. The bars on top of the horizontal gaze traces indicate the illumination of the fixation (“far”) and target LED. The dashed lines are the subject-specific correct gaze and vergence values retrieved from the fixation trials. Leftward eye movements are made toward the memorized location of the target LED. Gaze of the left eye (red traces) is more accurate than gaze of the right eye (blue traces). Vergence increased during the memory-guided eye movement.
Average gaze errors, calculated from all trials per condition, are expressed in degrees and presented in Table 2 (left column). The tests of normality indicated that the data were normally distributed. In the “near” condition, mean gaze errors (mean ± SD) were 0.59 ± 2.97° and 1.23 ± 3.44° for the left and the right eye, respectively. In the “far” condition, mean gaze errors (mean ± SD) were −0.67 ± 3.23° and 1.42 ± 3.43° for the left and the right eye, respectively. In both conditions, the gaze error of the right eye was significantly different from zero (P < 0.01), whereas the gaze error of the left eye was not (P = 0.13 and P = 0.11 for “near” and “far,” respectively). This indicates that the mean gaze errors of the left eye were smaller. We further evaluated this with a paired-samples t-test, which showed a significant difference between the left and right eye errors (P < 0.01 and P < 0.001, “near” and “far,” respectively). 
Table 2. 
Horizontal Gaze and Vergence Errors

Condition                    Horizontal Gaze Error, deg      Vergence Error, deg
                             Mean ± SD       P Value*        Mean ± SD       P Value*
Near fixation, divergence
 Left eye                     0.59 ± 2.97    P = 0.13        −0.67 ± 1.83    P < 0.01
 Right eye                    1.23 ± 3.44    P < 0.01
 Version                      1.69 ± 3.10    P < 0.05
Far fixation, convergence
 Left eye                    −0.67 ± 3.23    P = 0.11        −2.12 ± 1.65    P < 0.001
 Right eye                    1.42 ± 3.43    P < 0.01
 Version                      1.22 ± 3.82    P = 0.37
We also calculated gaze errors for the average of the two eyes (version). These values are also shown in Table 2. Version gaze errors (mean ± SD) were 1.69 ± 3.10° and 1.22 ± 3.82° for the “near” and “far” conditions, respectively. Version gaze error in the “near” condition was significantly different from zero (P < 0.05), whereas the version gaze error in the “far” condition was not (P = 0.37). A univariate ANOVA, with fixation LED (near/far) and horizontal gaze (left, right, and version) as fixed factors and gaze error as dependent variable, indicated no effect of fixation LED (F(1,366) = 2.13, P = 0.15), but a significant effect of horizontal gaze (F(2,366) = 7.55, P < 0.01). Post hoc Tukey test indicated that the error of the left eye was significantly smaller than the error of the right eye and version (for left versus right and left versus version, P < 0.01). 
Vergence
The bottom panels of Figures 4 and 5 show examples of vergence for the “near” and “far” memory-guided conditions, respectively. In the “near” condition (Fig. 4) vergence decreased, because the flashed target LED was farther away than the fixation LED, whereas in the “far” condition (Fig. 5) the target LED was in front of the fixation LED. This required that the subject make a convergence eye movement toward the location of the remembered target. 
Errors in vergence are shown in Table 2 (right column). In the “near” condition, vergence error (±SD) was −0.67 ± 1.83°; this value was significantly different from zero (P < 0.01). This negative error indicates that vergence decreased too much: In other words, the divergence eye movement was too large. In the “far” condition, vergence error (mean ± SD) was −2.12 ± 1.65°, which is also significantly different from zero (P < 0.001). The negative vergence error in the “far” condition indicates that the convergent eye movement was too small: Vergence did not increase enough, resulting in a vergence undershoot. Vergence errors under the “near” and “far” conditions were significantly different (unpaired t-test, P < 0.001). Errors were significantly smaller for divergent eye movements than for convergent eye movements. 
Memory-Guided versus Visually Guided Gaze Shift
The memory-guided experiment (with the target LED on the left side) was designed such that the right eye had to make a larger gaze shift than the left eye in the “far” condition. However, in the “near” condition, the left eye had to make a larger gaze shift than the right eye. One way to overcome this difference in amplitude between left and right eye is to generate unequal saccades. 
Examples of binocular gaze shifts under visually guided and memory-guided conditions from one subject (subject 1) are presented in Figure 6. Visually guided gaze shifts and eye velocities are presented in the left panels. When the subject made a gaze shift under visually guided conditions, starting from the “far” fixation LED, the gaze shift made by the right eye was larger, resulting in higher peak velocity of the right eye. As expected, the opposite was found when gaze shifts were made from the “near” fixation LED to the target LED. Intrasaccadic peak velocity of the left eye was slightly larger than that of the right eye. The differences in right versus left eye peak velocities were significant and consistent across all three subjects (paired-samples t-test, P < 0.001). The average peak velocities from all subjects for each condition are presented in Table 3. 
Figure 6
Example of visually guided versus memory-guided gaze shifts. The examples of eye movements were obtained from subject 1. Horizontal gaze (top panels) and eye velocity (bottom panels) of both eyes are plotted against time. The smaller inset in each panel shows the corresponding vergence position and velocity at ×2.5 magnification along the y-axis. Visually guided trials are presented in the left panels. Starting from the “far” fixation LED, the gaze shift and peak velocity of the right eye were larger. In the “near” fixation condition, intrasaccadic peak velocity of the left eye was slightly larger than that of the right eye. Memory-guided traces are presented in the right panels. In both the “near” and “far” fixation conditions, the left (abducting) eye was faster than the right (adducting) eye during the gaze shift to the target. Note that the peak velocities in the memory-guided trials were lower than in the visually guided trials.
Table 3. 
 
Peak Velocities per Subject
Subject   Eye         Peak Velocity Near, deg/s ± SD   Peak Velocity Far, deg/s ± SD
Visually guided
 1        Left eye    −192.29 ± 32.58                  −173.84 ± 20.96
          Right eye   −172.51 ± 34.72                  −202.67 ± 19.36
 2        Left eye    −258.46 ± 28.76                  −337.02 ± 21.22
          Right eye   −237.45 ± 27.40                  −340.14 ± 22.48
 3        Left eye    −369.86 ± 18.88                  −367.52 ± 11.02
          Right eye   −360.59 ± 17.46                  −395.95 ± 15.18
Memory guided
 1        Left eye    −126.31 ± 23.57                  −145.55 ± 17.29
          Right eye   −109.43 ± 18.78                  −136.67 ± 21.44
 2        Left eye    −165.34 ± 38.45                  −220.81 ± 49.67
          Right eye   −144.18 ± 35.15                  −209.03 ± 46.19
 3        Left eye    −190.20 ± 47.81                  −205.03 ± 42.29
          Right eye   −148.91 ± 40.36                  −180.56 ± 41.52
For memory-guided binocular gaze shifts, the situation was different (see Fig. 6, right panels). In the “far” condition, the left (abducting) eye was faster than the right (adducting) eye during the gaze shift to the target. Subjects used postsaccadic vergence mediated by the right (adducting) eye to direct their binocular gaze to the memorized position in 3-D space. In the “near” condition, the left and right eye had different peak velocities during the gaze shifts to the remembered target. Similar to what was observed in the visually guided saccades, the peak velocity of the right (adducting) eye was smaller than that of the left (abducting) eye. The difference in right versus left eye peak velocity was significant and consistent across all three subjects; see Table 3 (paired-samples t-test, P < 0.001). 
Furthermore, we found that the peak velocities in the memory-guided trials were lower than in the visually guided trials (Table 3). We compared the visually guided with the memory-guided peak velocities of each eye and found a significant difference in all conditions (independent-samples t-test, P < 0.001). We also analyzed latencies (the time between switching off the fixation LED and the initiation of an eye movement) in both the visually and memory-guided trials. The average latency was 370 ± 8 ms (SD) in the memory-guided trials and 360 ± 8 ms (SD) in the visually guided trials; the difference was not significant. 
To compare the peak eye velocities, we expressed the differences between right and left eye peak velocities as right eye–left eye velocity ratios. The velocity ratios are presented in Figure 7. Only in the “far” visually guided condition was the velocity ratio larger than 1, indicating that the peak velocity of the right (adducting) eye was larger. In all other conditions, the left (abducting) eye had a larger peak velocity. All velocity ratios were significantly different from 1.0 (one-sample t-test, P < 0.001). For both the “far” and the “near” conditions, right eye–left eye velocity ratios between visually guided and memory-guided conditions were significantly different (independent-samples t-test, P < 0.001). 
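The ratio analysis is straightforward to reproduce from the per-subject mean peak velocities in Table 3. The following is a minimal sketch (not the authors' analysis code) using subject 1's published means; both eyes move in the same direction, so the negative signs cancel and a ratio above 1 means the right (adducting) eye was faster:

```python
# Right eye / left eye peak-velocity ratios from the subject 1 means in
# Table 3 (deg/s; leftward movements, hence the negative signs).
peak_velocity = {
    # (condition, fixation): (left eye, right eye)
    ("visual", "far"): (-173.84, -202.67),
    ("visual", "near"): (-192.29, -172.51),
    ("memory", "far"): (-145.55, -136.67),
    ("memory", "near"): (-126.31, -109.43),
}

def velocity_ratio(left, right):
    """Right-eye / left-eye peak-velocity ratio; the signs cancel."""
    return right / left

for (condition, fixation), (left, right) in peak_velocity.items():
    print(f"{condition:6s} {fixation:4s}: ratio = {velocity_ratio(left, right):.2f}")
```

For this subject, only the visually guided "far" pair yields a ratio above 1, consistent with the pattern shown in Figure 7.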
Figure 7
 
Eye velocity ratios. The differences between right and left eye peak velocities are plotted as right eye–left eye velocity ratios. Velocity ratios are plotted for each subject both for the visually guided (black bars) and for the memory-guided (gray bars) “near” (bottom) and “far” (top) fixation conditions. In the “far” fixation visually guided condition, the velocity ratio was above 1, indicating that the peak velocity of the right (adducting) eye was larger. In all other conditions the left (abducting) eye had a larger peak velocity.
Lateralization Effects
To exclude the possibility that eye preferences influenced the memory-guided gaze shifts, we calculated gaze errors for both the "near" and "far" conditions from the second control experiment, in which the target LED was placed on the right side. Gaze errors (mean ± SD) in the "near" control condition were not significantly different from zero: −0.83 ± 3.52° (P = 0.21) and 0.82 ± 4.60° (P = 0.34) for the left and right eye, respectively. Gaze errors (±SD) in the "far" condition were significantly different from zero for the left eye (−3.61 ± 2.42°, P < 0.001), but not for the right eye (−0.99 ± 3.30°, P = 0.15). Paired-samples t-tests demonstrated significant differences between left and right eye gaze errors (P < 0.001 for both the "near" and "far" conditions). A univariate ANOVA, with fixation LED (near/far) and horizontal gaze (left, right, and version) as fixed factors and gaze error as the dependent variable, indicated a significant effect of both fixation LED (F(1,165) = 16.99, P < 0.001) and horizontal gaze (F(2,165) = 4.88, P < 0.01). A post hoc Tukey test indicated that the error of the right eye was significantly smaller than the error of the left eye (left versus right, P < 0.01). The data show that differences in precision are based on differences between the adducting and abducting eye, and not on eye preference. 
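The statistic behind these comparisons against zero is the one-sample t-test. As a minimal sketch of how it is computed (the gaze-error sample below is hypothetical, for illustration only, since the paper does not report per-trial data; the resulting t is compared against a t distribution for the p value):

```python
import math
import statistics

def one_sample_t(data, mu=0.0):
    """One-sample t statistic for H0: mean(data) == mu."""
    n = len(data)
    return (statistics.mean(data) - mu) / (statistics.stdev(data) / math.sqrt(n))

# Hypothetical gaze errors in degrees (not the authors' raw data).
errors = [-0.5, 1.2, -2.0, 0.3, -1.1, 0.8, -0.4, 1.5]
print(f"t({len(errors) - 1}) = {one_sample_t(errors):.2f}")
```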
Peripheral Visual Cueing Effects
In the third control experiment, we repeated the memory-guided gaze shift experiment while providing additional depth cues. Additional cues, created by illuminating the environment while the target LED was turned on, did not improve the memory-guided vergence movements. On the contrary, with illumination of the environment during memory-guided trials, the vergence errors were larger than without illumination. In the "near" condition, vergence errors were larger with illumination (mean = −1.14°, SD = 2.55°) than without illumination (mean = −0.67°, SD = 1.83°); this difference was not significant, t(84) = 0.96, P = 0.34. In the "far" condition, vergence errors were also larger with illumination (mean = −3.26°, SD = 1.30°) than without illumination (mean = −2.12°, SD = 1.65°); this difference was significant, t(29.09) = 2.95, P < 0.05. 
Discussion
The aim of our experiments was to assess the accuracy and dynamics of binocular eye movements toward memorized targets in terms of direction and depth (version and vergence). We found significant differences in horizontal gaze errors between the left and the right eye that were related to viewing direction. When the memorized target was located on the left side of the subject, such that the left eye had to make an abducting (temporal) movement and the right eye an adducting (nasal) movement toward the remembered target, the gaze error of the right eye was larger than that of the left eye. When the target LED was placed on the right side of the subject, the opposite was found: gaze errors were in this case significantly larger in the left (adducting) eye. This means that in orienting gaze toward memorized targets, the abducting eye is more accurate than the adducting eye. It also excludes the possibility that eye dominance influenced the task, because in the control experiment, in which the same subjects participated, the abducting eyes remained the most accurate. Because version (the average gaze of both eyes) is a commonly used measure to quantify directional gaze shifts, we also compared version with gaze of each eye separately. When version errors alone are used (shown in Table 2), one may conclude that significantly larger errors are made in the "near" condition than in the "far" condition. However, gaze errors of each eye separately show that equally large errors are made in both conditions. Thus, when version alone is used to analyze memory-guided saccades, important information on the performance of each eye separately might be missed. 
We also compared memory-guided binocular gaze shifts that had either a convergence or a divergence component. In one condition the initial point of fixation of both eyes was at far distance, requiring convergent eye movements toward the remembered location of the target LED; in the other, the initial point of fixation was at near distance, requiring divergent memory-guided eye movements. We found that under the condition that required divergence, the error in vergence was significantly smaller than when convergence was required. Adding visual depth cues during the flash did not reduce the vergence errors. In both the "near" and "far" conditions, the remembered vergence angles toward the target LED were too small. One explanation is that the subjects underestimated the target distance. However, this is contradicted by the fact that adding depth cues did not improve the vergence errors. Another explanation might be that the eyes have a tendency to diverge because of a limitation in vergence control from memory. This suggests that passive release of vergence is an easier task than active increase of vergence, in line with the finding that without visual input the eyes assume a physiological resting vergence position. 25 Because in our experiment gaze errors were unequal in the two eyes, with the abducting eye being more accurate, vergence errors were mainly present in the adducting eye. 
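Version and vergence, as used throughout, are simple combinations of the two eyes' horizontal positions. A minimal sketch with the usual convention (vergence = left-eye angle minus right-eye angle, positive for convergence); the example input reproduces the subject 1 fixation-trial values in Table 1:

```python
def version_vergence(left_deg, right_deg):
    """Decompose the two eyes' horizontal positions into version (the
    mean gaze direction) and vergence (left minus right; positive when
    the eyes converge, by the usual sign convention)."""
    version = (left_deg + right_deg) / 2.0
    vergence = left_deg - right_deg
    return version, vergence

# Subject 1 fixation-trial values from Table 1 (deg, leftward negative):
version, vergence = version_vergence(-16.24, -23.57)
print(f"version = {version:.2f} deg, vergence = {vergence:.2f} deg")
```

This yields roughly −19.9° version and 7.33° vergence, matching the Version and Vergence rows for subject 1 in Table 1.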
To our knowledge this difference in accuracy of abducting and adducting memory-guided eye movements has not been described before. The differences in neural pathways that are involved in the control of abducting and adducting eye movements offer a possible explanation for this finding. Innervation of the lateral rectus muscle, which abducts the eye, is a pathway directly driven by the abducens nucleus and nerve (VI), whereas the medial rectus muscle (which adducts the eye) is indirectly controlled by the abducens nucleus via the medial longitudinal fasciculus (MLF) and the oculomotor nucleus (III). The abducting eye has small gaze errors, and its movement is also faster than that of the adducting eye. Differences in peak velocity were previously shown by Collewijn and colleagues using visually guided tasks. 26  
The experimental design (with the target LED on the left side) was such that based on geometry, different-sized gaze shifts were required to fixate the target LED. When gaze shifts were made from the “far” fixation toward the target LED, the right (adducting) eye had to make a larger gaze shift than the left eye. However, in the “near” condition, the left eye (abducting eye) had to make a larger gaze shift than the right eye. Under visually guided conditions, this amplitude difference between the left and right eyes was achieved by generating unequal-sized saccades. For memory-guided saccades, the abducting eye was always faster than the adducting eye. Thus, under this condition, subjects had to use postsaccadic vergence, mediated by the adducting eye, to direct their gaze to the memorized position in 3-D space. Our data are consistent with the findings of Kumar et al. 21 Those authors, however, considered only peak saccade and peak vergence velocity ratios and did not analyze relative timing aspects. Our study shows that under visually guided conditions, the abducting eye was initially faster but that the adducting eye continued to increase its velocity, reaching its peak after the peak velocity of the abducting eye. 
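The geometric argument can be checked directly from the layout in Figure 1. The sketch below assumes an interocular distance of 6.5 cm (an assumed value, not reported in the paper) and computes the horizontal rotation each eye needs to get from a fixation LED to the target LED:

```python
import math

# Layout from Figure 1, in cm. The fixation LEDs are in line with the
# left eye; the target LED is 13 cm to the left and 45 cm in front of it.
# The 6.5 cm interocular distance is an assumption, not a reported value.
IPD = 6.5
LEFT_EYE, RIGHT_EYE = (0.0, 0.0), (IPD, 0.0)
FIX_FAR, FIX_NEAR = (0.0, 94.0), (0.0, 31.0)
TARGET = (-13.0, 45.0)

def azimuth(eye, point):
    """Horizontal direction of `point` from `eye`, in deg (leftward negative)."""
    return math.degrees(math.atan2(point[0] - eye[0], point[1]))

def required_shift(eye, fixation):
    """Horizontal rotation needed to move gaze from `fixation` to the target."""
    return azimuth(eye, TARGET) - azimuth(eye, fixation)

for name, fix in (("far", FIX_FAR), ("near", FIX_NEAR)):
    print(f"from {name}: left eye {required_shift(LEFT_EYE, fix):.1f} deg, "
          f"right eye {required_shift(RIGHT_EYE, fix):.1f} deg")
```

Under these assumptions the right (adducting) eye must rotate farther when starting from the "far" fixation LED, and the left (abducting) eye farther when starting from the "near" one, as described above.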
Our data show that different strategies are used to achieve 3-D gaze shifts under different conditions. An important issue is how our findings relate to the longstanding debate on the internal organization of binocular oculomotor control. The prevailing theory is Hering's law, which describes different vergence and conjugate commands that are summed by the motor neurons of each eye. 1 Helmholtz, however, postulated that each eye is individually controlled. 27 In support of Hering's theory, it has been shown that in the midbrain, different types of neurons are involved in the control of vergence. 6,28,29 These studies have demonstrated that vergence tonic cells, located in the mesencephalon, increase firing either before convergence or before divergence eye movements. It is considered that these vergence signals are added to conjugate saccadic motor commands. Thus, in this view, binocular saccadic eye movements are primarily controlled at the brainstem level by conjugate burst cells. However, more recently, monocular burst neurons encoding monocular commands for left and right eye saccades have been identified in the midbrain. 11 It has also been demonstrated that the brainstem burst generators, which were commonly assumed to drive only the conjugate component of eye movements, carry substantial vergence-related information during disconjugate saccades. 13 As pointed out by Kumar et al., 21 changes in binocular behavior that are quantitatively different for saccades and vergence are inconsistent with the independent eye control theory. Such changes were found during convergence, but not during divergence, both in our study and in the study of Kumar et al. 21  
Our findings suggest that the internal circuitry for making 3-D gaze shifts depends on continuous visual and local feedback when intrasaccadic convergence eye movements are made during binocular gaze shifts. This does not exclude the possibility that there is also a memory system that keeps track of where a target is in 3-D space and that can be accessed by the oculomotor control system: otherwise subjects could not make memory-guided binocular gaze shifts at all. A possible candidate is the caudal frontal eye field. 30,31 However, to use this information, the oculomotor system has to rely more on sequential processing by fast and slow systems than on simultaneous processing of saccade and vergence commands. The fact that the situation is different during divergence may very well reflect that for near vision there is already a state of tonic vergence, which is readily released at an early stage of the binocular gaze shift. 
Acknowledgments
We thank our subjects for participating in the experiments. 
References
1. Hering E. Die Lehre vom binokularen Sehen. Leipzig: Engelmann; 1868.
2. Bridgeman B, Stark L, eds. Ewald Hering's Theory of Binocular Vision (Die Lehre vom binokularen Sehen). Bridgeman B, trans. New York: Plenum Press; 1977.
3. Goebel HH, Komatsuzaki A, Bender MB, Cohen B. Lesions of the pontine tegmentum and conjugate gaze paralysis. Arch Neurol. 1971;24:431–440.
4. Hepp K, Henn V. Spatio-temporal recoding of rapid eye movement signals in the monkey paramedian pontine reticular formation (PPRF). Exp Brain Res. 1983;52:105–120.
5. Zee DS. Brain stem and cerebellar deficits in eye movement control. Trans Ophthalmol Soc U K. 1986;105(pt 5):599–605.
6. Mays LE. Neural control of vergence eye movements: convergence and divergence neurons in midbrain. J Neurophysiol. 1984;51:1091–1108.
7. Mays LE, Porter JD, Gamlin PD, Tello CA. Neural control of vergence eye movements: neurons encoding vergence velocity. J Neurophysiol. 1986;56:1007–1021.
8. Zee DS, Fitzgibbon EJ, Optican LM. Saccade-vergence interactions in humans. J Neurophysiol. 1992;68:1624–1641.
9. Collewijn H, Erkelens CJ, Steinman RM. Voluntary binocular gaze-shifts in the plane of regard: dynamics of version and vergence. Vision Res. 1995;35:3335–3358.
10. Busettini C, Mays LE. Saccade-vergence interactions in macaques. II. Vergence enhancement as the product of a local feedback vergence motor error and a weighted saccadic burst. J Neurophysiol. 2005;94:2312–2330.
11. Zhou W, King WM. Premotor commands encode monocular eye movements. Nature. 1998;393:692–695.
12. King WM, Zhou W. Neural basis of disjunctive eye movements. Ann N Y Acad Sci. 2002;956:273–283.
13. Van Horn MR, Sylvestre PA, Cullen KE. The brain stem saccadic burst generator encodes gaze in three-dimensional space. J Neurophysiol. 2008;99:2602–2616.
14. Westheimer G, Mitchell DE. The sensory stimulus for disjunctive eye movements. Vision Res. 1969;9:749–755.
15. Semmlow J, Wetzel P. Dynamic contributions of the components of binocular vergence. J Opt Soc Am. 1979;69:639–645.
16. Rashbass C, Westheimer G. Disjunctive eye movements. J Physiol. 1961;159:339–360.
17. Erkelens CJ. A dual visual-local feedback model of the vergence eye movement system. J Vis. 2011;11:1–14.
18. White JM, Sparks DL, Stanford TR. Saccades to remembered target locations: an analysis of systematic and variable errors. Vision Res. 1994;34:79–92.
19. Baker JT, Harper TM, Snyder LH. Spatial memory following shifts of gaze. I. Saccades to memorized world-fixed and gaze-fixed targets. J Neurophysiol. 2003;89:2564–2576.
20. Leigh RJ, Zee DS. The Neurology of Eye Movements. New York: Oxford University Press; 2006.
21. Kumar AN, Han YH, Liao K, Leigh RJ. Tests of Hering- and Helmholtz-type models for saccade-vergence interactions by comparing visually guided and memory-guided movements. Ann N Y Acad Sci. 2005;1039:466–469.
22. Klier EM, Hess BJ, Angelaki DE. Differences in the accuracy of human visuospatial memory after yaw and roll rotations. J Neurophysiol. 2006;95:2692–2697.
23. Klier EM, Hess BJ, Angelaki DE. Human visuospatial updating after passive translations in three-dimensional space. J Neurophysiol. 2008;99:1799–1809.
24. Robinson DA. A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Trans Biomed Eng. 1963;10:137–145.
25. Rosenfield M. Tonic vergence and vergence adaptation. Optom Vis Sci. 1997;74:303–328.
26. Collewijn H, Erkelens CJ, Steinman RM. Binocular co-ordination of human horizontal saccadic eye movements. J Physiol. 1988;404:157–182.
27. Helmholtz H. Helmholtz's Treatise on Physiological Optics [English translation of Handbuch der physiologischen Optik, 1910]. New York: Dover; 1962.
28. Zhang Y, Mays LE, Gamlin PD. Characteristics of near response cells projecting to the oculomotor nucleus. J Neurophysiol. 1992;67:944–960.
29. Judge SJ, Cumming BG. Neurons in the monkey midbrain with activity related to vergence eye movement and accommodation. J Neurophysiol. 1986;55:915–930.
30. Fukushima J, Akao T, Shichinohe N, Kurkin S, Kaneko CR, Fukushima K. Neuronal activity in the caudal frontal eye fields of monkeys during memory-based smooth pursuit eye movements: comparison with the supplementary eye fields. Cereb Cortex. 2011;21:1910–1924.
31. Fukushima K, Fukushima J, Warabi T. Vestibular-related frontal cortical areas and their roles in smooth-pursuit eye movements: representation of neck velocity, neck-vestibular interactions, and memory-based smooth-pursuit. Front Neurol. 2011;2:78.
Footnotes
 Disclosure: J. Dits, None; J.J.M. Pel, None; A. Remmers, None; J. van der Steen, None
Figure 1
 
Schematic top view of visual targets. The fixation LEDs (black closed circles) were placed in line with the subject's left eye: One target was at a distance of 94 cm (“far”) and one at a distance of 31 cm (“near”) in depth. The target LED (gray closed circles) was located at a distance of 45 cm in front and 13 cm to the left relative to the subject's left eye.
Figure 2
 
Setup of trials. Trials are illustrated for the "near" condition. (A) Fixation trials: The subjects were instructed to fixate the target LED, which was presented for 5 seconds. Fixation trials were used to calculate the errors in horizontal gaze and vergence toward the target LED. (B) Memory-guided trials: The subjects were asked to fixate the fixation LED while the target LED was turned on for 80 ms. Subjects had to maintain fixation on the fixation LED and at the same time remember the location of the peripherally flashed target LED. Then, 1800 ms after the target LED was extinguished, the fixation LED was turned off and the subject had to make a saccade to the remembered location of the target LED.
Figure 3
 
Points in time used for error calculation. Simulation of horizontal gaze (top panel) and vergence (bottom panel) against time. We determined horizontal gaze and vergence at fixed points during each task: the initial position when the eyes were fixated on the fixation LED (t1) and after the memory-guided eye movement, but before slow drifting eye movements (t2).
Figure 4
 
Example of memory-guided trials for the "near" condition. The examples of eye movements were obtained from subject 6. Horizontal gaze of both eyes is plotted against time in the top panel, and vergence is plotted against time in the bottom panel. The bars on top of the horizontal gaze traces indicate the illumination of the fixation ("near") and target LEDs. The dashed lines are the subject-specific correct gaze and vergence values retrieved from the fixation trials. Leftward eye movements are made toward the memorized location of the target LED. Gaze of the left eye (red traces) is more accurate than gaze of the right eye (blue traces). Vergence decreased during the memory-guided eye movement.
Figure 5
 
Example of memory-guided trials for the "far" condition. The examples of eye movements were obtained from subject 6. Horizontal gaze of both eyes is plotted against time in the top panel, and vergence is plotted against time in the bottom panel. The bars on top of the horizontal gaze traces indicate the illumination of the fixation ("far") and target LEDs. The dashed lines are the subject-specific correct gaze and vergence values retrieved from the fixation trials. Leftward eye movements are made toward the memorized location of the target LED. Gaze of the left eye (red traces) is more accurate than gaze of the right eye (blue traces). Vergence increased during the memory-guided eye movement.
Table 1. 
 
Average Errors and Standard Deviation per Subject
Subject   Measure    Gaze Error Near, deg ± SD   Gaze Error Far, deg ± SD   Fixation Trial, deg   No. of Trials (Near, Far)   Eye Dominance
1 Left −0.86 ± 2.18 −0.94 ± 2.00 −16.24 7 6 Right
Right −1.22 ± 2.23 2.26 ± 2.00 −23.57
Version −1.04 ± 2.05 0.66 ± 1.94 −19.90
Vergence 0.20 ± 1.50 −3.30 ± 0.83 7.33
2 Left 2.31 ± 3.69 0.30 ± 2.76 −17.31 8 8 Right
Right 2.88 ± 2.35 3.31 ± 2.61 −24.79
Version 2.59 ± 2.96 1.80 ± 2.66 −21.06
Vergence −0.71 ± 1.91 −3.15 ± 0.61 7.48
3 Left −2.11 ± 1.49 −2.97 ± 1.23 −16.04 6 7 Left
Right −5.71 ± 2.50 −5.04 ± 1.95 −22.75
Version −3.91 ± 1.87 −4.00 ± 1.56 −19.40
Vergence 3.85 ± 1.63 2.29 ± 1.11 6.71
4 Left 2.22 ± 1.88 0.26 ± 1.38 −16.24 10 10 Right
Right 3.56 ± 1.51 2.43 ± 1.39 −22.97
Version 2.89 ± 1.68 1.34 ± 1.36 −19.61
Vergence −1.35 ± 0.58 −2.17 ± 0.53 6.73
5 Left −2.40 ± 1.54 −3.50 ± 2.67 −16.00 7 9 Right
Right −0.09 ± 1.35 −0.02 ± 2.10 −22.66
Version −1.24 ± 1.42 −1.76 ± 2.37 −19.33
Vergence −2.29 ± 0.52 −3.46 ± 0.75 6.67
6 Left 2.81 ± 2.30 2.91 ± 2.53 −15.95 9 8 Right
Right 4.37 ± 2.43 5.52 ± 2.83 −22.59
Version 3.59 ± 2.36 4.21 ± 2.67 −19.38
Vergence −1.57 ± 0.47 −2.61 ± 0.51 6.85
7 Left 1.23 ± 2.43 −4.21 ± 1.67 −16.52 9 9 Left
Right 1.69 ± 2.21 −2.03 ± 1.97 −24.46
Version 1.46 ± 1.46 −3.12 ± 1.81 −20.05
Vergence −0.45 ± 0.58 −2.17 ± 0.50 7.94
8 Left −1.31 ± 2.32 2.08 ± 1.88 −16.08 7 8 Right
Right −0.07 ± 2.43 2.40 ± 1.45 −22.55
Version −0.69 ± 2.36 2.24 ± 1.59 −19.32
Vergence −1.34 ± 0.59 −0.42 ± 1.14 6.47
Table 2. 
 
Horizontal Gaze and Vergence Errors
Condition                   Horizontal Gaze Error, deg (mean ± SD)   Vergence Error, deg (mean ± SD)
Near fixation, divergence                                            −0.67 ± 1.83 (P < 0.01)
 Left eye                   0.59 ± 2.97 (P = 0.13)
 Right eye                  1.23 ± 3.44 (P < 0.01)
 Version                    1.69 ± 3.10 (P < 0.05)
Far fixation, convergence                                            −2.12 ± 1.65 (P < 0.001)
 Left eye                   −0.67 ± 3.23 (P = 0.11)
 Right eye                  1.42 ± 3.43 (P < 0.01)
 Version                    1.22 ± 3.82 (P = 0.37)
Vergence is a binocular measure, so one vergence error is reported per fixation condition.