Eye Movements, Strabismus, Amblyopia and Neuro-ophthalmology  |   January 2006
Recording Three-Dimensional Eye Movements: Scleral Search Coils versus Video Oculography
Author Affiliations
  • Mark M. J. Houben
  • Janine Goumans
  • Johannes van der Steen
    From the Departments of Neuroscience and Otorhinolaryngology, Erasmus University Medical Center, Rotterdam, The Netherlands.
Investigative Ophthalmology & Visual Science January 2006, Vol.47, 179-187. doi:https://doi.org/10.1167/iovs.05-0234
Abstract

Purpose. This study compared the performance of a video-based infrared three-dimensional eye-tracking device (Chronos) with the scleral search coil method.

Methods. Three-dimensional eye movements were measured simultaneously with both systems during fixation, saccades, optokinetic stimulation, and vestibular stimulation.

Results. Comparison of fixation positions between −15° and +15° showed that the horizontal and vertical eye position signals of the two systems were highly correlated (R² = 0.99). Torsion values measured by the coils and the video system differed significantly (P < 0.001). Saccade main sequence parameters of the coil and video signals were in good agreement. Gains of torsion in response to optokinetic stimulation (cycloversion and cyclovergence) were not significantly different (P > 0.05). Gain values of the vestibulo-ocular reflex determined from coil and video signals showed good agreement for rotations; however, there was more variability in the video signals for translations, possibly because of relative motion between the head and the cameras.

Conclusions. The lower temporal resolution, possible instability of the head device of the video system, and small inherent instabilities of pupil-tracking algorithms make the coil system the best choice when eye movement responses must be measured with high precision or when high-frequency head motion is involved. For less demanding and for static tests, and for measurements longer than half an hour, the latest-generation infrared video system is a good alternative to scleral search coils. However, the quality of the torsion signal of the infrared video system is lower than that of scleral search coils and needs further technological improvement.

Because the oculomotor system is such a good model to relate sensory input to motor output, eye movement measurements are an important tool to study brain function. In humans, eye movement recordings are extensively used in behavioral and cognitive neuropsychological experiments. In addition, eye movement recordings are an important and sensitive tool to diagnose neurologic, ophthalmologic, and vestibular disorders. 
For a long time, the scleral search coil technique has been regarded as the gold standard for measuring eye movements. The scleral search coil was introduced by Robinson 1 and modified by Collewijn and colleagues 2 into its present form: a single copper coil embedded in a soft silicone annulus. This coil made it possible to measure horizontal and vertical eye movements with unprecedented precision 2 in many laboratories all over the world. With a modified version of the coil, it became possible to also measure torsional components. 3 4 Since then, three-dimensional (3D) coils have proved to be of indispensable value for studying the 3D kinematics of eye movements. 5 6 7 8 9 10 11 
Despite their precision and low noise, a drawback of scleral search coils is that, because of their invasive nature, they can be used only for a maximum duration of 30 to 60 minutes. Therefore, there has been a continued search for alternative, noninvasive recording techniques capable of accurate measurement of 3D eye movement. Although several video-based eye-tracking devices now exist that provide a good alternative to the scleral search coil method for measuring two-dimensional (2D) eye movements, 12 when it comes to recording 3D eye movements, the choice of commercially available video recording systems is limited. The main problem of 3D infrared (IR) eye measurement systems has been their low frame-sampling rate (typically approximately 50 Hz). In 2001, the first version of a novel IR video-based 3D eye-tracking device with a frame rate of 200 Hz (Chronos Eye Tracker; Chronos Vision, Berlin, Germany) was introduced. Since then, numerous improvements have been made to its hardware and software. 
IR eye-movement trackers in general and 3D IR eye trackers in particular are increasingly used in fundamental investigations, as well as for clinical diagnosis. Therefore, we performed a comparison of 3D eye movements measured by the scleral search coil method with eye movements measured simultaneously by the video-based Chronos system. To allow a good overall comparison of performance, we tested the two methods simultaneously under static (fixations and saccades) and dynamic (optokinetic and vestibular stimulation) conditions. 
Methods
Subjects
Four healthy subjects (three male, one female) without any history or clinical signs of oculomotor or vestibular abnormalities participated in the experiment. Ages were 25, 29, 46, and 53 years. Three subjects had brown eyes; the other subject had blue eyes. One subject had corrected vision (refractive error of −3D and −5D for left and right eye, respectively) but did not wear his glasses during our experiments. Three subjects had worn scleral search coils before. All subjects participated on a voluntary basis and gave their informed consent. Experimental procedures were approved by the Medical Ethical Committee of the Erasmus University Medical Center and adhered to the Declaration of Helsinki for research involving human subjects. 
Eye Movement Recordings
Images of both eyes were recorded with an IR video-based, eye-tracking system (Chronos system; Chronos Vision), capable of 3D eye-position measurement. The Chronos system is available in two versions. One is a stand-alone system that plugs into the parallel board of a personal computer (PC). The other version, which we used, is a PC-based system with dedicated hardware and software. In our experiment, subjects wore the head device that was delivered with the system. It has laterally mounted digital IR cameras (Fig. 1) , ensuring a free field of view and allowing measurement of eye rotations of at least ±30° horizontally and ±25° vertically. The head was immobilized by using an individually molded silastic dental-impression bite bar. The image sequences of both eyes were sampled at a frequency of 200 Hz and were stored to a hard disc for offline extraction of eye positions. For recording and offline calculation we used software supplied by Chronos Vision in 2004 (etd2.exe, version 3.4.0.0 and iris.exe, version 2.1.6.1, respectively). 
The Chronos offline algorithm for calculation of horizontal and vertical eye position is based on a circle approximation technique (Hough transform), which fits a circle to the pupil perimeter. 13 Eye responses to calibration data were used during offline evaluation by the Chronos software to correct the geometric projection error in tertiary eye positions and to transform pixel coordinates into Euler coordinates. Torsional eye positions were calculated by correlating an iris signature (an iris luminance profile derived from circular sampling around the iris) of the current frame with a predefined reference signature. 13 The resulting raw 3D eye positions are expressed in Fick angles. The inset of Figure 1 shows an example image during offline analysis of an eye recorded by the Chronos system, with the pupil fit and iris signature indicated. Although the Chronos software provides scleral marker tracking as an alternative to the iris segment correlation, we did not use it, because the software did not allow us to obtain a wide-angle video image of the eye, and it would compromise the noninvasiveness of the video system. 
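To make the torsion principle concrete, the following sketch illustrates the iris-signature idea in simplified form. It is not the Chronos implementation; the function names, the 1° sampling step, and the ±15° search range are illustrative assumptions.

```python
# Simplified illustration of torsion estimation from an iris signature
# (not the Chronos algorithm): sample the iris luminance on a circle around
# the pupil center, then find the angular shift that best matches a
# previously stored reference signature.
import numpy as np

def iris_signature(image, cx, cy, radius, n_samples=360):
    """Luminance profile sampled on a circle of given radius around (cx, cy)."""
    angles = np.deg2rad(np.arange(n_samples))               # 1-degree steps
    xs = np.round(cx + radius * np.cos(angles)).astype(int)
    ys = np.round(cy + radius * np.sin(angles)).astype(int)
    return image[ys, xs].astype(float)

def torsion_from_signature(signature, reference, max_shift_deg=15):
    """Angular shift (deg) of the current signature relative to the reference."""
    sig = signature - signature.mean()
    ref = reference - reference.mean()
    shifts = np.arange(-max_shift_deg, max_shift_deg + 1)
    corrs = [np.dot(np.roll(sig, s), ref) for s in shifts]  # circular cross-correlation
    return float(shifts[int(np.argmax(corrs))])
```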
Because the scleral search coils do not cover the pupil and the iris, we were able to measure 3D eye positions simultaneously with the video system and scleral search coils (see Fig. 1 ). We used a standard 25-kHz two-field coil system (Model EMP3020; Skalar Medical, Delft, The Netherlands) based on the amplitude detection method by Robinson. 1 Standard dual coils embedded in a silicone annulus (Combi Coils; Skalar Medical) were inserted in each eye. Before insertion, the eyes were anesthetized with a few drops of oxybuprocain (0.4%) in HCl (pH 4.0). Both coils were precalibrated in vitro by mounting them on a gimbal system placed in the center of the magnetic fields. The coil signals were passed through an analog low-pass filter (cutoff, 500 Hz) and were sampled online at a frequency of 200 Hz and with 12-bit precision by the analog to digital converter (ADC) board provided in the Chronos system. Thus, we obtained synchronous IR video and search coil signals. In addition, we also sampled the coil data at 1000 Hz on a CED system (Cambridge Electronic Design, Cambridge, UK). All data were stored to a hard disc for offline analysis. 
Stimuli
Visual stimuli were back-projected onto a translucent screen in front of each subject at a distance of 189 cm. The center of the visual target was horizontally and vertically aligned with the center point between the subject’s eyes. Subjects were tested under the following static and dynamic conditions. 
Fixations.
Subjects viewed a succession of 25 white dots, one at a time, against a black background. The horizontal and vertical position of the dots were at eccentricities of −15°, −7.5°, 0°, 7.5°, and 15°, together forming a grid of 5 × 5 targets. Each target was visible for 4 seconds. Subjects were asked to fixate on the targets as accurately as possible. 
Voluntary Saccades.
We tested voluntary saccades in full detail in one subject. While looking at a grid pattern with horizontal and vertical grid lines at 5° intervals, the subject was instructed to make voluntary saccades at random toward the intersections of the horizontal and vertical grid lines. In this way, saccades with amplitudes over a range of 5° to 30° in any direction were recorded. 
Optokinetic Stimulation.
Two identical patterns of 1000 random dots were presented dichoptically on the projection screen. Subjects looked at the center of the patterns while the patterns oscillated sinusoidally in the same direction (cycloversion) or in opposite directions (cyclovergence) about the line of sight. The frequency was 0.24 Hz, and the peak-to-peak amplitude was 4° for the cycloversion condition and 2° for the cyclovergence condition. The duration of each stimulus was 70 seconds. 
Vestibular Stimulation.
To deliver vestibular stimuli, a motion platform (FCS, Schiphol, The Netherlands) capable of rotatory and linear motion in any direction was used. Subjects were seated in a rigid chair mounted on the platform and securely fastened with heavy-duty seat belts as used in racing cars. Further fixation of subjects during the whole-body motion was ensured by a bite bar connected to a solid cubic frame rigidly mounted on the platform and by a vacuum cushion folded around the head and around a ring that was fixed to the chair. The center of rotation for all axes of rotation of the platform was set under software control to be midway between the ears. We performed sinusoidal tests on all subjects in the light as well as in the dark. In the light condition, subjects fixated on a target that was projected onto the screen in an otherwise darkened room. The platform was sinusoidally rotated or translated about or along one of three orthogonal axes in space: the gravity-aligned axis (“yaw” rotation and “heave” translation), the occipitonasal axis (“roll” rotation and “surge” translation), and the interaural axis (“pitch” rotation and “sway” translation). Naming of rotation and translation axes is in accordance with definitions used in aviation and simulation (see also the description by Houben et al. 14 ). Stimulation frequency was 0.5 Hz, and the duration was 14 seconds (including 2 seconds of fade-in and fade-out time). Stimulus peak-to-peak amplitude was 8° for rotations and 0.2 m for translations, resulting in peak accelerations of 40°/s² and 1.0 m/s², respectively. 
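As a quick consistency check (assuming purely sinusoidal motion), the quoted peak accelerations follow from the stimulus parameters, with the amplitude equal to half the peak-to-peak value:

$$a_{\max} = \frac{A_{pp}}{2}\,(2\pi f)^2,\qquad \frac{8^\circ}{2}\,(2\pi\cdot 0.5\,\mathrm{Hz})^2 \approx 39.5\ ^{\circ}/\mathrm{s}^2,\qquad \frac{0.2\,\mathrm{m}}{2}\,(2\pi\cdot 0.5\,\mathrm{Hz})^2 \approx 0.99\ \mathrm{m/s^2}.$$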
We determined coil artifacts induced by motion of the platform (and cubic frame containing the field coils) with one search coil attached to the bite bar and another one attached to the forehead of the subject. Peak-to-peak artifacts due to the motion as measured by the coils were <0.1° in any direction. 
Data Analysis
Eye-movement signals obtained with the Chronos IR video system were analyzed with the supplied analysis software. The software converts the raw pixel data per eye into Fick coordinates by using a five-target calibration. For this, we used five targets from the fixation condition (with the 5 × 5 visual targets): the center target and the four targets 7.5° left, right, above, and below the center. 
For the coil system, eye responses to all 25 fixation targets were used to iteratively calculate the sensitivities and the misalignment of the coils, minimizing the deviations of horizontal and vertical eye positions from the target positions. By means of a 3D matrix conversion to correct for misalignment of the coils, coil voltages were transformed into Fick angles. 15 16 The simultaneous use of the Chronos system with the search coils resulted in small inhomogeneities of the magnetic field caused by the metal in the Chronos head device. This induced small aberrations, especially in the vertical eye positions. We compensated for these small inhomogeneities and nonlinearities in the magnetic fields with two three-layer back-propagation neural networks that fitted the fixation data to the target locations. 17 
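The correction step can be pictured as learning a smooth mapping from measured fixation positions to the known target positions. The sketch below uses a quadratic least-squares fit as a simplified stand-in for the three-layer back-propagation networks used in the study; the function names and the quadratic model are illustrative assumptions, not the original analysis code.

```python
# Simplified stand-in for the field-inhomogeneity correction: fit a smooth
# (here quadratic) mapping from measured fixation positions to the known
# target positions, then apply it to subsequent data.
import numpy as np

def _design(hv):
    h, v = hv[:, 0], hv[:, 1]
    return np.column_stack([np.ones_like(h), h, v, h * v, h**2, v**2])

def fit_correction(measured_hv, target_hv):
    """measured_hv, target_hv: (N, 2) arrays of horizontal/vertical positions (deg)."""
    coeffs, *_ = np.linalg.lstsq(_design(measured_hv), target_hv, rcond=None)
    return coeffs                                    # (6, 2) coefficient matrix

def apply_correction(coeffs, measured_hv):
    return _design(measured_hv) @ coeffs             # corrected (N, 2) positions
```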
Velocity signals were calculated as a 5-point central difference of eye position samples (low-pass characteristics of 50 Hz). 18 Fixations (defined as intersaccadic intervals) were marked automatically in the coil signal by detecting saccades with a velocity threshold of 100°/s and requiring a minimum fixation length of 2 seconds. The same fixation intervals were taken for the video signal. Mean horizontal, vertical, and torsional positions during each fixation were calculated independently for the coil and video signals. 
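A minimal sketch of this step, assuming position traces in degrees sampled at 200 Hz; the function names and run-length bookkeeping are illustrative, not the original analysis software.

```python
# Velocity by 5-point central difference and fixation-interval detection,
# following the thresholds given in the text (100 deg/s, minimum 2 s).
import numpy as np

FS = 200.0  # sampling frequency (Hz)

def velocity_5pt(pos):
    """5-point central difference; deg/s for positions in deg."""
    v = np.zeros_like(pos, dtype=float)
    v[2:-2] = (pos[:-4] - 8 * pos[1:-3] + 8 * pos[3:-1] - pos[4:]) * (FS / 12.0)
    return v

def fixation_intervals(vel, threshold=100.0, min_len_s=2.0):
    """Intersaccadic intervals: contiguous runs with |velocity| below threshold."""
    slow = np.abs(vel) < threshold
    intervals, start = [], None
    for i, s in enumerate(slow):
        if s and start is None:
            start = i
        elif not s and start is not None:
            if (i - start) / FS >= min_len_s:
                intervals.append((start, i))
            start = None
    if start is not None and (len(slow) - start) / FS >= min_len_s:
        intervals.append((start, len(slow)))
    return intervals
```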
For analysis of optokinetic stimulation, torsion signals of both eyes were used to yield cycloversion (average of left and right eye torsion, clockwise motion defined as positive) and cyclovergence signals (difference between left and right eye torsion, convergence defined as positive). From the cycloversion coil signal, saccades were removed by using a velocity threshold of 20°/s. In the cyclovergence signal, no saccades were present because by taking the difference between the left eye and right eye torsion, saccades were automatically eliminated. 
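A short sketch of this decomposition, using the sign conventions stated above (the function names are illustrative):

```python
# Cycloversion and cyclovergence from left- and right-eye torsion (deg),
# with clockwise and convergence defined as positive, as in the text.
import numpy as np

def cycloversion(torsion_left, torsion_right):
    """Conjugate torsion: average of left- and right-eye torsion."""
    return 0.5 * (np.asarray(torsion_left) + np.asarray(torsion_right))

def cyclovergence(torsion_left, torsion_right):
    """Disconjugate torsion: left-eye minus right-eye torsion."""
    return np.asarray(torsion_left) - np.asarray(torsion_right)
```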
For analysis of vestibular stimulation, the stimulus amplitude during translations (in meters) was converted into the geometrically required angle (in degrees), taking into account the distance of 189 cm from the subject’s eyes to the projection screen. This allowed easier comparison of the results for angular and linear stimulation, as well as derivation of the gain of eye responses during translations in terms of the eye rotation required at the viewing distance used. Saccades were removed from the raw data by using a velocity threshold of 12°/s, 18 and the smooth components, as well as the stimulus signal, were converted to the frequency domain by using a Fast Fourier Transform. Gain and phase of the left and right eye responses were calculated from the real and imaginary components. 19 
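The following sketch shows how such a frequency-domain gain and phase can be computed, assuming de-saccaded eye and stimulus traces sampled at 200 Hz; the names and the nearest-bin lookup are illustrative assumptions.

```python
# Gain and phase at the stimulus frequency from FFTs of eye and stimulus
# traces, plus the translation-to-required-angle conversion described above.
import numpy as np

FS = 200.0  # sampling frequency (Hz)

def gain_phase(eye_deg, stim_deg, f_stim):
    """Gain (unitless) and phase (deg) of the eye response at f_stim."""
    n = len(eye_deg)
    freqs = np.fft.rfftfreq(n, d=1.0 / FS)
    k = int(np.argmin(np.abs(freqs - f_stim)))      # FFT bin closest to the stimulus
    e = np.fft.rfft(eye_deg)[k]
    s = np.fft.rfft(stim_deg)[k]
    return np.abs(e) / np.abs(s), np.rad2deg(np.angle(e / s))

def required_angle_deg(displacement_m, viewing_distance_m=1.89):
    """Eye rotation (deg) needed to keep fixating a target at the given distance."""
    return np.rad2deg(np.arctan2(displacement_m, viewing_distance_m))
```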
Results
Fixations
Eye positions for right and left eye as recorded with the coil and IR video system simultaneously during fixation of a target grid are shown in Figure 2 . The target grid of 5 × 5 targets was scanned row by row from bottom left to top right target. This pattern can be observed in the time traces. The top two panels show the coil data, whereas the lower two panels give the video data. The coil and video signals are comparable, but, at larger eccentricities, occasionally erroneous eye positions can be observed in the video data, especially for torsional eye positions. 
In Figure 3, vertical eye positions during fixation are plotted as a function of horizontal eye positions, thus revealing the target grid. The extracted fixation positions (indicated by crosses) correspond to the presented grid of fixation targets (indicated by the intersections of the dashed lines). The mean square differences between the true and the actual fixation positions, averaged over the left and right eyes of all subjects, are 0.56° (horizontal) and 0.62° (vertical eye positions) for the video system and 0.096° (horizontal) and 0.080° (vertical eye positions) for the coil system. Although differences between positions recorded by the coil and video systems can be observed (compare the upper to the lower panels of Fig. 3), within 10° of the center position the fixation positions determined from the coil and video signals are very similar. The mean square differences between coil and video for each fixation position, averaged across subjects, are 0.55° for horizontal and 0.61° for vertical eye positions. 
In contrast, the correspondence between torsional eye positions measured by the coil and video systems during fixation was considerably poorer than for horizontal and vertical positions. Mean torsion at the fixation positions (Fig. 4) shows larger discrepancies between coil and video data. Especially for tertiary positions, occasionally large discrepancies (up to 5°) are observed. Note that the seemingly dramatic differences in torsional positions shown in Figure 4 are exaggerated by a factor of 10 and are actually very small. The mean square difference between coil and video for each fixation position, averaged across subjects, is 1.9° for torsional eye positions. 
A quantitative comparison between the eye positions measured by the two systems during fixations is given in Figure 5. This figure shows that, although there is a good linear relation throughout the measured range between the coil and video signals for horizontal and vertical fixation positions (slopes near one), the slope for torsion is much lower. Individual fit parameters are listed in Table 1. The mean slope (averaged over the right and left eyes of all subjects) does not differ significantly from 1.0 at the 1% level for horizontal (Student’s two-tailed t-test, P = 0.14, t = 1.66, df = 7) and vertical positions (P = 0.58, t = 0.59, df = 7) but does for torsion (P < 0.001, t = 7.58, df = 7). 
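The slope test described above amounts to a one-sample, two-tailed t-test of the eight per-eye slopes (four subjects × two eyes, df = 7) against 1.0. A minimal sketch, with the torsional slopes from Table 1 used as example input:

```python
# One-sample, two-tailed t-test of regression slopes against the ideal value 1.0.
import numpy as np
from scipy import stats

def slope_vs_unity(slopes):
    """Return (t, P) for the hypothesis that the mean slope equals 1.0."""
    return stats.ttest_1samp(np.asarray(slopes, dtype=float), popmean=1.0)

# Torsional slopes (left and right eyes) from Table 1:
t, p = slope_vs_unity([0.55, 0.71, 0.11, 0.58, 0.12, 0.31, 0.51, 0.21])
# Gives |t| of approximately 7.6 and P < 0.001, in line with the torsion
# result reported above (t = 7.58).
```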
Voluntary Saccades
Figure 6 shows a detailed profile of a 30° saccade with two corrective saccades, measured simultaneously with coils and the video system. Although the overall shape of the saccade measured with the two systems is very similar, small differences exist in the details of the saccades. These differences have an impact on a quantitative saccade analysis. For instance, the alignment of the two eyes directly after the first saccade is different. This results in a 5°/s higher peak velocity of the left eye for the video signal compared with the coil signal. Note that this difference cannot be due to the load of the coil, because the coil and video signals were measured simultaneously. Neither can it be due to differences in temporal resolution, because both signals were collected at the same sampling frequency. To compare the coil and the video system over a range of saccades, we plotted the main sequence of horizontal and vertical saccades (Fig. 7). Absolute values of leftward and rightward saccades, as well as upward and downward vertical saccades, were pooled. Figure 7 shows that the results obtained with coil and video signals are very similar (t-test, P = 0.94 or higher; see Table 2). Still, small differences can be observed. First, for large horizontal saccades there is a tendency toward higher peak velocities for video than for coil signals. Second, saccades measured with the video system have a slightly longer duration at large amplitudes. 
One possible cause for this variability is a difference in signal noise. For this reason, we determined the signal noise levels of the two systems. The insets in Figure 6 show a high magnification of the position and velocity signals of the right eye. In this subject, the noise level measured with the coil on the eye was <0.02°. When we mounted the scleral coil on a gimbal system, noise levels were a factor of 10 smaller (0.002°). Actual eye positions during fixation measured with the Chronos system resulted in noise levels of 0.2°. This value is approximately 30 times higher than the reported noise level of 0.006° measured on a gimbal system. 13 
In summary, in a real eye-movement recording on a human subject, noise levels of the Chronos video signals are a factor of 10 higher than noise levels of the coil signals. The differences in noise levels are even more clearly visible in the velocity signals. On average, we found a 1°/s basic noise level during fixation for the video signals compared with 0.1°/s for the coil signals. When averaged across subjects, the overall differences in noise were less pronounced. The SD around the mean during each fixation interval, averaged over all targets, the left eye, and the right eye, was, for the coils, 0.25° (horizontal), 0.10° (vertical), and 0.22° (torsional eye positions); for the video signals, these values were 0.32°, 0.26°, and 0.41°, respectively. Note that in this situation, microsaccades, drift, and blinks may all contribute to the variability. 
Optokinetic Stimulation
We also compared torsion signals in response to visual stimulation about the visual axis. Torsion signals of both eyes, as measured by the coil and the video systems, were used to yield cycloversion and cyclovergence signals. Figure 8 shows the cycloversion and cyclovergence as a function of time for one subject. One major drawback of the video signals compared with the coil signals was the noise level. To reduce the noise in the cycloversion and cyclovergence signals of the video system, the signals were low-pass filtered (with a fifth-order low-pass digital Butterworth filter) at a cutoff frequency of 20 Hz. The figure shows that, although the video signals indeed contained much more noise, their envelope follows the shape of the coil data. Gain values (the ratio of spectral magnitude between the torsional component of eye movement and the stimulus movement at the stimulus frequency of 0.24 Hz) are shown in Figure 9. Individual gains, as well as the mean and SD, are depicted for the cycloversion (left) and cyclovergence (right) conditions. Although in both conditions the mean gain is lower for the video system than for the coil system, the difference is not significant at the 0.05 level for either cycloversion (Student’s paired t-test, P = 0.19, t = 1.67, df = 3) or cyclovergence (P = 0.11, t = 2.24, df = 3). Notice that, although the coil and video systems lead to comparable results with regard to the gain of torsional eye responses, the high noise levels prohibit the use of saccade-removal software routines on the video signals. 
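A minimal sketch of this filtering step, assuming a 200-Hz torsion trace; the zero-phase (forward-backward) application is an assumption, since the text specifies only the filter order and cutoff.

```python
# Fifth-order Butterworth low-pass filter with a 20-Hz cutoff, applied
# forward-backward (filtfilt) so the filtered trace has no phase lag.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 200.0  # sampling frequency (Hz)

def lowpass_torsion(signal, cutoff_hz=20.0, order=5):
    b, a = butter(order, cutoff_hz / (FS / 2.0), btype="low")
    return filtfilt(b, a, np.asarray(signal, dtype=float))
```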
Vestibular Stimulation
Figure 10 shows example traces of responses to sinusoidal rotatory vestibular stimulation (the vestibulo-ocular reflex). The left panel shows that horizontal eye movement responses to yaw stimulation in the light are fully compensatory (gain of approximately 1). Responses to roll stimulation in the light (right panel) show compensatory torsion, with a gain of approximately 0.4. For the horizontal and vertical eye movement components, the simultaneously recorded coil and video data are virtually identical. Torsion signals are also comparable, although the noise level is slightly higher in the video signal than in the coil signal. 
Figure 11 shows the gain, averaged over the left and right eyes of all subjects, for horizontal, vertical, and torsional eye movements obtained from the coil and the video data. To be able to calculate a gain for translational stimuli, we used the eye position in degrees that is geometrically required for full compensation (see the Data Analysis section). Differences between gains determined from the coil and the video signals (Student’s paired t-tests, df = 7) that are significant at the 0.05 level are indicated in Figure 11 by asterisks; differences significant at the 0.01 level are indicated by double asterisks. The gains for rotations in the light calculated from the stimulus and eye responses are in a comparable range for the coil and video data. Significant differences exist for translations in light and dark. 
During whole-body translation, differences may occur as a result of possible slippage between the headband and the skin with the IR video system. This problem is inherent to all video-based eye trackers mounted on the head and is caused by relative motion between the camera and the eye, which results in incorrect measurement of eye movements. This may explain the larger discrepancies between gains calculated from coil and video signals during translations compared with rotations. 
Discussion
This study was driven by the question of how well IR video eye movement recording devices perform in comparison with the scleral search coil method. This question has become a timely issue, because IR video systems have become increasingly popular in research laboratories and in the clinical setting. 
Previous studies reported a good performance of video oculography compared with scleral search coils. Van der Geest and Frens 12 compared the performance of a 2D video-based eye tracker (Eyelink I; SR Research Ltd., Mississauga, Ontario, Canada) with 2D scleral search coils. They found a very good correspondence between the video and the coil output, with a high correlation of fixation positions (average discrepancy, <1° over a tested range of 40 by 40° of visual angle) and linear fits near one (range, 0.994 to 1.096) for saccadic properties. However, the video system they used, with a sampling rate of 250 Hz, was not capable of measuring torsion.

Clarke et al. 13 tested the Chronos system we used. They did this in a highly controlled setup comprising an artificial eye in front of an imaging camera. The artificial eye, with clear iris landmarks, was mounted on a three-axis gimbal and aligned orthogonally to the imaging camera. The measurement resolution for the horizontal, vertical, and torsional positions was 0.006°, 0.005°, and 0.016°, respectively. In this ideal setup, the measurement error for positions in the range of −20° to 20° was 0.1° for horizontal and vertical positions and 0.4° for torsional positions. They also compared the Chronos system with scleral search coils, much like our verification test with fixations. Subjects successively fixated on targets arranged in a 5° interval grid with a horizontal range of −20° to 20° and a vertical range of −15° to 15°. Eye movements were recorded simultaneously by scleral search coils and by the Chronos system. In the coils that they used, black markers were embedded. Offline, eye positions were calculated by tracking these markers with marker-tracking software. The eye positions were recorded at a sampling rate of 50 Hz instead of 200 Hz. The measured system noise was on the order of 0.1° for both coil and video signals. This contrasts with the lower noise levels we found for coils compared with video signals.

One problem when comparing signal noise in the two systems is that the sources of noise are different. Coil signals have an extremely low physical signal noise, which depends on the magnetic field strength and the number of coil windings. The same applies to the extraction of a position signal by the Chronos video system from a stationary artificial eye. The differences arise when one wants to measure the movements of a real eye in human subjects. Variations in coil signals are then mainly determined by the ability of the subjects to hold fixation. This means that the signal noise of a coil attached to a real eye, calculated over a short time span, is low (<0.02°). In contrast, because the extraction of position signals by the video system is based on tracking of the pupil, whose diameter varies continuously, video signals show a larger variability, which is inherent to measuring a biological signal. A comparison of noise levels over longer periods of time gives another picture, because then the noise is governed more by fixation stability. When we calculated the noise by taking the SD around the mean during each fixation interval and averaging over all targets and all eyes, average noise levels for coil signals during fixation were 0.25°, 0.10°, and 0.22° for horizontal, vertical, and torsional eye positions. For the Chronos signals measured at 200 Hz, these values were 0.32°, 0.26°, and 0.41°. 
These numbers are higher than the value of 0.1° reported by Clarke et al., 13 for both coil and video signals, which may be partially due to the lower sampling rate of the video signals in their experiment. 
The simultaneous measurement of saccades by the Chronos video system and scleral search coils shows that saccade parameters such as peak velocity and duration are not completely identical. The video system gives slightly higher peak velocities at larger amplitudes, and saccade durations measured with the video system are slightly longer than those measured with the search coil. The same amplitude-velocity and amplitude-duration relationships were found by van der Geest and Frens. 12 They explained this difference between coil and video signals by assuming that the load of the coil would influence saccade dynamics. In their study, however, coil and video data were not measured simultaneously from the same eye. We did measure coil and video signals simultaneously and still found differences. This means that coil load cannot directly explain the observed differences in peak velocity and duration. In our view, differences in noise level are a more likely explanation. A general conclusion that can be drawn from our data on saccades is that, when it comes to fine details, coil signals provide better signal stability. Therefore, coil signals are better suited for the analysis of fine details of eye movements, e.g., vergence. 
Some critical notes about our comparison study should be made. First, although the scleral search coil method is generally accepted as the gold standard for accurate eye-movement measurements in oculomotor research, insertion of a coil may slightly influence the dynamics of the eye. 20 21 However, because we recorded simultaneously with the coil and video systems, this does not affect our comparison of the two systems. Second, possible slippage of the coils, especially during eye blinking, and the influence of the orientation of the exiting wire of the scleral search coil annulus on torsion measurement after saccades 22 23 may reduce the accuracy of eye movement measurement and consequently affect our comparison between the coil and video systems. Finally, we are aware that simultaneous recording by scleral search coil and video system may influence the video measurements by distorting the image of the eye and may increase the risk that the image is not properly focused. This may make it harder for the Chronos software to track the pupil’s position, particularly the torsional component. On the other hand, the inner ring of the coil was at the limbus and thus outside the part of the image used by the IR video system to track the eye. We tested the influence of the coil on the video system performance in one subject by recording both eyes with the video system during fixation, with a coil inserted in only one eye. Differences in mean fixation positions between the two eyes were not significant for any eye movement direction (Student’s paired t-test, P = 1). 
One of the results of our study is that measurement of torsion by the iris segment correlation technique, as applied by the Chronos system, may have to be improved to be reliable enough for research on eye movements. Bos and De Graaf 24 have shown that when the center of the pupil is not well defined, the use of a single segment in the iris may result in relatively large errors in calculated torsion. These errors were shown to be sinusoidally dependent on the location of the segment on an imaginary circle around the rotation center. That is, errors in diametrically positioned segments are equal in magnitude but opposite in sign. Therefore, a fairly simple strategy to overcome these errors is to not restrain the analysis to one single segment of the iris but to average two or more diametrically positioned iris segments. An automatic procedure for using this technique, which selects and recovers a set of 36 distinct iris segments, is described by Groen et al. 25 This technique will also overcome incorrect torsion calculation resulting from occlusion of the pupil by eyelids. The manufacturer is aware of this problem 13 and intends to include this algorithm in the Chronos software in the near future. However, it may be difficult to find multiple paired areas on the iris that can be used as iris segments, due to continuous occlusion by eyelids, eyelashes, and corneal reflection. 
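In terms of the sinusoidal error model described above, the cancellation can be written explicitly: if the torsion error contributed by a segment at iris angle θ is ε(θ) = ε₀ sin(θ + φ), the diametrically opposite segment carries an error of equal magnitude and opposite sign, so their average vanishes:

$$\frac{\varepsilon(\theta) + \varepsilon(\theta + \pi)}{2} = \frac{\varepsilon_0 \sin(\theta + \varphi) + \varepsilon_0 \sin(\theta + \pi + \varphi)}{2} = 0.$$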
An alternative method to obtain torsional eye positions is the use of scleral markers, which is available in the Chronos software. 13 However, this technique needs a clear marker to be placed on the sclera. In the present study, a marker was embedded in the annulus of the scleral search coil, but, because the marker was at the edge of the recorded frame, we could not test this method. The software version we had at the time of our experiments (etd2.exe, v.3.4.0.0) did not store the entire image of the eye but only a part of it around the online-calculated pupil center. Newer software, v.3.7.0.2, available from the manufacturer at the time of writing, has the option to set the proportion of the image that is recorded. A general disadvantage of the use of markers is that it sacrifices the main advantage of the video system over the coil system: its noninvasiveness. 
Based on our experience in recording with both systems, we give a brief overview of advantages and disadvantages of video oculography and scleral search coil recordings. 
The main disadvantage of video recordings compared with coils is their limited sampling frequency (currently 200 Hz for the Chronos system; for coils, effectively unlimited, i.e., limited only by the sampling rate of the ADC board). This is especially important when investigating fast eye movements such as saccades, or responses to impulsive vestibular stimulation, where accurate latency and gain measurements over a short time span are required. Another disadvantage of IR video systems is the difficulty of tracking eye positions in the dark because of large pupils and increased occlusion of the pupils by the eyelids. If eyelashes or droopy eyelids partly occlude the pupil, proper detection of eye position may deteriorate. Dark eyelashes may also be problematic, because they may be confused with the pupil. With IR video systems, subjects generally cannot wear glasses because of obstruction by the head device. On the other hand, soft contact lenses do not seem to influence performance. In contrast, with scleral search coils only hard lenses can be used. 
A disadvantage of coil measurements is the possibly lower acceptance of coils, compared with video recordings, by volunteers and patients because of the invasive nature of coils. The fact that measuring time with scleral search coils is limited to approximately 30 to 60 minutes also constrains the experimental design. Other disadvantages of coils are, in our opinion, their limited lifetime and their vulnerability. The advantages of video systems over the coil system are their noninvasive nature and ease of use in healthy subjects and patients. Also, because no large setup, such as the magnetic field frame of the coil system, is needed, the video system, with its camera-equipped head device and PC, allows bedside testing. 
The accuracy of the offline analysis techniques is similar in both systems. IR video systems generally come with software that calibrates the recordings and automatically transforms them into eye position data (usually Fick angles). Algorithms for 3D corrections of coil data have been extensively described in the literature, 4 15 16 26 but writing the appropriate software requires expertise. An additional advantage of the Chronos system is that, unlike other online precalibrated video-based systems, it stores the video images on disc and extracts eye positions from these recordings offline. Although this procedure consumes disc space and time, it has the advantage that the data can be reanalyzed. 
In conclusion, the lower time resolution and possible fixation problems of the Chronos system make the coil system the best choice when measuring 3D eye movements, especially for high-sensitivity, high-frequency, and/or short-duration dynamic responses. For less time- and velocity-demanding tests, and for measurements longer than half an hour, the noninvasive IR Chronos system is a good alternative to scleral search coils. Further technical improvements to the measurement of torsion in the Chronos video system are under way. 
 
Figure 1.
 
Photograph of the setup with the eye-movement recording devices. A subject who is held in place with a bite bar wears the Chronos head device and is surrounded by a cubic frame that contains the field coils. The inset shows an image, recorded and used by the Chronos recording and analysis software, of a subject’s eye with scleral search coil inserted (partially visible around the limbus). Also shown is the fit through the pupil (white circle) and its center position (white cross), as well as an iris signature with two segments (curved white lines to the right of the pupil).
Figure 2.
 
Example time traces of eye positions of subject JS during fixation of the 5 × 5 target matrix. LE, left eye; RE, right eye; Coi, positions as measured by the coil system; Video, positions as measured by the video system; H, horizontal (up, rightward); V, vertical; T, torsional (up, clockwise) eye positions.
Figure 3.
 
Horizontal versus vertical eye positions of subject JS during fixation. Crosses indicate extracted fixation positions (i.e., mean eye positions during each fixation epoch; see Data Analysis section). Left panels: left eye (LE) data; right panels: right eye (RE) data. Top panels: coil data; bottom panels: video data.
Figure 4.
 
Mean torsion of the left eye (left panels) and right eye (right panels) of subject JS during fixation as a function of the horizontal and vertical position of the eye as derived from the coil (upper panels) and video (lower panels) signals. The rotation angle of the depicted tilted bar compared with horizontal equals the torsion multiplied by an exaggeration factor of 10 for better visibility (clockwise torsion as seen from the subject is clockwise deviation from horizontal).
Figure 5.
 
Upper three panels: correlation between coil and video data regarding horizontal (top left), vertical (top middle), and torsional (top right panel) eye positions of subject JS during fixation. Measured fixation positions are shown by dots (gray, right eye; black, left eye). Also shown are the unity line (dashed gray line) and straight lines fitted through the data points. Lower three panels: corresponding discrepancies between video and coil data as function of eccentricity (that is, mean position of video and coil data).
Table 1.
 
Fit Parameters Per Subject
Subject | Horizontal* Slope (L / R) | Horizontal* R² (L / R) | Vertical* Slope (L / R) | Vertical* R² (L / R) | Torsional* Slope (L / R) | Torsional* R² (L / R)
JS | 1.0 / 0.99 | 1.0 / 1.0 | 1.0 / 1.0 | 0.99 / 1.0 | 0.55 / 0.71 | 0.38 / 0.53
JR | 0.96 / 0.99 | 0.99 / 0.99 | 1.0 / 0.99 | 0.99 / 0.99 | 0.11 / 0.58 | 0.0025 / 0.24
BW | 0.99 / 0.96 | 0.99 / 1.0 | 0.96 / 1.0 | 0.99 / 0.99 | 0.12 / 0.31 | 0.026 / 0.49
JB | 0.97 / 1.0 | 1.0 / 0.99 | 0.98 / 1.0 | 1.0 / 0.97 | 0.51 / 0.21 | 0.42 / 0.032
Mean† | 0.98 | 1.0 | 0.99 | 0.99 | 0.39 | 0.27
Figure 6.
 
Example saccade made by subject JS. Shown are right (solid line) and left (dashed line) eye position (top panels) and velocity (bottom panels) as a function of time, measured by the coil (left panels) and video (right panels) system. Shaded insets show a magnification of the signals measured while the eyes fixate.
Figure 7.
 
Main sequence plots of horizontal and vertical saccades made by subject JS. The top panels show amplitude peak-velocity relations of horizontal (left panels) and vertical (right panels) saccades. The lower panels show amplitude versus duration. Video data are shown by solid circles, coil data by open circles. The data in the top panels were fitted with an exponential curve, the lower two panels with a linear equation. Fits through video and coil data points are shown by solid and dashed lines, respectively.
Table 2.
 
Parameters of Lines Fitted through the Data in Figure 7, as well as R² of Fit
Velocity* | a† | b† | R² | P value (t-test)
Horizontal, coil | 274 | 9.23 | 0.76 | 0.99
Horizontal, video | 284 | 9.69 | 0.75 |
Vertical, coil | 285 | 10.0 | 0.76 | 0.99
Vertical, video | 287 | 10.0 | 0.76 |
Duration‡ | c§ | d∥ | R² | P value (t-test)
Horizontal, coil | 75 | 3.7 | 0.59 | 0.99
Horizontal, video | 71 | 3.8 | 0.63 |
Vertical, coil | 75 | 3.7 | 0.59 | 0.94
Vertical, video | 70 | 3.2 | 0.57 |
Figure 8.
 
Stimulus, cycloversion, and cyclovergence as function of time for subject JR. Video data are shown in gray and coil data superimposed in black.
Figure 9.
 
Individual and mean gains with SD for cycloversion (left panel) and cyclovergence (right panel) as calculated from coil (black) and video (gray) data.
Figure 10.
 
Example responses of the right eye of subject JS to sinusoidal yaw (left traces) and roll (right traces) rotations in the light. Horizontal (H), vertical (V) and torsional (T) eye positions as measured by the coils and the IR video system are shown in separate traces. The stimulus trace (-S) is the inverted rotation of the platform (and thus the head of the subject) about the yaw (earth-vertical) or roll (nasal-occipital) axis. Note that during the roll measurement, two eye blinks occurred (vertical peaks).
Figure 11.
 
Gain of eye responses during sinusoidal stimulation in the light (upper panel) and in darkness (lower panel) grouped per motion type. Mean responses of eight eyes are shown. Error bars indicate one SD. Significant differences between gains calculated from coil and video data at the 0.05 (*) and 0.01 (**) significance level are marked by asterisks. Note the different y-axis scaling.
References

1. Robinson DA. A method of measuring eye movement using a scleral search coil in a magnetic field. IEEE Trans Biomed Eng. 1963;10:137–145. [PubMed]
2. Collewijn H, van der Mark F, Jansen TC. Precise recording of human eye movements. Vision Res. 1975;15:447–450. [CrossRef] [PubMed]
3. Collewijn H, van der Steen J, Ferman L, Jansen TC. Human ocular counterroll: assessment of static and dynamic properties from electromagnetic scleral coil recordings. Exp Brain Res. 1985;59:185–196. [CrossRef] [PubMed]
4. Ferman L, Collewijn H, Jansen TC, Van den Berg AV. Human gaze stability in the horizontal, vertical and torsional direction during voluntary head movements, evaluated with a three-dimensional scleral induction coil technique. Vision Res. 1987;27:811–828. [CrossRef] [PubMed]
5. Bruno P, Van den Berg AV. Torsion during saccades between tertiary positions. Exp Brain Res. 1997;117:251–265. [CrossRef] [PubMed]
6. Ferman L, Collewijn H, Van den Berg AV. A direct test of Listing’s law. II. Human ocular torsion measured under dynamic conditions. Vision Res. 1987;27:939–951. [CrossRef] [PubMed]
7. Hooge IT, Van den Berg AV. Visually evoked cyclovergence and extended Listing’s law. J Neurophysiol. 2000;83:2757–2775. [PubMed]
8. Klier EM, Crawford JD. Human oculomotor system accounts for 3-D eye orientation in the visual-motor transformation for saccades. J Neurophysiol. 1998;80:2274–2294. [PubMed]
9. Schmid-Priscoveanu A, Straumann D, Kori AA. Torsional vestibulo-ocular reflex during whole-body oscillation in the upright and the supine position. I. Responses in healthy human subjects. Exp Brain Res. 2000;134:212–219. [CrossRef] [PubMed]
10. Straumann D, Zee DS, Solomon D, Kramer PD. Validity of Listing’s law during fixations, saccades, smooth pursuit eye movements, and blinks. Exp Brain Res. 1996;112:135–146. [PubMed]
11. Tweed D, Sievering D, Misslisch H, Fetter M, Zee D, Koenig E. Rotational kinematics of the human vestibuloocular reflex. I. Gain matrices. J Neurophysiol. 1994;72:2467–2479. [PubMed]
12. van der Geest JN, Frens MA. Recording eye movements with video-oculography and scleral search coils: a direct comparison of two methods. J Neurosci Methods. 2002;114:185–195. [CrossRef] [PubMed]
13. Clarke AH, Ditterich J, Drüen K, Schönfeld U, Steineke C. Using high frame-rate CMOS sensors for three-dimensional eye tracking. Behav Res Methods Instrum Comput. 2002;34:549–560. [CrossRef] [PubMed]
14. Houben MMJ, Goumans J, De Jongste AHC, van der Steen J. Angular and linear vestibulo-ocular responses in humans. Ann N Y Acad Sci. 2005;1039:68–80. [CrossRef] [PubMed]
15. Haslwanter T. Mathematics of three-dimensional eye rotations. Vision Res. 1995;35:1727–1739. [CrossRef] [PubMed]
16. Schor RH, Furman JM. The “practical mathematics” of recording three-dimensional eye position using scleral coils. Methods. 2001;25:164–185. [CrossRef] [PubMed]
17. Goossens HHLM, Van Opstal AJ. Human eye-head coordination in two dimensions under different sensorimotor conditions. Exp Brain Res. 1997;114:542–560. [CrossRef] [PubMed]
18. van der Steen J, Bruno P. Unequal amplitude saccades produced by aniseikonic patterns: effects of viewing distance. Vision Res. 1995;35:3459–3471. [CrossRef] [PubMed]
19. van der Steen J, Collewijn H. Ocular stability in the horizontal, frontal and sagittal planes in the rabbit. Exp Brain Res. 1984;56:263–274. [CrossRef] [PubMed]
20. Frens MA, van der Geest JN. Scleral search coils influence saccade dynamics. J Neurophysiol. 2002;88:692–698. [CrossRef] [PubMed]
21. Smeets JBJ, Hooge ITC. Nature of variability in saccades. J Neurophysiol. 2002;90:12–20.
22. Bergamin O, Bizzarri S, Straumann D. Ocular torsion during voluntary blinks in humans. Invest Ophthalmol Vis Sci. 2002;43:3438–3443. [PubMed]
23. Bergamin O, Ramat S, Straumann D, Zee DS. Influence of orientation of exiting wire of search coil annulus on torsion after saccades. Invest Ophthalmol Vis Sci. 2004;45:131–137. [CrossRef] [PubMed]
24. Bos JE, De Graaf B. Ocular torsion quantification with video images. IEEE Trans Biomed Eng. 1994;41:351–357. [CrossRef] [PubMed]
25. Groen E, Bos JE, Nacken PFM, De Graaf B. Determination of ocular torsion by means of automatic pattern recognition. IEEE Trans Biomed Eng. 1996;43:471–479. [CrossRef] [PubMed]
26. Hess BJM, Van Opstal AJ, Straumann D, Hepp K. Calibration of three-dimensional eye position using search coil signals in the Rhesus monkey. Vision Res. 1992;32:1647–1654. [CrossRef] [PubMed]