Video-Based Head Movement Compensation for Novel Haploscopic Eye-Tracking Apparatus
Author Affiliations
Kristina Irsch, Nicholas A. Ramey, Anton Kurz, David L. Guyton, Howard S. Ying
From the Krieger Children’s Eye Center at The Wilmer Institute, The Johns Hopkins University School of Medicine, Baltimore, Maryland; and the Kirchhoff Institute for Physics, University of Heidelberg, Heidelberg, Germany.
Investigative Ophthalmology & Visual Science, March 2009, Vol. 50, 1152–1157. https://doi.org/10.1167/iovs.08-2739
Abstract

Purpose. To describe a video-based head-tracking technique to compensate for torsional, horizontal, and vertical in-plane head movements during pupil/iris-tracking video-oculography with a tilting haploscope.

Methods. Custom software was developed for image acquisition and off-line analysis with a novel haploscopic viewing device. Head movements were constrained to the frontal plane with a bite plate and a forehead rest and were monitored by tracking black adhesive dots with white borders placed near the subject’s inner canthi. Dot and pupil positions were computed using feature detection based on the Hough transform modified for ellipses. From the inter-dot distance and the relative vertical shifts of the dots during head motion, each video frame was rotated and translated to remove the effects of head movement from the eye movement data. The technique was verified with a model head and with healthy subjects, who were asked to strain their heads purposefully against the bite plate during recording. Head-tracker performance during 45° head tilting was also assessed.

Results. Validation experiments with the model head indicated a linear relationship between true and measured positions, with a Pearson correlation coefficient of R = 1.00. For human subjects, binocular video-oculographic recordings showed essentially complete elimination of head movement artifacts from the recorded eye movements.

Conclusions. Tracking black dots placed near the inner canthi is an effective method of compensating for horizontal, vertical, and torsional in-plane head movements during pupil- and iris-crypt-based video-oculography.

With the primary objective of achieving better understanding of the mechanisms of cyclovertical strabismus, we recently constructed and reported a special haploscope that allows binocular assessment of simultaneous horizontal, vertical, and torsional eye movements through the use of video-oculography. 1 The eye-tracking apparatus of this novel research haploscope makes use of two apparatus-mounted webcams for pupil/iris-based video-oculography, so that a fixed head position with respect to the cameras is a major requirement: a small amount of head movement can appear erroneously as a large eye movement. Head-versus-camera motion was reduced by fixing the subject to the apparatus with his or her head against a forehead rest and with an adjustable and lockable rigid bite plate. However, small head movements still occurred during recording, especially during tilting of the whole apparatus, limiting the experiments to very cooperative subjects who were able to maintain good head stabilization.
Relative motion between the cameras and the eyes remains a potential source of persistent artifacts in video-based eye tracking, even with head-mounted systems, which are generally considered less susceptible to head-versus-camera movement because the cameras are affixed directly to the subject’s head by a helmet, forming a fixed unit with the eyes. In an experimental setup that requires investigation across a variety of head orientations, however, slippage of the helmet is possible and yields incorrect measurement of eye movements, even if the subject is believed to be immobilized by a bite plate or a chin rest. 2 As we seek to understand more about the mechanisms underlying cyclovertical strabismus, 3 reliable binocular eye movement recordings across different head-tilt positions are of special interest, but we cannot rely on extremely cooperative subjects who can keep their heads unnaturally still through an entire experiment.
In this article, we describe a head-tracking technique based on monitoring black dots with white borders placed near the subject’s medial canthi, locations that remain essentially stationary during eye movements, to compensate for in-plane head movements during pupil/iris-tracking video-oculography with our haploscopic eye-tracking apparatus.
Methods
Haploscopic Eye-Tracking Apparatus
The haploscope design has been described in detail. 1 Briefly, the haploscopic eye-tracking apparatus is a custom-built haploscope constructed from an old Bausch and Lomb arc perimeter, equipped with eye-tracking cameras for binocular video-oculographic recordings. The entire apparatus is capable of tilting up to 45° to the subject’s left or right. Figure 1 illustrates the apparatus in two views, drafted with a three-dimensional mechanical CAD (computer-aided design) program (SolidWorks; Dassault Systèmes S.A., Suresnes, France). The haploscope features a forehead rest and an adjustable bite plate to fix the subject in a set viewing position directly in front of two cold mirrors (Edmund Optics, Inc., Barrington, NJ) positioned at 45° from the midsagittal plane. An infrared light-emitting diode (880 nm; OD-50L; Opto Diode Corp., Inc., Newbury Park, CA) is mounted at the inferotemporal corner of each mirror to illuminate the respective eye of the subject. Near-infrared light reflected from the subject’s face and eyes is transmitted by the 45° mirrors to video cameras (Web Digital Camera, Hong Kong) on the other side of the mirrors. The 45° mirrors reflect, and thus optically superimpose, the images of the target patterns, each subtending more than 50°, that are attached to the arc perimeter’s arms on either side of the subject. The entire arc can be translated anteriorly or posteriorly to induce horizontal disparity, allowing convergence or divergence as necessary to help maintain fusion. An overhead lamp provides equal illumination to the two circular viewing targets. When the arc is tilted, one target moves upward and the other moves downward, inducing vertical disparity and requiring vertical vergence to maintain fusion. The targets may be rotated individually to provide torsional disparity. Just behind the cold mirrors, two apparatus-mounted infrared-sensitive cameras record the positions of the subject’s eyes. These USB video cameras are connected to a desktop computer that controls video frame capture using custom acquisition software (Matlab; Mathworks, Inc., Natick, MA) and stores the recorded binocular eye image sequences, time synchronized, for off-line analysis.
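For illustration, the acquisition step reduces to grabbing time-stamped frame pairs from the two webcams and storing them for off-line analysis. The sketch below is a minimal stand-in written in Python with OpenCV rather than the authors’ custom MATLAB software; the device indices and frame count are assumptions.

```python
# Minimal sketch of time-synchronized binocular frame capture; OpenCV and
# the device indices stand in for the custom MATLAB acquisition software.
import time
import cv2

left_cam = cv2.VideoCapture(0)   # assumed device index, left-eye camera
right_cam = cv2.VideoCapture(1)  # assumed device index, right-eye camera

frames = []  # (timestamp, left frame, right frame) triples for off-line analysis
for _ in range(300):  # e.g., about 10 s at 30 frames/s
    ok_left, img_left = left_cam.read()
    ok_right, img_right = right_cam.read()
    if ok_left and ok_right:
        frames.append((time.time(), img_left, img_right))

left_cam.release()
right_cam.release()
```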
Eye movements are calculated with commercial eye-tracking software (IRIS; Chronos Vision, Berlin, Germany) using pupil detection and iris orientation. Pupil detection uses the Hough transform to detect edges in the image and then fits ellipses to the edges. 4 5 6 A polar correlation algorithm is used to calculate torsional eye position, relying on user-defined, pupil-concentric arcs of iris detail. 7 During the frame-by-frame evaluation, the image intensity values of the iris segment are convolved with values adjacent to the arc. The convolution function peaks where the match is strongest, and the offset at the peak is recorded as torsional movement.
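The polar correlation principle can be illustrated with a short sketch: iris gray levels are sampled along a pupil-concentric arc in a reference frame and in the current frame, and the angular offset at the peak of their circular cross-correlation is read out as torsion. This is a minimal illustration of the principle, not the Chronos IRIS implementation; the fixed pupil center and the full 360° sampling arc are simplifying assumptions.

```python
# Minimal sketch of polar-correlation torsion estimation (illustrative only).
import numpy as np

def iris_profile(image, cx, cy, radius, angles_deg):
    """Sample gray levels along a pupil-concentric arc of the iris."""
    theta = np.deg2rad(angles_deg)
    xs = (cx + radius * np.cos(theta)).astype(int)
    ys = (cy + radius * np.sin(theta)).astype(int)
    return image[ys, xs].astype(float)

def torsion_deg(ref_image, cur_image, cx, cy, radius, step_deg=0.1):
    """Return torsion of cur_image relative to ref_image, in degrees."""
    angles = np.arange(0.0, 360.0, step_deg)
    ref = iris_profile(ref_image, cx, cy, radius, angles)
    cur = iris_profile(cur_image, cx, cy, radius, angles)
    ref -= ref.mean()
    cur -= cur.mean()
    # Circular cross-correlation via FFT; the peak lag is the angular offset.
    corr = np.real(np.fft.ifft(np.fft.fft(ref).conj() * np.fft.fft(cur)))
    deg = np.argmax(corr) * step_deg
    return deg if deg <= 180.0 else deg - 360.0  # wrap to (-180, 180]
```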
Head-Tracking Technique
The head tracker must track subject head motion reliably and accurately over the duration of an entire haploscope experiment. To achieve this objective, it makes use of the same video-oculographic techniques and software used for eye tracking, which we previously validated. 1 Subjects are positioned in the haploscope as described. One black adhesive dot (2 mm) with a white border is affixed near each medial canthus (Fig. 2A). The medial canthus is chosen for its stability and relative insensitivity to facial expressions. 8 This location also appears in the video-oculographic image used for eye tracking. The area containing the black dot is extracted from each video frame, and separate image sequence files for the black dot and the pupil are created; this prevents confusion between dot and pupil tracking. The dot’s image is processed in the same manner as the pupil, using the Chronos off-line algorithm for feature detection based on the Hough transform modified for ellipses, which provides an accurate black dot position for each video frame. The following assumptions are used for calculating head motion in image space: head movement in the vertical and horizontal dimensions can be estimated directly from relative frame-to-frame dot movement; the inter-dot distance (Fig. 2B, D) is constant; and the dot location relative to its location in a defined reference image (Fig. 2C, ΔR, ΔL) can be used to calculate head tilt (Fig. 2D). This horizontal, vertical, and tilt information is then used to correct eye position.
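Under these assumptions, the in-plane head pose reduces to simple geometry on the two dot centers: the shift of the dot midpoint gives horizontal and vertical translation, and the relative vertical shifts of the dots over the inter-dot distance D give the tilt Φ (Fig. 2C, 2D). The sketch below is our illustrative reading of that geometry, expressed in a single image coordinate frame; it is not the authors’ code, and the sign conventions depend on the image coordinate system.

```python
# Illustrative in-plane head-pose estimate from the two tracked dot centers.
import math

def head_pose(ref_right, ref_left, cur_right, cur_left):
    """Return (dx, dy, tilt_deg) of the head relative to the reference frame.

    Each argument is an (x, y) dot-center position in image coordinates.
    """
    # Translation: movement of the dot midpoint relative to the reference.
    dx = ((cur_right[0] + cur_left[0]) - (ref_right[0] + ref_left[0])) / 2.0
    dy = ((cur_right[1] + cur_left[1]) - (ref_right[1] + ref_left[1])) / 2.0
    # Tilt: relative vertical shifts of the dots over the inter-dot
    # distance D (Fig. 2C, 2D): tan(phi) = (dL - dR) / D.
    d = math.hypot(ref_left[0] - ref_right[0], ref_left[1] - ref_right[1])
    delta_r = cur_right[1] - ref_right[1]
    delta_l = cur_left[1] - ref_left[1]
    tilt_deg = math.degrees(math.atan2(delta_l - delta_r, d))
    return dx, dy, tilt_deg
```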
The stepwise procedure for eliminating head motion from the eye movement data is as follows: (1) the video-oculographic recordings are compensated for head tilt by counter-rotating each video frame by the calculated amount of head tilt, as described; (2) the resultant tilt-compensated eye and dot images are processed separately (IRIS; Chronos Vision), and the estimated horizontal and vertical head translation is subtracted from the corresponding pupil positions; and (3) the compensated horizontal and vertical eye positions are converted into Fick coordinates, with calibration performed for each subject.
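A minimal sketch of these steps, with OpenCV standing in for the actual processing chain, might look as follows: `counter_rotate` implements step 1 and `remove_translation` step 2, while the conversion to Fick coordinates (step 3) depends on the per-subject calibration and is omitted here.

```python
# Illustrative compensation steps; OpenCV stands in for the Chronos pipeline.
import cv2

def counter_rotate(frame, tilt_deg, center):
    """Step 1: remove measured head tilt by rotating the frame by -tilt.

    `center` is an (x, y) tuple of floats about which to rotate.
    """
    rot = cv2.getRotationMatrix2D(center, -tilt_deg, 1.0)
    h, w = frame.shape[:2]
    return cv2.warpAffine(frame, rot, (w, h))

def remove_translation(pupil_xy, dx, dy):
    """Step 2: subtract measured head translation from a tracked pupil position."""
    return pupil_xy[0] - dx, pupil_xy[1] - dy
```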
Verification with Model Head
To validate the dot-based head-tracking technique, a model head and human subjects were used. The model head could simulate motion with 3 degrees of freedom (df): in-plane tilt and horizontal and vertical translation. It was constructed from two linear stages (PT1; Thorlabs, Newton, NJ) for translational movements and a rotary stage (PR01; Thorlabs) for simulation of head tilt, and it featured two adhesive black dots with white borders placed 25 mm apart. The model head was attached to the haploscope as a human subject would be aligned, with each black-on-white dot visible in its respective camera image and the model head parallel to the image plane. Separate trials were conducted to assess each degree of freedom. The model head was manually driven through ±10 mm in 1-mm steps in each translational dimension and through ±10° in 1° increments about the tilt axis. Fifty frames were acquired at each of the 21 incremental steps for each dimension. Translation and tilt increments were verified with a micrometer (±0.1 mm and ±0.1°).
Statistical Analysis
The recorded dot images were processed using pupil detection, as described, and the Pearson product-moment correlation coefficient (R) was calculated to determine the degree of linear relationship between the head-tracker output and the model head position. To assess the accuracy of the in-plane head-tracking technique, root mean square (RMS) residuals were determined for each degree of freedom by computing the square root of the mean of squared differences between true and tracked positions at each collected increment. To determine precision for each degree of freedom, the standard deviation of the 50 measurements at each incremental step was calculated; the mean and maximum of these 21 standard deviations were then used as measures of the head tracker’s precision.
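These statistics are straightforward to express in code. The sketch below assumes `true_pos` holds the 21 commanded positions for one degree of freedom and `tracked` is a 21 × 50 array of the repeated tracker readings at each increment; the variable names are ours.

```python
# Illustrative computation of the accuracy and precision statistics above.
import numpy as np

def validation_stats(true_pos, tracked):
    """true_pos: shape (21,); tracked: shape (21, 50) repeated readings."""
    mean_tracked = tracked.mean(axis=1)
    # Pearson product-moment correlation between true and tracked positions.
    r = np.corrcoef(true_pos, mean_tracked)[0, 1]
    # Accuracy: RMS of residuals between true and tracked positions.
    rms = np.sqrt(np.mean((mean_tracked - true_pos) ** 2))
    # Precision: SD of the 50 measurements at each increment, summarized
    # by its mean and maximum over the 21 steps.
    sds = tracked.std(axis=1)
    return r, rms, sds.mean(), sds.max()
```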
Verification with Human Subjects
To test the ability of the video-based head tracker to monitor and compensate for real head movements, its performance was also evaluated with human subjects. Healthy subjects with good visual acuity at near (>J1) were recruited for the investigations, which were approved by The Johns Hopkins University Institutional Review Board and adhered to the tenets of the Declaration of Helsinki. Before the experiment, informed consent was obtained from all subjects after the nature and possible consequences of the study were explained. 
In previous trials, subjects were asked to hold their heads extremely still on the bite plate. In our study, video-oculographic images of both eyes and dots were recorded as the subjects were instructed to strain their heads purposefully against the bite plate in the horizontal, vertical, and torsional dimensions while maintaining central fixation on the superimposed targets. 
Next we investigated the performance of the head tracker during a regular tilting experiment. We did not instruct the subjects to hold their heads still but simply to bite down on the bite plate with the head against the forehead rest. During the tilting experiment, the whole apparatus was slowly tilted 45° to the right and to the left. The three-dimensional eye movement recordings were analyzed with and without head-tracker compensation. 
Results
Verification Experiments
Model head experiments showed good correlation between model head position and head-tracker position. The RMS residual, a measure of accuracy, reached maxima of 0.31 mm, 0.11 mm, and 0.23° in the horizontal, vertical, and tilt dimensions, respectively. The maximum standard deviation, a measure of precision, reached 0.02 mm, 0.06 mm, and 0.02°. Mean values are presented in Table 1. Figure 3 correlates the model head with head-tracker data for incremental distances traversed along each axis. The Pearson product-moment correlation coefficients were R = 1.00, with unit slopes for each axis.
To assess head-tracker performance with human subjects, two experiments were conducted. In the first, subjects were positioned in the haploscope and asked to move their heads against the resistance provided by the rigid bite plate. Eye-tracker data from one representative subject are presented in Figure 4, recorded as the subject attempted to tilt his head to the right and then to the left against the bite plate. This subject produced uncompensated horizontal and vertical eye movements, from the eye tracker’s perspective (eye + head movement), of approximately 11° and 5°, respectively. After subtraction of the head movement, the true eye position was obtained.
In the second experiment, subjects were secured with the bite plate and forehead rest and asked to look straight ahead as the entire apparatus, including the subject, was slowly tilted 45° to either side. Figure 5 shows a typical example of a left tilt. Although the bite plate and forehead rest were in place during the tilting experiment, the eyes appeared to deviate approximately 2.5° horizontally and up to 2° vertically. From the eye tracker’s perspective, the subject seemed to develop a right hyperdeviation with left head tilt, in addition to the expected ocular counter-roll of both eyes in the clockwise direction. This apparent right hyperdeviation and leftward deviation of both eyes, present in the uncompensated tracings, were greatly reduced in the traces compensated for head-versus-camera motion.
Discussion
The video-based head tracker described in this article enables compensation for horizontal, vertical, and torsional in-plane head movements during pupil/iris-tracking video-oculography with our tilting haploscope. Head motion on the bite plate was monitored with two adhesive dots affixed near the medial canthi, tracked with the same feature-detection method already applied for pupil tracking. As a result, a strictly fixed head position is no longer a major requirement, making the procedure more comfortable for the subject. Furthermore, this technique reduces the need for calibration to a single session before the experiment, performed to obtain the individual transformation information later required for conversion from two-dimensional video-oculographic image space to three-dimensional eye coordinate space (Fick coordinates).
Various other methods have been proposed to compensate for head-versus-camera movement, 9 10 11 12 13 but all require additional hardware or have resolution deficiencies. For example, although commercial video-oculography-based systems are available that account for head movements (Applied Science Laboratories, SR Research, EyeLink, Chronos Vision), they rely on additional sensors attached to the subject’s head and track sensor motion. Not only can these sensors distract the subject, but their mounting is critical: tracking errors will be introduced if proper positioning is not achieved. Ronsse et al. 9 suggested tracking three markers affixed to the subject’s head to determine head position, but measuring the positions of these points requires an additional camera. Other approaches, such as those proposed by Yoo et al. 10 and Zhu et al., 11 track corneal light reflexes in addition to the pupil to eliminate the influence of head movement. However, the relatively small pixel size of the corneal light reflex, and hence its low feature resolution, may cause inexact detection and thereby decrease tracking accuracy. 10 Image-based methods have also been reported that estimate head position from distinctive facial features, such as eye and mouth corners. 12 13 These methods rely heavily on complex algorithms, and the whole face must be visible to extract the features, potentially decreasing the resolution of eye tracking because of the reduced pixel size of the eye’s image with respect to the whole image frame. 13
Our head-tracking solution does not require additional hardware or complex algorithms, and the subject is relieved of any additional equipment attached to the head. The main advantage of our dot-based head-tracking technique is that it provides a simple means for head movement compensation during pupil/iris-tracking video-oculographic recordings and can be incorporated into existing video-oculographic systems without loss of eye-tracking resolution. 
Our validation experiments showed that head-tracker measurements and model head position were highly correlated (R = 1.00) for all three degrees of freedom in the frontal plane. These results for accuracy and precision of the head tracker are consistent with those previously reported for eye tracking. 1  
Assessment of head-tracker performance with human subjects confirmed its ability to reduce erroneous eye movements. The remaining horizontal and vertical eye displacements seen in Figure 4B represent the expected physiologic eye movements required to maintain fixation on the central target when tilting against the bite plate. The residual torsion is in the same direction as the head tilt, as is characteristically seen when an attempt is made to tilt the head rapidly, as was the case here. 14
Some limitations of our method should be discussed. First, even though the medial canthus has been reported to be the most stable feature of the face and is relatively insensitive to facial expressions, 8 we are aware that facial gestures can still cause the dots to move (appearing as head motion from the head tracker’s perspective). Our algorithm accounts for such false head movement by monitoring the inter-dot distance. In a separate experiment, we examined the movements of the medial canthi induced by facial gestures such as brow lifting, brow lowering, frowning, and forced eye closing. All these gestures changed the distance between the right and left dots, so whenever the head tracker detected a change greater than 1% in the subject’s fixed inter-dot distance, the motion was identified as a facial gesture and the accompanying segment of recorded data was removed from further analysis.
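This rejection rule amounts to a per-frame test against the subject’s baseline inter-dot distance. A minimal sketch, using the 1% tolerance from above (variable names are ours):

```python
# Flag frames in which the inter-dot distance departs from baseline,
# indicating a facial gesture rather than rigid head motion.
import numpy as np

def gesture_mask(interdot_dist, baseline, tol=0.01):
    """Return a boolean mask over frames; True marks frames to keep."""
    d = np.asarray(interdot_dist)
    return np.abs(d - baseline) <= tol * baseline
```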
Second, head movements that brought dots outside the field of view of each camera could not be compensated. The largest horizontal and vertical translations we could compensate for were 15.99 mm and 17.98 mm, respectively. Head tilts of up to 32.52° could be compensated for. The presence of the bite plate, however, restricted large head movements, and both cameras were adjusted for each subject before the experiment to ensure that the pupils and dots were centered within the viewing field of each webcam. 
Third, out-of-plane head movements were not accurately compensated. A complete description of head motion would require 6 df (3 translational, 3 rotational). However, the purpose of our head tracker is to compensate for residual movement on the bite plate and forehead rest, which effectively impede head motion in the remaining 3 out-of-plane df. The largest head movement expected from our specific haploscopic experimental design occurred while the apparatus and the upper body of the subject were tilted 45° to the left or right, producing predominantly false horizontal eye movement. Assessment of head-tracker performance during such a tilting experiment showed that our method was capable of compensating for the head-versus-camera motion that occurred during the tilting process. For the range of head motion expected on the bite plate during an experiment, the simplification of considering only in-plane head motion appears reasonable.
In conclusion, tracking black dots placed near the inner canthi is a simple but effective method of compensating for horizontal, vertical, and torsional in-plane head movements during pupil/iris-based video-oculography. It will allow future investigations with our tilting haploscope in less cooperative subjects than has previously been possible, including children. Studying patients in addition to healthy subjects, especially patients with congenital and acquired forms of superior oblique paresis, is of major importance in further exploring our hypothesis that basic cyclovertical deviations can masquerade as congenital superior oblique paresis. 3 Our goals are to better understand the mechanisms underlying cyclovertical strabismus and, more generally, the mechanisms of strabismus.
Figure 1.
The haploscope is illustrated in two views. (A) Oblique view of the entire apparatus tilted 45° to the subject’s right; note the overhead lamp, angled mirrors, and opposed viewing targets. (B) Close-up view of the apparatus-mounted infrared-sensitive camera configuration, positioned behind the cold mirrors; note the inferotemporal infrared light-emitting diodes, bite plate, and forehead rest.
Figure 2.
 
Head-tracking mechanism. (A) Representative images show black dots with white borders placed near the subject’s inner canthi, used for monitoring horizontal, vertical, and torsional in-plane head movements. Dot detection uses the Hough transform to detect edges in the image, then fits ellipses to the edges (B). The inter-dot distance (D) and the relative vertical shifts (ΔR and ΔL) of the dots with respect to the reference position are found (C) and are used to calculate the head tilt (Φ) (D).
Table 1.
 
Accuracy and Precision with Model Head
                 Head Tilt (°)   Horizontal (mm)           Vertical (mm)
                                 Right Cam.   Left Cam.    Right Cam.   Left Cam.
RMS residuals    0.11            0.13         0.13         0.05         0.06
SD               0.01            0.01         0.01         0.01         0.01
Figure 3.
 
Correlation between video-based head tracker (HT) and model for torsional (A), horizontal (B), and vertical (C) head movements. Black squares and gray triangles represent measured dot positions in the left and right camera images, respectively. R = 1.00 between HT and model head in each panel, and the least-squares trend line fitted to the left and right camera data had a slope of 1.00.
Figure 4.
 
Video-oculographic eye movement recording during purposefully generated head motion (A) without and (B) with head movement compensation. Horizontal, vertical, and torsional eye positions in degrees are plotted versus time in seconds. Upward deflections of the horizontal and vertical tracings correspond to rightward and upward eye rotations, respectively, and positive torsion represents clockwise eye movements, all from the subject’s perspective looking forward. A healthy subject was continuously recorded in straight head position but with attempted right head tilt, and then left head tilt, against the bite plate. Modified from Irsch K, et al. IOVS 2008;49:ARVO E-Abstract 1803.
Figure 5.
 
Head-tracker performance during a haploscopic tilting experiment. Horizontal, vertical, and torsional eye positions in degrees are plotted versus time in seconds. Upward deflections of the horizontal and vertical tracings correspond to rightward and upward eye rotations, respectively, and positive torsion represents clockwise eye movements, all from the subject’s perspective looking forward. (A) Noncompensated video-oculographic eye movement recording from a healthy subject shows apparent eye positions, recorded in an upright head position, then tilted to the left 45°, then upright again. (B) Same recording is corrected for head-versus-camera motion occurring during the tilting process.
 
References

1. Ramey NA, Ying HS, Irsch K, Müllenbroich MC, Vaswani R, Guyton DL. A novel haploscopic viewing apparatus with a three-axis eye tracker. J AAPOS. 2008;12:498–503.
2. Houben MMJ, Goumans J, van der Steen J. Recording three-dimensional eye movements: scleral search coils versus video oculography. Invest Ophthalmol Vis Sci. 2006;47:179–187.
3. Guyton DL. Ocular torsion reveals the mechanisms of cyclovertical strabismus: the Weisenfeld Lecture. Invest Ophthalmol Vis Sci. 2008;49:847–857.
4. Hough PVC. Methods and means for recognizing complex patterns. US patent 3,069,654. December 18, 1962.
5. Duda RO, Hart PE. Use of the Hough transformation to detect lines and curves in pictures. Commun ACM. 1972;15:11–15.
6. Tsuji S, Matsumoto F. Detection of ellipses by a modified Hough transform. IEEE Trans Comput. 1978;27:777–781.
7. Hatamian M, Anderson DJ. Design considerations for a real-time ocular counterroll instrument. IEEE Trans Biomed Eng. 1983;30:278–288.
8. Tian Y, Kanade T, Cohn J. Dual-state parametric eye tracking. Presented at: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition; March 28–30, 2000; Grenoble, France.
9. Ronsse R, White O, Lefèvre P. Computation of gaze orientation under unstrained head movements. J Neurosci Methods. 2007;159:158–169.
10. Yoo DH, Chung MJ. A novel non-intrusive eye gaze estimation using cross-ratio under large head motion. Comput Vision Image Understanding. 2005;98:25–51.
11. Zhu Z, Qiang J. Eye gaze tracking under natural head movements. Presented at: Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition; June 20–25, 2005; San Diego, CA.
12. Wang JG, Sung E. Pose determination of human faces by using vanishing points. Pattern Recognition. 2001;34:2427–2445.
13. Matsumoto Y, Zelinsky A. An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement. Presented at: Proceedings of the Fourth IEEE International Conference on Automatic Face and Gesture Recognition; March 28–30, 2000; Grenoble, France.
14. Schworm HD, Ygge J, Pansell T, Lennerstrand G. Assessment of ocular counterroll during head tilt using binocular video oculography. Invest Ophthalmol Vis Sci. 2002;43:662–667.