The spatially distributed features of the iris make it ideal for video-based tracking, the method employed in eye-tracking devices to follow a subject's gaze.14 So far, however, video-oculography methods have mostly been limited to detecting the pupil border or the limbus to determine the orientation of the eye.15 Most studies have accomplished gaze-angle detection with filter-, intensity-, or appearance-based image template matching methods that exploit photometric properties of the eye region. Such methods typically detect the iris, pupil, and/or eye corners by searching for specific edge or blob patterns, which is especially useful for low-resolution eye images or long-distance recordings "in the wild."16

Pupil-based methods, however, come with several disadvantages, mostly related to artifacts and measurement inaccuracies caused by properties of the pupil itself. Pupil-based gaze-angle detection depends on locating the center of the pupil.17 The center position, however, depends on the shape of the pupil, which varies (so far) unpredictably as a function of pupil size,18 causing errors in position estimation.19,20 In addition, the pupil border is highly deformable12 and may wobble during fast rotational accelerations or decelerations of the eye,21,22 introducing small but significant errors in eye-position and saccade-velocity estimates. Finally, the pupil border provides only a limited number of unique tracking points. Detecting the limbus (i.e., the sclera–iris border) rather than the pupil–iris border23–27 solves some of these problems but introduces another: occlusion. Because of its peripheral location, the limbus is more likely to be occluded by the upper and lower eyelids, sometimes leaving only a small portion of the limbus visible, which may lead to inaccurate elliptical model fitting.
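The occlusion problem can be illustrated with a minimal sketch (not taken from any of the cited studies; all function names and parameter values are illustrative assumptions): a direct least-squares conic fit recovers the ellipse center reliably from a full limbus boundary, but becomes noise-sensitive when only a short arc survives eyelid occlusion.

```python
# Sketch: least-squares conic fit to boundary points, showing why an
# ellipse fitted to a short visible limbus arc is ill-conditioned.
import numpy as np

def fit_conic_center(x, y):
    """Fit a general conic a x^2 + b xy + c y^2 + d x + e y + f = 0
    to boundary points and return the center of the fitted conic."""
    D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    # The conic coefficients span the (approximate) null space of D.
    _, _, vt = np.linalg.svd(D)
    a, b, c, d, e, f = vt[-1]
    # Center: the point where the gradient of the quadratic form vanishes.
    cx, cy = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
    return cx, cy

def ellipse_points(cx, cy, rx, ry, theta):
    """Sample points on an axis-aligned ellipse (hypothetical limbus)."""
    return cx + rx * np.cos(theta), cy + ry * np.sin(theta)

# Full limbus boundary: the fitted center matches the true center closely.
t_full = np.linspace(0, 2 * np.pi, 100, endpoint=False)
x, y = ellipse_points(5.0, -2.0, 6.0, 5.5, t_full)
print(fit_conic_center(x, y))  # ~ (5.0, -2.0)

# Eyelid occlusion: only a 60-degree arc is visible. With even sub-pixel
# noise, the fit is far more sensitive and the recovered center can drift.
rng = np.random.default_rng(0)
t_arc = np.linspace(-np.pi / 6, np.pi / 6, 30)
xa, ya = ellipse_points(5.0, -2.0, 6.0, 5.5, t_arc)
xa += rng.normal(scale=0.01, size=xa.shape)
ya += rng.normal(scale=0.01, size=ya.shape)
print(fit_conic_center(xa, ya))  # typically noticeably off-center
```

The same center-extraction step also underlies pupil-based tracking, which is why shape deformation of the pupil border translates directly into position error.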