Abstract
Purpose:
Locating the eye region is an important prerequisite for accurately extracting eye movement information from video footage. This is particularly challenging when the face is partially occluded (e.g. to enable monocular viewing) and when head motion is present. We determined the eye-region detection success rate of a new approach, based on the Viola-Jones algorithm, using video footage of partially occluded faces.
Methods:
In a prospective, observational study, adult participants (N = 22) were recorded with an infrared camera as they viewed drifting arrays of annular optotypes designed to induce optokinetic nystagmus (5 sec/trial, 55 trials per eye). Participants’ heads were not restrained. Stimuli were viewed monocularly, and four occlusion methods were used for the non-viewing eye: a hand (n = 1), a phone (n = 3), a book (n = 6), or an eye patch (n = 12). The Viola-Jones algorithm was trained to detect eye regions using 568 positive and 1102 negative images obtained from: 1) our own recordings; 2) the MIT face database; and 3) the FEI face database. A z-score method was applied in post-processing to remove falsely detected eye regions. A commercially available face and eye tracker (Visage SDK, Visage Technologies, Netherlands) was used as a benchmark. The success rate of each approach was calculated as the number of successfully detected eyes divided by the total number of eyes.
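As a rough illustration only (not the study's actual code), the detection and z-score post-processing steps described above could be sketched with OpenCV as follows; the cascade file name, the z-score threshold of 2, and the choice of filtering candidate detections by their centre coordinates are assumptions not specified in the abstract.

    import cv2
    import numpy as np

    CASCADE_PATH = "trained_eye_cascade.xml"  # assumption: cascade trained on the positive/negative image set
    Z_THRESHOLD = 2.0                         # assumption: the abstract does not give the threshold used

    eye_cascade = cv2.CascadeClassifier(CASCADE_PATH)

    def detect_eye_regions(frame_gray):
        # Run the trained Viola-Jones cascade on one greyscale video frame.
        return eye_cascade.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)

    def zscore_filter(detections):
        # Discard candidate eye regions whose centre deviates strongly from the
        # mean of the remaining candidates (a simple z-score outlier test).
        if len(detections) < 2:
            return detections
        centres = np.array([(x + w / 2, y + h / 2) for (x, y, w, h) in detections])
        z = np.abs((centres - centres.mean(axis=0)) / (centres.std(axis=0) + 1e-9))
        keep = (z < Z_THRESHOLD).all(axis=1)
        return [d for d, k in zip(detections, keep) if k]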
Results:
The median overall success rate for the new method was 100% (IQR = [99.25%, 100%]). Only 4/44 videos had a success rate lower than 87%, whereas the Visage SDK failed to detect the monocularly viewing eye in 55% of the 22 participants. In a test set of 3 videos, a review indicated that the z-score approach improved detection from 52.5%, 100%, and 94.4% to 100%, 96.98%, and 100%, respectively.
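For completeness, a minimal sketch of how a per-video success rate and the median/IQR summary could be computed; the function names and example values are purely illustrative assumptions, not the study's data.

    import numpy as np

    def video_success_rate(n_detected, n_total):
        # Per-video success rate: successfully detected eye regions / total eyes, in %.
        return 100.0 * n_detected / n_total

    def summarise(rates):
        # Median and interquartile range across videos, as reported in the Results.
        median = np.median(rates)
        q1, q3 = np.percentile(rates, [25, 75])
        return median, (q1, q3)

    # Illustrative values only; the study analysed 44 videos.
    example_rates = [100.0, 99.3, 98.9, 100.0]
    print(summarise(example_rates))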
Conclusions:
A tool for detecting eyes within partially covered, moving faces was successfully developed. The post-processing z-score method is a promising approach that improves eye-region detection in video.
This is an abstract that was submitted for the 2018 ARVO Annual Meeting, held in Honolulu, Hawaii, April 29 - May 3, 2018.