CVC Eye-Tracking DB

This is an eye-tracking dataset of 48 videos recorded from 12 subjects, which can be used to test the performance of an eye-tracking system. It is intended as an easy way to evaluate accuracy in several scenarios. For easy access, the videos are divided into 4 groups according to the experimental setup used.
The dataset contains the videos recorded during the eye-tracking experiments for testing the accuracy of the CVC Eye-Tracker. Out of the 18 original subjects, videos for 5 subjects were left out as they did not give permission for inclusion in a public dataset. Another subject's videos were also excluded because of poor concentration. The final dataset contains 12 subjects, with 4 different setups for each subject.

File structure

The videos for each subject are located in folders named after the subject's order number during the experiments. Each of these folders contains 8 files: 2 files (1 video file and 1 saved commands file) for each of the 4 experimental setups. The name of the video file has the following structure:

SUBJECTNO_SETUP_RESOLUTION.avi (e.g. 01_std_720.avi)

This file contains the recorded face video of the subject for a single experimental setup.
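
For example, the videos can be enumerated and their names split into these components with a few lines of Python. The sketch below is ours (the dataset does not prescribe any language or API) and assumes the subject folders are named after the two-digit order numbers, e.g. 01/:

import glob
import os

def list_videos(dataset_root):
    # Yield (subject_no, setup, resolution, path) for every video file found.
    # Assumes the subject folders are named after the subject's order number
    # (e.g. 01/) and that file names follow SUBJECTNO_SETUP_RESOLUTION.avi.
    for path in sorted(glob.glob(os.path.join(dataset_root, "*", "*.avi"))):
        name = os.path.splitext(os.path.basename(path))[0]   # e.g. "01_std_720"
        subject_no, setup, resolution = name.split("_", 2)
        yield subject_no, setup, resolution, path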

Commands file

The second file (the commands file), which is in text format, is named as follows:

SUBJECTNO_SETUP_RESOLUTION_commands.txt (e.g. 01_std_720_commands.txt)

This file contains the frame numbers where specific actions (SELECT, CALIBRATE, TEST) were taken during the experiments.

SELECT performs the selection of feature points on the face. In the proposed setup, a combination of Viola-Jones detectors, a novel eye-corner detection algorithm and geometrical constraints is used to select a total of 8 feature points (eye corners, eyebrow corners, nostrils, mouth corners). Applications using the dataset are strongly recommended to use this frame for point selection and other initialization tasks, because the face and eyes are mostly stable and facing the stimuli display directly at this instant. Otherwise, any other frame before the calibration procedure may be used as desired.
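
The novel eye-corner detector of the original system is not distributed with the dataset, but as a rough sketch of the kind of initialization that can be done on the SELECT frame, the snippet below uses OpenCV's stock Viola-Jones cascades (our choice, not the authors' implementation) to locate the face and eye regions in that frame:

import cv2

def init_on_select_frame(video_path, select_frame):
    # Grab the SELECT frame (frame numbering starts at 1) and run stock
    # Viola-Jones face and eye detection on it. This only sketches the
    # initialization step; the original system additionally uses a custom
    # eye-corner detector and geometric constraints to pick the 8 points.
    cap = cv2.VideoCapture(video_path)
    cap.set(cv2.CAP_PROP_POS_FRAMES, select_frame - 1)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise IOError("could not read frame %d" % select_frame)

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    eye_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_eye.xml")

    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    eyes = []
    for (x, y, w, h) in faces:
        roi = gray[y:y + h, x:x + w]
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi, 1.1, 5):
            eyes.append((x + ex, y + ey, ew, eh))   # back to full-frame coords
    return faces, eyes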

CALIBRATE initialises the calibration procedure. The calibration target is shown at several locations on the stimuli display, one after another. The target stays at each location for 30 frames. Out of these 30 frames, the initial 10 frames should be excluded from data collection, as this is the time needed for the subject to fix his/her gaze on the target. The locations of the target on the stimuli display, and their order, are explained in the next section of this document.

Finally, TEST runs the testing procedure. The test target is shown at the same locations as in calibration; however, in this phase it stays at each location for only 20 frames. Again, the first 10 frames should be discarded and only the estimations for the last 10 frames should be considered.
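
Since calibration and testing only differ in the number of frames per target (30 vs. 20), the usable frame window for a given target can be computed with one small helper. The sketch below is ours (the names are not part of the dataset); for the sample commands file shown later, usable_frames(151, 0, 30) returns (161, 180) and usable_frames(621, 0, 20) returns (631, 640):

def usable_frames(phase_start, target_index, frames_per_target, skip=10):
    # Return the inclusive (first, last) usable frame numbers for one target.
    #   phase_start       -- frame number of the CALIBRATE or TEST command
    #   target_index      -- 0-based index of the target in the setup file
    #   frames_per_target -- 30 during calibration, 20 during testing
    #   skip              -- initial frames discarded while the gaze settles
    first = phase_start + target_index * frames_per_target + skip
    last = phase_start + (target_index + 1) * frames_per_target - 1
    return first, last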

Row format

Each row in a commands file shows the frame number and the action taken on that frame. For example:

100 SELECT
151 CALIBRATE
621 TEST
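
Such a file can be loaded with a few lines of Python; the sketch below (the function name is ours) returns a mapping from action name to frame number:

def read_commands(path):
    # Parse a *_commands.txt file into {action: frame_number},
    # e.g. {"SELECT": 100, "CALIBRATE": 151, "TEST": 621}.
    commands = {}
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) == 2:
                frame_no, action = parts
                commands[action] = int(frame_no)
    return commands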

Experimental Setups

Four different experimental setups are included in the dataset.

Standard setup
This setup corresponds to the common configuration targeted by the proposed eye-tracker software: a processing device (computer, smart phone, etc.) and an ordinary webcam (or smart phone camera). The proposed system provides a cheap alternative for applications where commercial eye-trackers are infeasible. The positions and order of training and test targets can be found under the setups/std folder under the dataset root folder.


Extreme camera placement
Subject is facing the stimuli display's center from 80 cm away. The camera is located on top of the stimuli display, 19.4 cm towards the left from the center. The positions and order of training and test targets can be found under the setups/ext folder under the dataset root folder. This setup may be used to observe the effects of camera placement.


Chinrest
Subject is facing the stimuli display's center from 80 cm away. The camera is located on top of the stimuli display, in the center. A chinrest is employed to stabilize the subject's head pose. The positions and order of training and test targets can be found under the setups/std-cr folder under the dataset root folder. This setup may be used to observe the effects of head pose stability.


iPad
Subject is facing the stimuli display's center from 40 cm away. The camera is located on top of the stimuli display, in the center. The positions and order of training and test targets can be found under the setups/ipad folder under the dataset root folder. This setup may be used to test for mobile device scenarios.

The dimensions and resolution of the monitor, and other useful information about the setups can be found in the related master’s thesis, included in the database.

Using the dataset

A system designed to use this dataset needs to keep track of the frame count at all times (the count starts at 1 with the initial frame). Before starting to process a video, the system should also read the commands file and the setup file (e.g. DATASET/setups/std/calpoints.txt) into memory, and then react to the commands as follows (a minimal processing loop is sketched after the list below):

SELECT:
(optionally) choose the necessary feature points on the face or carry out any other initialization as needed.
CALIBRATE:
start training, skipping the first 10 frames and using the next 20 frames for each training target. This phase lasts 30 * (target count) frames in total (for std setup 30*15=450 frames).
TEST:
start estimating the gaze point. For each target, the first 10 frames may either be skipped in this phase or excluded from the error calculation later. Also, if a blink detector is implemented, blinking frames may optionally be skipped or excluded from the error calculations.
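
As a minimal sketch of such a processing loop (the tracker object with select/train/estimate methods is hypothetical and stands for whatever eye-tracking system is being evaluated; the other names follow the earlier sketches):

import cv2

def process_video(video_path, commands, cal_targets, test_targets, tracker):
    # commands     -- {action: frame_number} read from the commands file
    # cal_targets  -- calibration target positions from calpoints.txt
    # test_targets -- test target positions from testpoints.txt
    # tracker      -- hypothetical user-supplied eye-tracking system
    cap = cv2.VideoCapture(video_path)
    frame_no = 0
    estimates = []                                  # (frame_no, target_index, gaze)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame_no += 1                               # frame count starts at 1

        if frame_no == commands["SELECT"]:
            tracker.select(frame)                   # point selection / initialization
        elif commands["CALIBRATE"] <= frame_no < commands["CALIBRATE"] + 30 * len(cal_targets):
            target_idx, within = divmod(frame_no - commands["CALIBRATE"], 30)
            if within >= 10:                        # skip first 10 frames per target
                tracker.train(frame, cal_targets[target_idx])
        elif commands["TEST"] <= frame_no < commands["TEST"] + 20 * len(test_targets):
            target_idx, within = divmod(frame_no - commands["TEST"], 20)
            if within >= 10:
                estimates.append((frame_no, target_idx, tracker.estimate(frame)))
    cap.release()
    return estimates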

A sample case

Let's analyze the case of a commands file with the following contents:

100 SELECT
151 CALIBRATE
621 TEST

In this case, the application can carry out the initialization procedures on the 100th frame, then start calibrating on the 151st frame. The frames after this point should be used to train the application. Remembering that the first 10 frames for each target should be discarded, frames 161-180 should be used as the training input for the first target, 191-210 for the second target, and so on. The number of targets can be deduced from the corresponding experimental setup file (setups/[std|ext|std-cr|ipad]/calpoints.txt), and there will be 20 usable frames for each target.

In testing, a similar rule holds: testing starts on frame 621 and the ground truth for frames 631-640 corresponds to the first target defined in the experimental setup file (setups/[std|ext|std-cr|ipad]/testpoints.txt). During testing, the application is expected to produce 10 gaze estimations for each target.

The setup files (setups/*/calpoints.txt and setups/*/testpoints.txt) contain the target positions as fractions of the screen width and height (values between 0 and 1); these values should be mapped to a screen resolution of 1920×1080, which was used during the experiments. For an explanation of the error calculations in degrees, refer to the master's thesis.
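
A sketch of reading such a file and mapping the values to pixel coordinates is given below. It assumes each non-empty line holds the x and y fractions of one target separated by whitespace (adjust the parsing if the files use a different delimiter); the resulting lists can be passed as cal_targets / test_targets to the loop sketched above:

SCREEN_W, SCREEN_H = 1920, 1080        # resolution used during the experiments

def read_targets(path, screen_w=SCREEN_W, screen_h=SCREEN_H):
    # Read calpoints.txt / testpoints.txt and convert the normalized target
    # positions (fractions of screen width/height) to pixel coordinates.
    # Assumes one target per line as two whitespace-separated values in [0, 1].
    targets = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) < 2:
                continue
            targets.append((float(parts[0]) * screen_w, float(parts[1]) * screen_h))
    return targets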

Licence

As the subjects reserve the right to revoke their permission for inclusion of their videos in this dataset, the dataset or its components cannot be shared in any location other than the original webpage. Furthermore, the original videos or any other videos including portions of them cannot be used in public demonstrations and webpages without the consent of the dataset author.

Citation

Papers making use of this public dataset must cite the original PETMEI paper:

Ferhat, O., & Vilariño, F. "A Cheap Portable Eye-Tracker Solution for Common Setups", 3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction (PETMEI), 2013.

For further questions and doubts about the dataset, please contact Onur Ferhat (oferhat@cvc.uab.es).

References

  1. Ferhat, O. “Eye-Tracking with Webcam-Based Setups: Implementation of a Real-Time System and an Analysis of Factors Affecting Performance” Master Thesis, Universitat Autonoma de Barcelona, 2012 (PDF included in the dataset)
  2. Ferhat, O., & Vilariño, F. "A Cheap Portable Eye-Tracker Solution for Common Setups", 3rd International Workshop on Pervasive Eye Tracking and Mobile Eye-Based Interaction (PETMEI), 2013

Download

Please register to download the sets. If you have already done so, proceed to the database download section.