This gaze tracking project is a continuation of the previous real-time pupil detection project.
There are two main types of methods for finding the point of gaze with a remote video-based system: 2D regression-based methods and 3D model-based methods.
The 2D regression-based method requires a many-point calibration to establish polynomial equations that map the pupil-glint vector to a screen location. Large head movements significantly reduce the mapping accuracy.
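The sketch below illustrates the general idea of such a calibration, not this project's actual implementation: a second-order polynomial is fit by least squares from pupil-glint vectors recorded at known calibration targets, then used to map new vectors to screen coordinates. The function names and the choice of polynomial terms are illustrative assumptions.

```python
import numpy as np

def poly_features(v):
    """Second-order polynomial terms of pupil-glint vectors (vx, vy)."""
    v = np.atleast_2d(v)
    vx, vy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(vx), vx, vy, vx * vy, vx**2, vy**2])

def calibrate(pupil_glint_vectors, screen_points):
    """Fit mapping coefficients from N calibration samples (least squares)."""
    A = poly_features(pupil_glint_vectors)                      # N x 6 design matrix
    coeffs, *_ = np.linalg.lstsq(A, screen_points, rcond=None)  # 6 x 2 coefficients
    return coeffs

def map_gaze(coeffs, pupil_glint_vector):
    """Map a new pupil-glint vector to an estimated screen location (x, y)."""
    return (poly_features(pupil_glint_vector) @ coeffs)[0]
```

With, say, a 9-point calibration grid, `calibrate` would be called once with the recorded vectors and target positions, and `map_gaze` applied per frame afterwards.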
The advantage of the 3D model-based method is the simple calibration required to find subject-specific eye parameters. This method overcomes the degradation in accuracy caused by large head movements.
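As a rough sketch of the final step of such a method (under assumed geometry, not this project's code): once the subject-specific parameters from calibration have been used to reconstruct the visual axis as a ray from the cornea center, the point of gaze is the intersection of that ray with the known screen plane. All variable names and numeric values below are placeholders.

```python
import numpy as np

def intersect_ray_plane(origin, direction, plane_point, plane_normal):
    """Intersect the gaze ray with the screen plane; returns the 3D gaze point."""
    denom = np.dot(direction, plane_normal)
    if abs(denom) < 1e-9:
        raise ValueError("Gaze ray is parallel to the screen plane")
    t = np.dot(plane_point - origin, plane_normal) / denom
    return origin + t * direction

# Placeholder values in millimetres, expressed in an assumed camera coordinate frame:
cornea_center = np.array([0.0, 0.0, 600.0])    # reconstructed from the corneal glints
visual_axis   = np.array([0.05, -0.02, -1.0])  # optical axis corrected by the kappa angle
visual_axis  /= np.linalg.norm(visual_axis)
screen_origin = np.array([0.0, 0.0, 0.0])      # a point on the screen plane
screen_normal = np.array([0.0, 0.0, 1.0])      # screen plane normal

gaze_point = intersect_ray_plane(cornea_center, visual_axis, screen_origin, screen_normal)
```

Because the cornea center and visual axis are re-estimated in 3D every frame, this mapping does not depend on the head staying where it was during calibration, which is why the method tolerates large head movements.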