This repository has four major components:
- Gaze
  - Pupil and gaze tracking models and scripts are included here.
  - Well-known methods that are either implemented or under implementation:
    - pupil-labs
    - RITnet
    - DeepVog
    - EllSeg I and II (to be added!)
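
  As an illustration, a minimal sketch of the 2D pupil-detection step, assuming the `pupil-detectors` package published by Pupil Labs; the eye-video path is hypothetical:

  ```python
  # Minimal 2D pupil detection sketch using Pupil Labs' pupil-detectors
  # package (pip install pupil-detectors). The eye video path is hypothetical.
  import cv2
  from pupil_detectors import Detector2D

  detector = Detector2D()
  cap = cv2.VideoCapture("eye0.mp4")  # hypothetical eye video

  while True:
      ok, frame = cap.read()
      if not ok:
          break
      gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
      result = detector.detect(gray)
      # result is a dict with a fitted ellipse and a confidence in [0, 1]
      print(result["confidence"], result["ellipse"]["center"])

  cap.release()
  ```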
- Odometry
  - The pipeline for bringing T265 head-pose tracking data into a biologically correct frame of reference.
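
  A rough sketch of the idea, assuming the `pyrealsense2` bindings; the mounting rotation `R_head_t265` below is a placeholder, not the repo's actual calibration:

  ```python
  # Read one T265 pose sample and re-express it in a head-centered frame.
  # R_head_t265 is a hypothetical mounting rotation, not this repo's calibration.
  import numpy as np
  import pyrealsense2 as rs
  from scipy.spatial.transform import Rotation as R

  pipe = rs.pipeline()
  cfg = rs.config()
  cfg.enable_stream(rs.stream.pose)
  pipe.start(cfg)
  try:
      frames = pipe.wait_for_frames()
      pose = frames.get_pose_frame()
      if pose:
          d = pose.get_pose_data()
          t = np.array([d.translation.x, d.translation.y, d.translation.z])
          q = R.from_quat([d.rotation.x, d.rotation.y, d.rotation.z, d.rotation.w])
          R_head_t265 = R.from_euler("x", -90, degrees=True)  # assumed mounting
          print(R_head_t265.apply(t), (R_head_t265 * q).as_euler("xyz", degrees=True))
  finally:
      pipe.stop()
  ```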
- Scene
  - Marker detection pipeline (see the first sketch below):
    - Pupil Labs' circular marker
    - 8x9 checkerboard
    - AprilTag markers
  - Mediapipe models (see the second sketch below) for:
    - Hand tracking
    - Face tracking
    - Face mesh tracking
    - Body pose tracking
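
  A minimal sketch of the checkerboard branch with OpenCV, assuming 8x9 refers to the grid of inner corners and using a hypothetical frame path:

  ```python
  # Detect an 8x9 checkerboard in a world-camera frame with OpenCV.
  # pattern_size assumes 8x9 inner corners; adjust if 8x9 counts squares.
  import cv2

  pattern_size = (8, 9)
  img = cv2.imread("world_frame.png")  # hypothetical frame
  gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

  found, corners = cv2.findChessboardCorners(gray, pattern_size)
  if found:
      # Refine corner locations to sub-pixel accuracy
      criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)
      corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
      cv2.drawChessboardCorners(img, pattern_size, corners, found)
  ```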
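  And a minimal hand-tracking sketch with the Mediapipe solutions API, again with a hypothetical frame path:

  ```python
  # Run Mediapipe hand tracking on a single frame and print normalized landmarks.
  import cv2
  import mediapipe as mp

  img = cv2.imread("world_frame.png")  # hypothetical frame
  with mp.solutions.hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
      results = hands.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
      if results.multi_hand_landmarks:
          for hand in results.multi_hand_landmarks:
              # 21 landmarks per hand, normalized to [0, 1] image coordinates
              for lm in hand.landmark:
                  print(lm.x, lm.y, lm.z)
  ```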
- Visualization
  - Pupil tracking visualization:
    - Pupil-labs pipeline: the ellipse and confidence values
    - RITnet pipeline: pupil, iris, sclera, and skin masks, plus ellipse-fit confidence values
    - DeepVog pipeline: pupil region, 3D gaze vector, and network output mask
  - Gaze-overlaid video (see the sketch below):
    - World video and superimposed eye videos plus the gaze point
    - World video and the detected marker positions
    - Detected object bounding boxes (YOLO)
    - Objects detected by Mediapipe
  - Eye image annotation tool:
    - This tool is used to create semantic segmentation masks for eye images as ground truth.
The documentation for this is under development. Follow the steps here, which guide you step by step. (Ping Kamran if you don't have access.)
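
A minimal sketch of the gaze-overlaid video: superimpose the eye image and draw the gaze point on each world frame. The file names and the gaze format (one normalized x, y pair per world frame) are assumptions for illustration:

```python
# Overlay an eye-video thumbnail and the gaze point on the world video.
# File names and the (N, 2) normalized gaze array are hypothetical.
import cv2
import numpy as np

world = cv2.VideoCapture("world.mp4")
eye = cv2.VideoCapture("eye0.mp4")
gaze = np.load("gaze_norm.npy")  # (N, 2), x and y in [0, 1]

i = 0
while True:
    ok_w, w_frame = world.read()
    ok_e, e_frame = eye.read()
    if not (ok_w and ok_e) or i >= len(gaze):
        break
    h, w = w_frame.shape[:2]
    # Superimpose a small eye image in the top-left corner
    w_frame[: h // 4, : w // 4] = cv2.resize(e_frame, (w // 4, h // 4))
    gx, gy = gaze[i]
    cv2.circle(w_frame, (int(gx * w), int(gy * h)), 12, (0, 0, 255), 2)
    cv2.imshow("gaze overlay", w_frame)
    if cv2.waitKey(1) == 27:  # Esc to quit
        break
    i += 1

world.release()
eye.release()
cv2.destroyAllWindows()
```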
To do:

- Data Loading
  - Improve performance during loading
- Add AprilTag Detection Pipeline
  - Merge from local branch
- Add Intrinsics and Extrinsics Pipeline
  - Merge from local branch and refactor as one script
- Add 3D Gaze Calibration Pipeline
  - Incorporate Pupil Labs' latest pye3d into the repo
- DeepVog Pupil Tracking Pipeline
- RITnet Pupil Tracking Pipeline
  - Merge from local repo
- Data Saving
  - Run-time performance improvement:
    - Test and use other formats (e.g., pandas, h5py); see the sketch after this list
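
For the data-saving item, a small sketch comparing the two candidate formats; the column names and sampling rate are illustrative assumptions:

```python
# Compare pandas/HDF5 and raw h5py for storing per-frame gaze data.
# Column names and the 200 Hz rate are illustrative assumptions.
import numpy as np
import pandas as pd
import h5py

n = 1000
df = pd.DataFrame({
    "timestamp": np.arange(n) / 200.0,
    "gaze_x": np.random.rand(n),
    "gaze_y": np.random.rand(n),
    "confidence": np.random.rand(n),
})

# Option 1: pandas -> HDF5 (requires the pytables package)
df.to_hdf("gaze.h5", key="gaze", mode="w")

# Option 2: raw h5py datasets (no pytables dependency, finer control)
with h5py.File("gaze_raw.h5", "w") as f:
    for col in df.columns:
        f.create_dataset(col, data=df[col].to_numpy(), compression="gzip")
```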