This project evaluates single object tracking by applying the SiamMask tracker to the Audi Autonomous Driving Dataset (A2D2) and to KITTI.
The goal was to test how well SiamMask tracks individual vehicles in these datasets without explicit fine-tuning.
For details, see the full report.
The GUI in this repository can be used to visualize the tracker running on the A2D2 dataset:
To download and install the repository, follow these steps:
- Download the project
git clone https://github.com/samukie/single-object-tracking.git
cd single-object-tracking
- Set up the environment
conda create -n sot python=3.7
conda activate sot
conda install pip opencv pyqt
pip install -r requirements.txt
bash make.sh
- Download a subset of the A2D2
- Start the GUI
cd evaluation
python GUI.py --config "../SiamMask/experiments/siammask_sharp/config_davis.json" \
--resume "../SiamMask/experiments/siammask_sharp/SiamMask_DAVIS.pth" --dataset "path/to/a2d2_root" \
--object_lookup "path/to/a2d2_class_list.json"
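The `--object_lookup` file is the `class_list.json` that ships with A2D2, which maps segmentation colors to class names. If you want to sanity-check it before starting the GUI, a few lines of Python suffice (illustrative only; the path below is the same placeholder as in the command above):

```python
# Illustrative only: inspect the A2D2 class lookup passed via --object_lookup.
import json

with open("path/to/a2d2_class_list.json") as f:  # placeholder path
    lookup = json.load(f)

# Print the first few entries (A2D2 maps hex color codes to class names).
for color, name in list(lookup.items())[:5]:
    print(color, "->", name)
```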
The tracker can be evaluated against the human-annotated semantic and instance segmentation images provided with the datasets:
To evaluate SiamMask on the A2D2 or KITTI dataset, specify your own config file and execute:
python evaluate_dataset.py --eval_config configs/your_config.yaml
There are two evaluation modes: IoU, which scores the overlap between the predicted and ground-truth masks, and end-of-track detection, which checks whether the tracker recognizes when the target object leaves the scene.
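As a rough illustration of the IoU mode (a sketch, not the repository's actual evaluation code), the per-frame overlap between a predicted and a ground-truth instance mask can be computed like this:

```python
import numpy as np

def mask_iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection over Union between two binary masks of equal shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:  # both masks empty: define IoU as 0 to avoid division by zero
        return 0.0
    return float(np.logical_and(pred, gt).sum() / union)
```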
In the A2D2, only a subset of the video frames is annotated with segmentation masks.
This leads to failure cases in which the tracker switches to another object when the visual gap between two consecutive annotated frames is too large:
This problem does not occur when applying the tracker to the KITTI dataset: