# LFB

## Introduction

[ALGORITHM]

```BibTeX
@inproceedings{wu2019long,
  title={Long-term feature banks for detailed video understanding},
  author={Wu, Chao-Yuan and Feichtenhofer, Christoph and Fan, Haoqi and He, Kaiming and Krahenbuhl, Philipp and Girshick, Ross},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={284--293},
  year={2019}
}
```

## Model Zoo

### AVA2.1

| Model | Modality | Pretrained | Backbone | Input | gpus | Resolution | mAP | log | json | ckpt |
| :--- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| lfb_nl_kinetics_pretrained_slowonly_r50_4x16x1_20e_ava_rgb.py | RGB | Kinetics-400 | slowonly_r50_4x16x1 | 4x16 | 8 | short-side 256 | 24.11 | log | json | ckpt |
| lfb_avg_kinetics_pretrained_slowonly_r50_4x16x1_20e_ava_rgb.py | RGB | Kinetics-400 | slowonly_r50_4x16x1 | 4x16 | 8 | short-side 256 | 20.17 | log | json | ckpt |
| lfb_max_kinetics_pretrained_slowonly_r50_4x16x1_20e_ava_rgb.py | RGB | Kinetics-400 | slowonly_r50_4x16x1 | 4x16 | 8 | short-side 256 | 22.15 | log | json | ckpt |
- Notes:

  1. The **gpus** column indicates the number of GPUs we used to obtain the checkpoint. According to the Linear Scaling Rule, you may set the learning rate proportional to the total batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 videos/gpu.
  2. We use `slowonly_r50_4x16x1` instead of the I3D-R50-NL in the original paper as the backbone of LFB, but achieve a similar improvement (ours: 20.1 -> 24.11 vs. authors: 22.1 -> 25.8).
  3. Because the long-term features are randomly sampled during testing, the test accuracy may vary slightly between runs.
  4. Before training or testing LFB, you need to infer the feature bank with `lfb_slowonly_r50_ava_infer.py`. For more details on inferring the feature bank, refer to the Train section.
  5. You can also download a pre-computed long-term feature bank from AVA_train_val_float32_lfb or AVA_train_val_float16_lfb, and then put it under `lfb_prefix_path`.
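The Linear Scaling Rule in note 1 can be sketched as a small helper (a sketch only; `scale_lr` and its arguments are illustrative names, not part of the codebase):

```python
def scale_lr(base_lr, base_videos_per_batch, new_videos_per_batch):
    """Scale the learning rate linearly with the total batch size
    (Linear Scaling Rule)."""
    return base_lr * new_videos_per_batch / base_videos_per_batch


# Baseline from the note: lr=0.01 at 4 GPUs x 2 videos/gpu = 8 videos/batch.
# Moving to 16 GPUs x 4 videos/gpu = 64 videos/batch gives lr=0.08.
print(scale_lr(0.01, 4 * 2, 16 * 4))  # → 0.08
```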

## Train

### a. Infer long-term feature bank for training

Before training or testing LFB, you need to infer the long-term feature bank first.

Specifically, run the test on the training, validation, and testing datasets with the config file `lfb_slowonly_r50_ava_infer.py`. By default, this config only infers the feature bank of the training dataset; set `dataset_mode = 'val'` in the config file to infer the feature bank of the validation dataset. The shared head `LFBInferHead` will generate the feature bank.

A long-term feature bank file of the AVA training and validation datasets with float32 precision occupies 3.3 GB. If the features are stored with float16 precision, the feature bank occupies 1.65 GB.
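The halving follows directly from the element size: casting features from float32 to float16 halves the storage at some cost in precision. A minimal NumPy sketch (the shape `1000 x 2048` is purely illustrative, not the actual bank size):

```python
import numpy as np

# A toy batch of "features": 1000 vectors of dimension 2048 (illustrative).
feats32 = np.random.rand(1000, 2048).astype(np.float32)
feats16 = feats32.astype(np.float16)

print(feats32.nbytes)  # 1000 * 2048 * 4 bytes = 8192000
print(feats16.nbytes)  # 1000 * 2048 * 2 bytes = 4096000
```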

You can use the following commands to infer the feature bank of the AVA training and validation datasets; the feature banks will be stored at `lfb_prefix_path/lfb_train.pkl` and `lfb_prefix_path/lfb_val.pkl`.

```shell
# set `dataset_mode = 'train'` in lfb_slowonly_r50_ava_infer.py
python tools/test.py configs/detection/lfb/lfb_slowonly_r50_ava_infer.py \
    checkpoints/YOUR_BASELINE_CHECKPOINT.pth --eval mAP

# set `dataset_mode = 'val'` in lfb_slowonly_r50_ava_infer.py
python tools/test.py configs/detection/lfb/lfb_slowonly_r50_ava_infer.py \
    checkpoints/YOUR_BASELINE_CHECKPOINT.pth --eval mAP
```

We use the `slowonly_r50_4x16x1` checkpoint from `slowonly_kinetics_pretrained_r50_4x16x1_20e_ava_rgb` to infer the feature bank.
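Once generated, the `.pkl` feature bank can be sanity-checked by loading it back with `pickle`. The sketch below assumes a nested-dict layout (`{video_id: {timestamp: [feature vectors]}}`); the exact layout produced by `LFBInferHead` may differ across versions, so it builds and reads a dummy bank rather than the real file:

```python
import pickle
import tempfile

import numpy as np

# Dummy bank with the ASSUMED layout: {video_id: {timestamp: [features]}}.
dummy_bank = {
    "vid_0001": {902: [np.zeros(2048, dtype=np.float16)]},
}

with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(dummy_bank, f)
    path = f.name

# Load it back, as you might to inspect lfb_prefix_path/lfb_train.pkl.
with open(path, "rb") as f:
    bank = pickle.load(f)

print(sorted(bank))                    # video ids present in the bank
print(bank["vid_0001"][902][0].dtype)  # float16 for a half-precision bank
```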

### b. Train LFB

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the LFB model on AVA with the half-precision long-term feature bank.

```shell
python tools/train.py configs/detection/lfb/lfb_nl_kinetics_pretrained_slowonly_r50_4x16x1_20e_ava_rgb.py \
  --validate --seed 0 --deterministic
```

For more details and optional arguments, refer to the Training setting part in getting_started.

## Test

### a. Infer long-term feature bank for testing

Before testing LFB, you also need to infer the long-term feature bank first. If you have already generated the feature bank files, you can skip this step.

The steps are the same as in the Infer long-term feature bank for training part of the Train section.

### b. Test LFB

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the LFB model on AVA with the half-precision long-term feature bank and dump the result to a csv file.

```shell
python tools/test.py configs/detection/lfb/lfb_nl_kinetics_pretrained_slowonly_r50_4x16x1_20e_ava_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval mAP --out results.csv
```

For more details, refer to the Test a dataset part in getting_started.