
# TSN

## Introduction

[ALGORITHM]

```BibTeX
@inproceedings{wang2016temporal,
  title={Temporal segment networks: Towards good practices for deep action recognition},
  author={Wang, Limin and Xiong, Yuanjun and Wang, Zhe and Qiao, Yu and Lin, Dahua and Tang, Xiaoou and Van Gool, Luc},
  booktitle={European conference on computer vision},
  pages={20--36},
  year={2016},
  organization={Springer}
}
```

## Model Zoo

### UCF-101

| config | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x3_75e_ucf101_rgb [1] | 8 | ResNet50 | ImageNet | 83.03 | 96.78 | 8332 | ckpt | log | json |

[1] We report the performance on UCF-101 split1.

### HMDB51

| config | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x8_50e_hmdb51_imagenet_rgb | 8 | ResNet50 | ImageNet | 48.95 | 80.19 | 21535 | ckpt | log | json |
| tsn_r50_1x1x8_50e_hmdb51_kinetics400_rgb | 8 | ResNet50 | Kinetics400 | 56.08 | 84.31 | 21535 | ckpt | log | json |
| tsn_r50_1x1x8_50e_hmdb51_mit_rgb | 8 | ResNet50 | Moments | 54.25 | 83.86 | 21535 | ckpt | log | json |

### Kinetics-400

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x3_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 70.60 | 89.26 | x | x | 4.3 (25x10 frames) | 8344 | ckpt | log | json |
| tsn_r50_1x1x3_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.42 | 89.03 | x | x | x | 8343 | ckpt | log | json |
| tsn_r50_dense_1x1x5_50e_kinetics400_rgb | 340x256 | 8x3 | ResNet50 | ImageNet | 70.18 | 89.10 | 69.15 | 88.56 | 12.7 (8x10 frames) | 7028 | ckpt | log | json |
| tsn_r50_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8x2 | ResNet50 | ImageNet | 70.91 | 89.51 | x | x | 10.7 (25x3 frames) | 8344 | ckpt | log | json |
| tsn_r50_320p_1x1x3_110e_kinetics400_flow | short-side 320 | 8x2 | ResNet50 | ImageNet | 55.70 | 79.85 | x | x | x | 8471 | ckpt | log | json |
| tsn_r50_320p_1x1x3_kinetics400_twostream [1: 1]* | x | x | ResNet50 | ImageNet | 72.76 | 90.52 | x | x | x | x | x | x | x |
| tsn_r50_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 71.80 | 90.17 | x | x | x | 8343 | ckpt | log | json |
| tsn_r50_320p_1x1x8_100e_kinetics400_rgb | short-side 320 | 8x3 | ResNet50 | ImageNet | 72.41 | 90.55 | x | x | 11.1 (25x3 frames) | 8344 | ckpt | log | json |
| tsn_r50_320p_1x1x8_110e_kinetics400_flow | short-side 320 | 8x4 | ResNet50 | ImageNet | 57.76 | 80.99 | x | x | x | 8473 | ckpt | log | json |
| tsn_r50_320p_1x1x8_kinetics400_twostream [1: 1]* | x | x | ResNet50 | ImageNet | 74.64 | 91.77 | x | x | x | x | x | x | x |
| tsn_r50_video_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | 8 | ResNet50 | ImageNet | 71.11 | 90.04 | x | x | x | 8343 | ckpt | log | json |
| tsn_r50_dense_1x1x8_100e_kinetics400_rgb | 340x256 | 8 | ResNet50 | ImageNet | 70.77 | 89.3 | 68.75 | 88.42 | 12.2 (8x10 frames) | 8344 | ckpt | log | json |
| tsn_r50_video_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 71.79 | 90.25 | x | x | x | 21558 | ckpt | log | json |
| tsn_r50_video_dense_1x1x8_100e_kinetics400_rgb | short-side 256 | 8 | ResNet50 | ImageNet | 70.40 | 89.12 | x | x | x | 21553 | ckpt | log | json |

Here, we use [1: 1] to indicate that we combine the RGB and flow scores with coefficients 1:1 to get the two-stream prediction (without applying softmax), as sketched below.
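
As a rough illustration (not part of the released tools; the array shapes and file names are hypothetical), this fusion is just a weighted sum of the per-class scores from the two streams:

```python
import numpy as np

# Hypothetical inputs: pre-softmax class scores from the RGB and flow models,
# each of shape (num_videos, num_classes).
rgb_scores = np.load('rgb_scores.npy')    # placeholder path
flow_scores = np.load('flow_scores.npy')  # placeholder path

# Two-stream fusion with 1:1 coefficients, applied before any softmax.
two_stream_scores = 1.0 * rgb_scores + 1.0 * flow_scores
predictions = two_stream_scores.argmax(axis=1)
```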

### Kinetics-400 Data Benchmark (8-gpus, ResNet50, ImageNet pretrain; 3 segments)

In the data benchmark, we compare:

1. Different data preprocessing methods: (1) resize the video to 340x256, (2) resize the short edge of the video to 320px, (3) resize the short edge of the video to 256px.
2. Different data augmentation methods: (1) MultiScaleCrop, (2) RandomResizedCrop.
3. Different testing protocols: (1) 25 frames x 10 crops, (2) 25 frames x 3 crops. A test-pipeline sketch for the two protocols follows the table below.
| config | resolution | training augmentation | testing protocol | top1 acc | top5 acc | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_multiscalecrop_340x256_1x1x3_100e_kinetics400_rgb | 340x256 | MultiScaleCrop | 25x10 frames | 70.60 | 89.26 | ckpt | log | json |
| x | 340x256 | MultiScaleCrop | 25x3 frames | 70.52 | 89.39 | x | x | x |
| tsn_r50_randomresizedcrop_340x256_1x1x3_100e_kinetics400_rgb | 340x256 | RandomResizedCrop | 25x10 frames | 70.11 | 89.01 | ckpt | log | json |
| x | 340x256 | RandomResizedCrop | 25x3 frames | 69.95 | 89.02 | x | x | x |
| tsn_r50_multiscalecrop_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | MultiScaleCrop | 25x10 frames | 70.32 | 89.25 | ckpt | log | json |
| x | short-side 320 | MultiScaleCrop | 25x3 frames | 70.54 | 89.39 | x | x | x |
| tsn_r50_randomresizedcrop_320p_1x1x3_100e_kinetics400_rgb | short-side 320 | RandomResizedCrop | 25x10 frames | 70.44 | 89.23 | ckpt | log | json |
| x | short-side 320 | RandomResizedCrop | 25x3 frames | 70.91 | 89.51 | x | x | x |
| tsn_r50_multiscalecrop_256p_1x1x3_100e_kinetics400_rgb | short-side 256 | MultiScaleCrop | 25x10 frames | 70.42 | 89.03 | ckpt | log | json |
| x | short-side 256 | MultiScaleCrop | 25x3 frames | 70.79 | 89.42 | x | x | x |
| tsn_r50_randomresizedcrop_256p_1x1x3_100e_kinetics400_rgb | short-side 256 | RandomResizedCrop | 25x10 frames | 69.80 | 89.06 | ckpt | log | json |
| x | short-side 256 | RandomResizedCrop | 25x3 frames | 70.48 | 89.89 | x | x | x |
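
The two testing protocols above differ only in the test-time cropping step. Below is a rough sketch in the MMAction2 config style (pipeline steps other than sampling, resizing and cropping are omitted, and exact values may differ across configs):

```python
# 25 frames x 10 crops: sample 25 segments, then apply ten-crop testing.
test_pipeline_25x10 = [
    dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=25, test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='TenCrop', crop_size=224),
    # ... normalization / formatting / collection steps omitted
]

# 25 frames x 3 crops: same sampling, but three-crop testing at a larger crop size.
test_pipeline_25x3 = [
    dict(type='SampleFrames', clip_len=1, frame_interval=1, num_clips=25, test_mode=True),
    dict(type='RawFrameDecode'),
    dict(type='Resize', scale=(-1, 256)),
    dict(type='ThreeCrop', crop_size=256),
    # ... normalization / formatting / collection steps omitted
]
```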

### Kinetics-400 OmniSource Experiments

| config | resolution | backbone | pretrain | w. OmniSource | top1 acc | top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x3_100e_kinetics400_rgb | 340x256 | ResNet50 | ImageNet | ❌ | 70.6 | 89.3 | 4.3 (25x10 frames) | 8344 | ckpt | log | json |
| x | 340x256 | ResNet50 | ImageNet | ✔️ | 73.6 | 91.0 | x | 8344 | ckpt | x | x |
| x | short-side 320 | ResNet50 | IG-1B [1] | ❌ | 73.1 | 90.4 | x | 8344 | ckpt | x | x |
| x | short-side 320 | ResNet50 | IG-1B [1] | ✔️ | 75.7 | 91.9 | x | 8344 | ckpt | x | x |

[1] We obtain the pre-trained model from torch-hub; the pre-trained model we used is resnet50_swsl.
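
For reference, a hedged sketch of how such a checkpoint can be pulled from torch-hub (the repository and model names below follow the public facebookresearch release and are assumptions, not part of the provided configs):

```python
import torch

# Load the semi-weakly supervised ResNet50 ('resnet50_swsl') via torch-hub.
model = torch.hub.load(
    'facebookresearch/semi-supervised-ImageNet1K-models', 'resnet50_swsl')
```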

### Kinetics-600

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_video_1x1x8_100e_kinetics600_rgb | short-side 256 | 8x2 | ResNet50 | ImageNet | 74.8 | 92.3 | 11.1 (25x3 frames) | 8344 | ckpt | log | json |

### Kinetics-700

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | inference_time(video/s) | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_video_1x1x8_100e_kinetics700_rgb | short-side 256 | 8x2 | ResNet50 | ImageNet | 61.7 | 83.6 | 11.1 (25x3 frames) | 8344 | ckpt | log | json |

### Something-Something V1

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x8_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 18.55 | 44.80 | 17.53 | 44.29 | 10978 | ckpt | log | json |
| tsn_r50_1x1x16_50e_sthv1_rgb | height 100 | 8 | ResNet50 | ImageNet | 15.77 | 39.85 | 13.33 | 35.58 | 5691 | ckpt | log | json |

### Something-Something V2

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | reference top1 acc | reference top5 acc | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x8_50e_sthv2_rgb | height 240 | 8 | ResNet50 | ImageNet | 32.97 | 63.62 | 30.56 | 58.49 | 10966 | ckpt | log | json |
| tsn_r50_1x1x16_50e_sthv2_rgb | height 240 | 8 | ResNet50 | ImageNet | 27.21 | 55.84 | 21.91 | 46.87 | 8337 | ckpt | log | json |

### Moments in Time

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_1x1x6_100e_mit_rgb | short-side 256 | 8x2 | ResNet50 | ImageNet | 26.84 | 51.6 | 8339 | ckpt | log | json |

### Multi-Moments in Time

| config | resolution | gpus | backbone | pretrain | mAP | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r101_1x1x5_50e_mmit_rgb | short-side 256 | 8x2 | ResNet101 | ImageNet | 61.09 | 10467 | ckpt | log | json |

### ActivityNet v1.3

| config | resolution | gpus | backbone | pretrain | top1 acc | top5 acc | gpu_mem(M) | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r50_320p_1x1x8_50e_activitynet_video_rgb | short-side 320 | 8x1 | ResNet50 | Kinetics400 | 73.93 | 93.44 | 5692 | ckpt | log | json |
| tsn_r50_320p_1x1x8_50e_activitynet_clip_rgb | short-side 320 | 8x1 | ResNet50 | Kinetics400 | 76.90 | 94.47 | 5692 | ckpt | log | json |
| tsn_r50_320p_1x1x8_150e_activitynet_video_flow | 340x256 | 8x2 | ResNet50 | Kinetics400 | 57.51 | 83.02 | 5780 | ckpt | log | json |
| tsn_r50_320p_1x1x8_150e_activitynet_clip_flow | 340x256 | 8x2 | ResNet50 | Kinetics400 | 59.51 | 82.69 | 5780 | ckpt | log | json |

### HVU

| config[1] | tag category | resolution | gpus | backbone | pretrain | mAP | HATNet[2] | HATNet-multi[2] | ckpt | log | json |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| tsn_r18_1x1x8_100e_hvu_action_rgb | action | short-side 256 | 8x2 | ResNet18 | ImageNet | 57.5 | 51.8 | 53.5 | ckpt | log | json |
| tsn_r18_1x1x8_100e_hvu_scene_rgb | scene | short-side 256 | 8 | ResNet18 | ImageNet | 55.2 | 55.8 | 57.2 | ckpt | log | json |
| tsn_r18_1x1x8_100e_hvu_object_rgb | object | short-side 256 | 8 | ResNet18 | ImageNet | 45.7 | 34.2 | 35.1 | ckpt | log | json |
| tsn_r18_1x1x8_100e_hvu_event_rgb | event | short-side 256 | 8 | ResNet18 | ImageNet | 63.7 | 38.5 | 39.8 | ckpt | log | json |
| tsn_r18_1x1x8_100e_hvu_concept_rgb | concept | short-side 256 | 8 | ResNet18 | ImageNet | 47.5 | 26.1 | 27.3 | ckpt | log | json |
| tsn_r18_1x1x8_100e_hvu_attribute_rgb | attribute | short-side 256 | 8 | ResNet18 | ImageNet | 46.1 | 33.6 | 34.9 | ckpt | log | json |
| - | Overall | short-side 256 | - | ResNet18 | ImageNet | 52.6 | 40.0 | 41.3 | - | - | - |

[1] For simplicity, we train a specific model for each tag category as the baselines for HVU.

[2] The performance of HATNet and HATNet-multi is taken from the paper Large Scale Holistic Video Understanding. The proposed HATNet is a two-branch convolutional network (one 2D branch, one 3D branch) and shares the same backbone (ResNet18) as ours. The inputs of HATNet are video clips of 16 or 32 frames (much longer than ours), while the input resolution is coarser (112 instead of 224). HATNet is trained on each individual task (each tag category), while HATNet-multi is trained on multiple tasks. Since there is no released code or model for HATNet, we simply include the performance reported in the original paper.

Notes:

1. The **gpus** indicates the number of GPUs we used to get the checkpoint. Note that the provided configs are written for 8 GPUs by default. According to the Linear Scaling Rule, you may set the learning rate proportional to the batch size if you use a different number of GPUs or videos per GPU, e.g., lr=0.01 for 4 GPUs x 2 videos/gpu and lr=0.08 for 16 GPUs x 4 videos/gpu. A short sketch of this scaling follows these notes.
2. The **inference_time** is obtained with this benchmark script, where we use the frame-sampling strategy of the test setting and only measure the model inference time, excluding IO and pre-processing time. For each setting, we use 1 GPU and set the batch size (videos per GPU) to 1 to calculate the inference time.
3. The values in the columns named "reference" are the results obtained by training with the original repo, using the same model settings.
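
As an illustration of the Linear Scaling Rule in note 1 (the helper below is hypothetical, not part of the released tools; the numbers are the example values from the note), the learning rate scales with the total batch size:

```python
def scaled_lr(base_lr, base_total_batch, num_gpus, videos_per_gpu):
    """Scale the learning rate linearly with the total batch size."""
    return base_lr * (num_gpus * videos_per_gpu) / base_total_batch

# Reference point: lr=0.01 at a total batch size of 8 (e.g. 4 GPUs x 2 videos/gpu).
print(scaled_lr(0.01, 8, 4, 2))   # 0.01
print(scaled_lr(0.01, 8, 16, 4))  # 0.08, matching the example in note 1
```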

For more details on data preparation, you can refer to the corresponding parts in the data preparation documentation.

## Train

You can use the following command to train a model.

```shell
python tools/train.py ${CONFIG_FILE} [optional arguments]
```

Example: train the TSN model on the Kinetics-400 dataset in deterministic mode, with periodic validation.

```shell
python tools/train.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    --work-dir work_dirs/tsn_r50_1x1x3_100e_kinetics400_rgb \
    --validate --seed 0 --deterministic
```

For more details, you can refer to the **Training setting** part in getting_started.

## Test

You can use the following command to test a model.

```shell
python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} [optional arguments]
```

Example: test the TSN model on the Kinetics-400 dataset and dump the result to a JSON file.

```shell
python tools/test.py configs/recognition/tsn/tsn_r50_1x1x3_100e_kinetics400_rgb.py \
    checkpoints/SOME_CHECKPOINT.pth --eval top_k_accuracy mean_class_accuracy \
    --out result.json
```

For more details, you can refer to the **Test a dataset** part in getting_started.