This repo holds the code of the paper "BSN: Boundary Sensitive Network for Temporal Action Proposal Generation", which was accepted at ECCV 2018.
- 2018.07.09: Code and features of BSN released
- 2018.07.02: Repository for BSN created
Temporal action proposal generation is an important yet challenging problem, since temporal proposals with rich action content are indispensable for analysing real-world videos with long duration and a high proportion of irrelevant content. This problem requires methods that not only generate proposals with precise temporal boundaries, but also retrieve proposals that cover ground-truth action instances with high recall and high overlap using relatively few proposals. To address these difficulties, we introduce an effective proposal generation method, named Boundary-Sensitive Network (BSN), which adopts a “local to global” fashion. Locally, BSN first locates temporal boundaries with high probabilities, then directly combines these boundaries into proposals. Globally, with the Boundary-Sensitive Proposal feature, BSN retrieves proposals by evaluating the confidence of whether a proposal contains an action within its region. We conduct experiments on two challenging datasets, ActivityNet-1.3 and THUMOS14, where BSN outperforms other state-of-the-art temporal action proposal generation methods with high recall and high temporal precision. Finally, further experiments demonstrate that by combining existing action classifiers, our method significantly improves the state-of-the-art temporal action detection performance.
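As a rough illustration of the “local to global” idea described above (not the code in this repo), the sketch below pairs candidate starting and ending locations whose boundary probabilities are high into candidate proposals; the threshold and the simple scoring rule are assumptions for illustration only.

```python
import numpy as np

def generate_candidate_proposals(start_prob, end_prob, threshold=0.5):
    """Toy sketch of BSN's local-to-global scheme: pick temporal locations with
    high starting/ending probability, then combine every valid (start, end)
    pair into a candidate proposal. Details differ from the repo."""
    starts = [t for t, p in enumerate(start_prob) if p > threshold]
    ends = [t for t, p in enumerate(end_prob) if p > threshold]
    proposals = []
    for s in starts:
        for e in ends:
            if e > s:  # a proposal must end after it starts
                proposals.append((s, e, start_prob[s] * end_prob[e]))
    return proposals

# Example on a 10-snippet probability sequence (made-up numbers):
# two candidate proposals, (1, 6, ...) and (1, 8, ...), are produced.
start_prob = np.array([0.1, 0.8, 0.2, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1])
end_prob   = np.array([0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.7, 0.1, 0.9, 0.1])
print(generate_candidate_proposals(start_prob, end_prob))
```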
This code is implemented in TensorFlow (>1.0), so please install TensorFlow first.
To accelerate training, all input feature data are first loaded into RAM, so around 7 GB of RAM is required.
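As an illustration of where this memory requirement comes from (all per-video features are kept resident in RAM), pre-loading the rescaled feature files described in the download instructions below might look like the following sketch; the directory layout and loading code are assumptions and may differ from the repo's data loader.

```python
import glob
import os

import pandas as pd

def preload_features(feature_dir="./data/activitynet_feature_cuhk/csv_mean_100"):
    """Load every per-video feature CSV into a dict keyed by video id,
    so that training never touches the disk after start-up."""
    features = {}
    for path in glob.glob(os.path.join(feature_dir, "*.csv")):
        video_id = os.path.splitext(os.path.basename(path))[0]
        features[video_id] = pd.read_csv(path).values
    return features

# features = preload_features()  # keeps the whole dataset in memory (~7 GB)
```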
To clone this repo with git, please use:
git clone https://github.com/wzmsltw/BSN-boundary-sensitive-network.git
We currently support experiments on the publicly available ActivityNet 1.3 dataset for temporal action proposal generation. To download this dataset, please use the official ActivityNet downloader to download videos from YouTube.
To extract visual features, we adopt the TSN model pretrained on the training set of ActivityNet, which is the challenge solution of the CUHK & ETHZ & SIAT team in the ActivityNet Challenge 2016. Please refer to the repo TSN-yjxiong to extract frames and optical flow, and to the repo anet2016-cuhk for the pretrained TSN model.
For convenience of training and testing, we rescale the feature length of all videos to the same length of 100, and we provide the rescaled features at Google Cloud or Baidu Yun. If you download the features from Baidu Yun, please run `cat zip_csv_mean_100.z* > csv_mean_100.zip` before unzipping. After downloading and unzipping, please put the `csv_mean_100` directory into `./data/activitynet_feature_cuhk/`.
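For reference, the sketch below shows one way a variable-length snippet-feature sequence could be rescaled to a fixed length of 100 with linear interpolation, which is the kind of preprocessing behind `csv_mean_100`; the exact resampling scheme and the CSV column layout used by the authors may differ.

```python
import numpy as np

def rescale_to_fixed_length(features, target_len=100):
    """Linearly interpolate a (T, D) snippet-feature sequence to (target_len, D)."""
    num_snippets, dim = features.shape
    old_x = np.linspace(0.0, 1.0, num_snippets)
    new_x = np.linspace(0.0, 1.0, target_len)
    return np.stack(
        [np.interp(new_x, old_x, features[:, d]) for d in range(dim)], axis=1
    )

# Toy demo with random two-stream features of arbitrary length and assumed dimension.
rescaled = rescale_to_fixed_length(np.random.rand(230, 400))
print(rescaled.shape)  # (100, 400)
```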
python TEM_train.py
We also provide a trained TEM model in `./model/TEM`.
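For reference, TEM takes the rescaled snippet-feature sequence of a video and predicts probability sequences for action starting, ending, and actionness. The toy stand-in below uses `tf.keras` (available in recent TensorFlow releases); the layer widths, kernel sizes, and API are assumptions and differ from the actual `TEM_train.py`.

```python
import tensorflow as tf

def build_toy_tem(feature_dim=400, temporal_len=100):
    """Toy stand-in for the Temporal Evaluation Module: temporal convolutions
    over the snippet-feature sequence, outputting three probability sequences
    (starting, ending, actionness). Hyper-parameters are assumptions."""
    inputs = tf.keras.Input(shape=(temporal_len, feature_dim))
    x = tf.keras.layers.Conv1D(512, 3, padding="same", activation="relu")(inputs)
    x = tf.keras.layers.Conv1D(512, 3, padding="same", activation="relu")(x)
    probs = tf.keras.layers.Conv1D(3, 1, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inputs, probs)

build_toy_tem().summary()  # output shape: (None, 100, 3)
```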
First, create directories for the outputs:
sh mkdir.sh
python TEM_test.py
sh run_pgm_proposal.sh
sh run_pgm_feature.sh
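Roughly speaking, the two PGM scripts above combine the TEM boundary probabilities into candidate proposals and then build a Boundary-Sensitive Proposal (BSP) feature for each proposal by sampling the actionness sequence inside the proposal and around its boundaries. The sketch below is a simplified illustration of such a BSP feature; the number of sample points, the region extents, and the interpolation are assumptions, so see the paper and the scripts for the actual settings.

```python
import numpy as np

def sample_region(actionness, region_start, region_end, num_points):
    """Sample the actionness curve at evenly spaced points in
    [region_start, region_end] with linear interpolation (simplified)."""
    xs = np.linspace(region_start, region_end, num_points)
    return np.interp(xs, np.arange(len(actionness)), actionness)

def toy_bsp_feature(actionness, start, end, num_center=16, num_boundary=8):
    """Toy Boundary-Sensitive Proposal feature: concatenated samples from the
    start region, the center (proposal) region, and the end region."""
    duration = end - start
    start_region = sample_region(actionness, start - duration / 10.0, start + duration / 10.0, num_boundary)
    center_region = sample_region(actionness, start, end, num_center)
    end_region = sample_region(actionness, end - duration / 10.0, end + duration / 10.0, num_boundary)
    return np.concatenate([start_region, center_region, end_region])

# Example: a length-100 actionness sequence and one proposal spanning snippets 20-60.
actionness = np.clip(np.sin(np.linspace(0, np.pi, 100)), 0.0, 1.0)
print(toy_bsp_feature(actionness, 20, 60).shape)  # (32,)
```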
python PEM_train.py
We also provide a trained PEM model in `./model/PEM`.
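For reference, PEM maps each proposal's BSP feature to a confidence score that is used to rank proposals. The toy stand-in below uses `tf.keras`; the feature dimension, hidden size, loss, and API are assumptions and may differ from `PEM_train.py`.

```python
import tensorflow as tf

def build_toy_pem(feature_dim=32, hidden_units=512):
    """Toy stand-in for the Proposal Evaluation Module: a small MLP mapping a
    Boundary-Sensitive Proposal feature to a confidence score in (0, 1)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(hidden_units, activation="relu", input_shape=(feature_dim,)),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="mean_squared_error")
    return model

build_toy_pem().summary()
```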
python PEM_test.py
python Post_processing.py
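Post-processing suppresses redundant proposals; the BSN paper uses Soft-NMS for this step. Below is a generic Gaussian Soft-NMS sketch over temporal proposals; the decay function and hyper-parameters are illustrative assumptions rather than the repo's settings.

```python
import numpy as np

def temporal_iou(p, q):
    """Temporal IoU between two (start, end) intervals."""
    inter = max(0.0, min(p[1], q[1]) - max(p[0], q[0]))
    union = (p[1] - p[0]) + (q[1] - q[0]) - inter
    return inter / union if union > 0 else 0.0

def soft_nms(proposals, scores, sigma=0.75, score_threshold=0.001):
    """Gaussian Soft-NMS: instead of removing overlapping proposals, decay their
    scores according to their overlap with the currently best proposal."""
    proposals, scores = list(proposals), list(scores)
    kept = []
    while proposals:
        best = int(np.argmax(scores))
        best_prop = proposals.pop(best)
        kept.append((best_prop, scores.pop(best)))
        scores = [
            s * np.exp(-(temporal_iou(best_prop, p) ** 2) / sigma)
            for p, s in zip(proposals, scores)
        ]
        remaining = [(p, s) for p, s in zip(proposals, scores) if s > score_threshold]
        proposals = [p for p, _ in remaining]
        scores = [s for _, s in remaining]
    return kept

# Example: two heavily overlapping proposals and one separate proposal.
print(soft_nms([(10, 50), (12, 52), (60, 90)], [0.9, 0.8, 0.7]))
```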
python eval.py
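The evaluation script reports proposal quality on ActivityNet in terms of average recall against the average number of proposals. As a reference for the core computation, here is a simplified recall-at-k sketch for a single tIoU threshold; the official evaluation covers multiple tIoU thresholds and averages over them.

```python
def temporal_iou(p, q):
    """Temporal IoU between two (start, end) intervals."""
    inter = max(0.0, min(p[1], q[1]) - max(p[0], q[0]))
    union = (p[1] - p[0]) + (q[1] - q[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_k(ground_truths, proposals, k=100, tiou_threshold=0.5):
    """Fraction of ground-truth instances covered by the top-k proposals of
    their video at the given tIoU threshold. `ground_truths` and `proposals`
    map video id -> list of (start, end); proposals are assumed to be sorted
    by confidence already."""
    covered, total = 0, 0
    for vid, gts in ground_truths.items():
        top_k = proposals.get(vid, [])[:k]
        for gt in gts:
            total += 1
            if any(temporal_iou(gt, p) >= tiou_threshold for p in top_k):
                covered += 1
    return covered / total if total else 0.0

# Toy example with a single video.
gt = {"v_example": [(5.0, 20.0)]}
pr = {"v_example": [(4.0, 21.0), (50.0, 60.0)]}
print(recall_at_k(gt, pr, k=2))  # 1.0
```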
Please cite the following paper if you find BSN useful in your research:
@inproceedings{BSN2018arXiv,
author = {Tianwei Lin and
Xu Zhao and
Haisheng Su and
Chongjing Wang and
Ming Yang},
title = {BSN: Boundary Sensitive Network for Temporal Action Proposal Generation},
booktitle = {European Conference on Computer Vision},
year = {2018},
}
For any questions, please file an issue or contact:
Tianwei Lin: wzmsltw@sjtu.edu.cn