This is the official implementation of Attention-based Residual Autoencoder for Video Anomaly Detection.
HSTforU: See HSTforU: Anomaly Detection in Aerial and Ground-based Videos with Hierarchical Spatio-Temporal Transformer for U-net.
MoGuP: See MoGuP: Motion-guided Prediction for Video Anomaly Detection.
- [6/01/2023] Training script of ASTNet is released.
- [5/25/2022] ASTNet is available online.
- [4/21/2022] Code of ASTNet is released!
- Linux or macOS
- Python 3
- PyTorch 1.7.0
The code can be run with Python 3.6 and above.
Install the required packages:
pip install -r requirements.txt
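After installing the requirements, a quick check (not part of the repo) confirms that the expected PyTorch build is visible:

```python
# Minimal environment check; assumes PyTorch was installed via requirements.txt.
import torch

print(torch.__version__)           # expected: 1.7.0
print(torch.cuda.is_available())   # True if a CUDA-capable GPU is visible
```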
Clone this repo:
git clone https://github.com/vt-le/astnet.git
cd ASTNet/ASTNet
We evaluate ASTNet on three benchmark datasets: UCSD Ped2, CUHK Avenue, and ShanghaiTech.
A dataset is a directory with the following structure:
$ tree data
ped2/avenue
├── training
│   └── frames
│       ├── ${video_1}$
│       │   ├── 000.jpg
│       │   ├── 001.jpg
│       │   └── ...
│       ├── ${video_2}$
│       │   ├── 000.jpg
│       │   └── ...
│       └── ...
├── testing
│   └── frames
│       ├── ${video_1}$
│       │   ├── 000.jpg
│       │   ├── 001.jpg
│       │   └── ...
│       ├── ${video_2}$
│       │   ├── 000.jpg
│       │   └── ...
│       └── ...
└── ped2/avenue.mat
shanghaitech
├── training
│   └── frames
│       ├── ${video_1}$
│       │   ├── 000.jpg
│       │   ├── 001.jpg
│       │   └── ...
│       ├── ${video_2}$
│       │   ├── 000.jpg
│       │   └── ...
│       └── ...
├── testing
│   └── frames
│       ├── ${video_1}$
│       │   ├── 000.jpg
│       │   ├── 001.jpg
│       │   └── ...
│       ├── ${video_2}$
│       │   ├── 000.jpg
│       │   └── ...
│       └── ...
└── test_frame_mask
    ├── 01_0014.npy
    ├── 01_0015.npy
    └── ...
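Before training or evaluation, it can be useful to confirm that a prepared dataset matches this layout. The snippet below is only an illustrative sketch (verify_layout is a hypothetical helper and data/ped2 an assumed path, neither is part of the repo):

```python
# Sketch: check that a dataset folder follows the expected layout shown above,
# i.e. <root>/{training,testing}/frames/<video>/<frame>.jpg
from pathlib import Path

def verify_layout(root):
    root = Path(root)
    for split in ("training", "testing"):
        frames_dir = root / split / "frames"
        assert frames_dir.is_dir(), f"missing {frames_dir}"
        for video_dir in sorted(p for p in frames_dir.iterdir() if p.is_dir()):
            n_frames = len(list(video_dir.glob("*.jpg")))
            print(f"{split}/{video_dir.name}: {n_frames} frames")

verify_layout("data/ped2")  # assumed dataset root
```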
Please first download the pre-trained models:
| Dataset | Pretrained Model |
|---|---|
| UCSD Ped2 | github / drive |
| CUHK Avenue | github / drive |
| ShanghaiTech | github / drive |
To evaluate a pretrained ASTNet on a dataset, run:
python test.py \
--cfg <path/to/config/file> \
--model-file </path/to/pre-trained/model>
For example, to evaluate ASTNet on Ped2:
python test.py \
--cfg config/ped2_wresnet.yaml \
--model-file pretrained.ped2.pth
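Results on these benchmarks are reported as frame-level AUC, which the evaluation script computes for you. For reference, the metric itself reduces to a ROC-AUC over per-frame anomaly scores and binary ground-truth labels (e.g. the .npy files under test_frame_mask for ShanghaiTech); the arrays below are placeholders, not real outputs:

```python
# Sketch: frame-level AUC from per-frame anomaly scores (higher = more anomalous)
# and binary ground truth (1 = anomalous frame). Placeholder values only.
import numpy as np
from sklearn.metrics import roc_auc_score

scores = np.array([0.1, 0.2, 0.8, 0.9, 0.3])  # per-frame anomaly scores (placeholder)
labels = np.array([0, 0, 1, 1, 0])            # per-frame ground truth (placeholder)

print("frame-level AUC:", roc_auc_score(labels, scores))
```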
To train ASTNet on a dataset, run:
python train.py \
--cfg <path/to/config/file>
For example, to train ASTNet on Ped2:
python train.py \
--cfg config/ped2_wresnet.yaml
Notes:
- To change other options, edit the corresponding config file (e.g. config/ped2_wresnet.yaml); its contents can be inspected as sketched below.
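The full set of options defined for a dataset can be inspected by loading its config file directly; a minimal sketch using PyYAML (the actual keys are whatever the files under config/ define):

```python
# Sketch: print every option defined in a config file (requires PyYAML).
import yaml

with open("config/ped2_wresnet.yaml") as f:
    cfg = yaml.safe_load(f)

for key, value in cfg.items():
    print(f"{key}: {value}")
```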
If you find our work useful for your research, please consider citing:
@article{le2023attention,
  title={Attention-based Residual Autoencoder for Video Anomaly Detection},
  author={Le, Viet-Tuan and Kim, Yong-Guk},
  journal={Applied Intelligence},
  volume={53},
  number={3},
  pages={3240--3254},
  year={2023},
  publisher={Springer}
}
For any questions, please file an issue or contact:
Viet-Tuan Le: vt-le@outlook.com