forked from MCG-NKU/AMT

(TBD) The MindSpore version of [CVPR 2023] AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation


[MindSpore-phase3] AMT

This repository contains a MindSpore implementation of the following paper:

AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation
Zhen Li*, Zuo-Liang Zhu*, Ling-Hao Han, Qibin Hou, Chun-Le Guo, Ming-Ming Cheng
(* denotes equal contribution)
Nankai University
IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023

[Paper] [Project Page] [Web demos]

Official PyTorch repository: MCG-NKU/AMT: Official code for "AMT: All-Pairs Multi-Field Transforms for Efficient Frame Interpolation" (CVPR 2023) (github.com)

The inference code and most of the training code have been ported to MindSpore.

Work in progress

  • Complete MindSpore implementation of the full codebase

Dependencies and Installation

Python 3.8
CUDA 11.6
MindSpore 2.2.11

  1. Clone Repo

    git clone https://github.com/Men1scus/AMT_MindSpore.git
  2. Create Conda Environment and Install Dependencies

    conda env create -f environment.yaml
    conda activate amt
    pip install https://ms-release.obs.cn-north-4.myhuaweicloud.com/2.2.11/MindSpore/unified/x86_64/mindspore-2.2.11-cp38-cp38-linux_x86_64.whl --trusted-host ms-release.obs.cn-north-4.myhuaweicloud.com -i https://pypi.tuna.tsinghua.edu.cn/simple
  3. Download the pretrained models for the demos from Pretrained Models and place them in the pretrained folder
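After step 3, the resulting layout would look roughly like this (only amt-s.ckpt is confirmed by the quick-demo command below; the other filenames are assumptions inferred from the model names):

```
AMT_MindSpore/
└── pretrained/
    ├── amt-s.ckpt
    ├── amt-l.ckpt   # assumed name
    └── amt-g.ckpt   # assumed name
```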

Quick Demo

Note that the selected pretrained model ([CKPT_PATH]) needs to match the config file ([CFG]).

To create a video demo, increase $n$ to slow down the motion in the video. (With $m$ input frames, [N_ITER] $=n$ yields $2^n\times (m-1)+1$ output frames.)

```shell
python demos/demo_2x.py -c [CFG] -p [CKPT] -n [N_ITER] -i [INPUT] -o [OUT_PATH] -r [FRAME_RATE]
# [INPUT] examples:
# -i can be a video / a glob pattern / multiple images / a folder containing input frames
# -i demo.mp4 (video) / img_*.png (glob pattern) / img0.png img1.png (images) / demo_input (folder)

# e.g. a simple usage
python demos/demo_2x.py -c cfgs/AMT-S.yaml -p pretrained/amt-s.ckpt -n 6 -i assets/quick_demo/img0.png assets/quick_demo/img1.png
```
  • Note: Pass --save_images to save the output images (saving slows down when there are many output frames).
  • Supported input types: a video / a glob pattern / multiple images / a folder containing input frames.
  • Results are written to [OUT_PATH] (default: results/2x).
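The relationship between [N_ITER] and the number of output frames can be sanity-checked with a few lines of Python (a sketch; the function name is illustrative, not part of the repo):

```python
def output_frames(m: int, n: int) -> int:
    """Number of output frames after n rounds of recursive 2x
    interpolation on m input frames: 2**n * (m - 1) + 1."""
    # Each round doubles the number of intervals between frames;
    # m input frames have m - 1 intervals, plus the final endpoint frame.
    return 2 ** n * (m - 1) + 1

# Two input frames with n = 6, as in the quick demo above:
print(output_frames(2, 6))  # -> 65
```

So the quick demo's two input images expand to 65 frames, which -r [FRAME_RATE] then turns into a slow-motion clip.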

Pretrained Models

These pretrained models are provided in .ckpt format, converted from the official PyTorch .pth checkpoints.

| Model | 🔗 Download Links | Config file | Trained on | Arbitrary/Fixed |
| :---: | :---: | :---: | :---: | :---: |
| AMT-S | [Baidu Cloud] [Google Drive] | [cfgs/AMT-S] | Vimeo90k | Fixed |
| AMT-L | [Baidu Cloud] [Google Drive] | [cfgs/AMT-L] | Vimeo90k | Fixed |
| AMT-G | [Baidu Cloud] [Google Drive] | [cfgs/AMT-G] | Vimeo90k | Fixed |
| AMT-S | [Baidu Cloud (TBD)] [Google Drive (TBD)] | [cfgs/AMT-S_gopro] | GoPro | Arbitrary |
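The .pth → .ckpt conversion mentioned above largely comes down to renaming parameter keys and re-saving the tensors. A framework-free sketch of the key-mapping step (the specific renames are illustrative assumptions about common PyTorch/MindSpore naming differences, not the repo's actual conversion script):

```python
def remap_key(name: str) -> str:
    """Translate one PyTorch parameter name into a MindSpore-style name.

    Two common differences handled here (illustrative; the actual
    mapping used for these checkpoints may differ):
      * torch.nn.DataParallel checkpoints prefix every key with "module."
      * BatchNorm running statistics are called running_mean/running_var
        in PyTorch but moving_mean/moving_variance in MindSpore.
    """
    if name.startswith("module."):
        name = name[len("module."):]
    name = name.replace(".running_mean", ".moving_mean")
    name = name.replace(".running_var", ".moving_variance")
    return name

# A full conversion would load the .pth with torch.load, remap every
# key, wrap the arrays as mindspore.Tensor, and write the result with
# mindspore.save_checkpoint.
print(remap_key("module.encoder.bn1.running_mean"))  # -> encoder.bn1.moving_mean
```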
