Paper Title: A Novel Double-Tail Generative Adversarial Network for Fast Photo Animation.
- 2024-10-31: Added a new style of AnimeGANv3: Portrait to Pixar. 🎃
- 2024-08-28: A repo more suitable for portrait-style inference based on the AnimeGANv3 models has been released. Highly recommended.
- 2023-12-10: Added a new AnimeGANv3 model for Portrait to Oil-painting style. Its onnx is available here.
- 2023-11-23: The code and the manuscript are released. 🦃
- 2023-10-31: Added three new styles of AnimeGANv3: Portrait to Cute, 8bit, and Sketch-0 style. 👻
- 2023-09-18: Added a new AnimeGANv3 model for Face to Kpop style.
- 2023-01-16: Added a new AnimeGANv3-photo.exe for inference with AnimeGANv3's onnx models.
- 2023-01-13: Added a new AnimeGANv3 model for Face to comic style.
- 2022-12-25: Added the tiny model (2.4 MB) of Nordic myth style and USA style 2.0. It can reach 50 FPS on an iPhone 14 with 512*512 input. 🎅
- 2022-11-24: Added a new AnimeGANv3 model for Face to Nordic myth style. 🦃
- 2022-11-06: Added a new AnimeGANv3 model for Face to Disney style V1.0.
- 2022-10-31: Added a new AnimeGANv3 model for Face to USA cartoon and Disney style V1.0. 🎃
- 2022-10-07: The USA cartoon style of AnimeGANv3 is integrated into Profile with Core ML. Install it from the App Store and have a try.
- 2022-09-26: The official online demo is integrated into Hugging Face Spaces with Gradio.
- 2022-09-24: Added a great new AnimeGANv3 model for Face to USA cartoon style.
- 2022-09-18: Updated the AnimeGANv3 model for Photo to Hayao style.
- 2022-08-01: Added a new AnimeGANv3 onnx model (Colab) for Face to Arcane style.
- 2022-07-13: Added a new AnimeGANv3 onnx model (Colab) for Face to portrait sketch.
- 2021-12-25: The paper of AnimeGANv3 will be released in 2022. 🎄
Download this repository and use AnimeGANv3's UI tool and the pre-trained *.onnx models to turn your photos into anime. 😊
🛠️ Installation

Clone the repo:

```bash
git clone https://github.com/TachibanaYoshino/AnimeGANv3.git
cd AnimeGANv3
```
Install the dependent packages:

```bash
pip install -r requirements.txt
```
Inference with *.onnx:

```bash
python deploy/test_by_onnx.py -i inputs/imgs/ -o output/results -m deploy/AnimeGANv3_Hayao_36.onnx
```
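For reference, deploy/test_by_onnx.py is essentially single-image inference with ONNX Runtime. Below is a minimal, illustrative sketch of that kind of inference, not the repo's actual script: it assumes an NHWC float32 input scaled to [-1, 1] and a fixed resize, and the example file names are placeholders; check deploy/test_by_onnx.py for the exact preprocessing the models expect.

```python
# Illustrative sketch only (not the repo's script): run an AnimeGANv3 *.onnx
# generator on one photo with ONNX Runtime. Assumes NHWC float32 input in
# [-1, 1]; see deploy/test_by_onnx.py for the authors' exact preprocessing.
import os
import cv2
import numpy as np
import onnxruntime as ort

def animate_photo(image_path, model_path, out_path, size=512):
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    inp_name = sess.get_inputs()[0].name
    out_name = sess.get_outputs()[0].name

    # Read the photo, resize it, and scale pixel values to [-1, 1].
    img = cv2.cvtColor(cv2.imread(image_path), cv2.COLOR_BGR2RGB)
    h, w = img.shape[:2]
    x = cv2.resize(img, (size, size)).astype(np.float32) / 127.5 - 1.0

    # Run the generator and map the output back to an 8-bit image.
    y = sess.run([out_name], {inp_name: x[None, ...]})[0][0]
    y = ((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
    y = cv2.resize(y, (w, h))

    os.makedirs(os.path.dirname(out_path) or ".", exist_ok=True)
    cv2.imwrite(out_path, cv2.cvtColor(y, cv2.COLOR_RGB2BGR))

# Example call; the file names are placeholders.
animate_photo("inputs/imgs/photo.jpg", "deploy/AnimeGANv3_Hayao_36.onnx",
              "output/results/photo_anime.jpg")
```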
Video to anime with *.onnx:

```bash
python tools/video2anime.py -i inputs/vid/1.mp4 -o output/results -m deploy/AnimeGANv3_Hayao_36.onnx
```
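tools/video2anime.py applies the same generator frame by frame. The sketch below only illustrates that per-frame idea with OpenCV under the same [-1, 1] assumption; it is not the repo's script, it drops the audio track, and it fixes the output resolution.

```python
# Illustrative per-frame sketch (not tools/video2anime.py): stylize a video by
# running each frame through the *.onnx generator. Audio is ignored here and
# the output resolution is fixed; the repo's script may handle both differently.
import cv2
import numpy as np
import onnxruntime as ort

def video_to_anime(src, dst, model_path, size=512):
    sess = ort.InferenceSession(model_path, providers=["CPUExecutionProvider"])
    inp = sess.get_inputs()[0].name
    out = sess.get_outputs()[0].name

    cap = cv2.VideoCapture(src)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    writer = cv2.VideoWriter(dst, cv2.VideoWriter_fourcc(*"mp4v"), fps, (size, size))

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Same [-1, 1] preprocessing assumed for single images above.
        x = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB), (size, size))
        x = x.astype(np.float32) / 127.5 - 1.0
        y = sess.run([out], {inp: x[None, ...]})[0][0]
        y = ((y + 1.0) * 127.5).clip(0, 255).astype(np.uint8)
        writer.write(cv2.cvtColor(y, cv2.COLOR_RGB2BGR))

    cap.release()
    writer.release()

# Example call with placeholder paths (the output directory must already exist).
video_to_anime("inputs/vid/1.mp4", "output/results/1_anime.mp4",
               "deploy/AnimeGANv3_Hayao_36.onnx")
```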
The paper was completed in 2022; the study of portrait stylization is an extension of it.
Some exhibits 👈
Demo videos (embedded on the repository page): 8_USA.mp4; Disney v1.9 vs. v2.0 (x_sound.Disney-v1.9.mp4, x_sound.Disney2.0.mp4); 10c.mp4; 16_sound.mp4; 6c-Kpop.mp4; 5_sound.Cute.mp4; 12_AnimeGANv3_Pixar_sounds.mp4; 11_AnimeGANv3_light_Sketch-0_soundsc6.mp4.

Image comparisons (input | Face | panoramic image) are also shown on the repository page.
Training

Edge-smooth the style images (an illustrative sketch of this step follows below):

```bash
cd tools && python edge_smooth.py --dataset Hayao --img_size 256
```

Generate the superpixel segmentation images:

```bash
cd tools && python visual_superPixel_seg_image.py
```

Train:

```bash
python train.py --style_dataset Hayao --init_G_epoch 5 --epoch 100
```
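The edge-smoothing step follows the edge-promoting idea used throughout the AnimeGAN line of work: copies of the style images are blurred around detected edges so that training can penalize generators that produce fuzzy outlines. The sketch below only illustrates that idea; the thresholds, kernel sizes, and example path are assumptions, not the values used in tools/edge_smooth.py.

```python
# Illustrative sketch of edge-smoothing (parameters are assumptions, not those
# of tools/edge_smooth.py): detect edges, dilate them, and Gaussian-blur the
# image only inside those edge regions.
import cv2
import numpy as np

def edge_smooth(img_bgr, ksize=5):
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)                          # binary edge map
    mask = cv2.dilate(edges, np.ones((ksize, ksize), np.uint8)) > 0
    blurred = cv2.GaussianBlur(img_bgr, (ksize, ksize), 0)     # fully blurred copy
    out = img_bgr.copy()
    out[mask] = blurred[mask]                                  # keep the blur only near edges
    return out

# Example: produce a smoothed-edge copy of one style image (placeholder path).
smoothed = edge_smooth(cv2.imread("dataset/Hayao/style/0.jpg"))
```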
If you find this repository helpful to your project, please consider citing:
```bibtex
@article{Liu2024dtgan,
  title   = {A Novel Double-Tail Generative Adversarial Network for Fast Photo Animation},
  author  = {Gang Liu and Xin Chen and Zhixiang Gao},
  journal = {IEICE Transactions on Information and Systems},
  volume  = {E107.D},
  number  = {1},
  pages   = {72--82},
  year    = {2024},
  doi     = {10.1587/transinf.2023EDP7061}
}
```
This repo is made freely available to academic and non-academic entities for non-commercial purposes such as academic research, teaching, and scientific publications. Permission is granted to use AnimeGANv3 provided that you agree to the license terms. For commercial use, please contact us by email to obtain an authorization letter.
Asher Chan asher_chan@foxmail.com