[2024.01.04] - FP16 inference is available, 3x faster! Now the demo can be deployed on a GPU with >8GB of memory. Enjoy!
[2024.01.04] - HuggingFace Online demo is available here!
[2023.12.28] - ModelScope Online demo is available here!
[2023.12.27] - 🧨We released the latest checkpoint (v1.1) and inference code; check it out on ModelScope (in Chinese).
[2023.12.05] - The paper is available here.
For more AIGC-related works from our group, please visit here.
- Release the model and inference code
- Provide a publicly accessible demo link
- Provide a free font file (🤔)
- Release tools for merging weights from community models or LoRAs
- Support AnyText in stable-diffusion-webui (🤔)
- Release AnyText-benchmark dataset and evaluation code
- Release AnyWord-3M dataset and training code
AnyText comprises a diffusion pipeline with two primary elements: an auxiliary latent module and a text embedding module. The former uses inputs like text glyph, position, and masked image to generate latent features for text generation or editing. The latter employs an OCR model for encoding stroke data as embeddings, which blend with image caption embeddings from the tokenizer to generate texts that seamlessly integrate with the background. We employed text-control diffusion loss and text perceptual loss for training to further enhance writing accuracy.
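To make the text embedding idea more concrete, here is a minimal PyTorch sketch of how OCR-derived stroke embeddings could be fused into the caption embeddings. The module, names, and shapes are illustrative assumptions for exposition, not the actual AnyText implementation:

```python
import torch
import torch.nn as nn

class TextEmbeddingModule(nn.Module):
    """Illustrative sketch: fuse OCR stroke embeddings into caption embeddings.
    Names and shapes are assumptions, not AnyText's real API."""

    def __init__(self, ocr_encoder: nn.Module, ocr_dim: int = 512, emb_dim: int = 768):
        super().__init__()
        self.ocr_encoder = ocr_encoder           # frozen OCR recognition model
        self.proj = nn.Linear(ocr_dim, emb_dim)  # map OCR features into caption-embedding space

    def forward(self, caption_emb, glyph_images, placeholder_positions):
        # caption_emb: (B, L, emb_dim) token embeddings of the image caption
        # glyph_images: (N, 1, H, W) rendered glyph lines, one per text to draw
        # placeholder_positions: list of (batch_idx, token_idx) slots to overwrite
        stroke_feat = self.ocr_encoder(glyph_images)  # (N, ocr_dim) stroke features
        stroke_emb = self.proj(stroke_feat)           # (N, emb_dim)
        fused = caption_emb.clone()
        for i, (b, t) in enumerate(placeholder_positions):
            fused[b, t] = stroke_emb[i]               # replace the placeholder token
        return fused  # passed on to condition the diffusion model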
# Install git (skip if already done)
conda install -c anaconda git
# Clone anytext code
git clone https://github.com/tyxsspa/AnyText.git
cd AnyText
# Prepare a font file; Arial Unicode MS is recommended, **you need to download it on your own**
mv your/path/to/arialuni.ttf ./font/Arial_Unicode.ttf
# Create a new environment and install packages as follows:
conda env create -f environment.yaml
conda activate anytext
[Recommended]: We have released online demos on ModelScope and HuggingFace!
AnyText includes two modes: Text Generation and Text Editing. Run the simple command below to perform inference in both modes and verify that the environment is installed correctly.
python inference.py
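For reference, the script drives the ModelScope pipeline roughly as sketched below. The task name, model ID, revision, parameter keys, and return signature follow the ModelScope model card at the time of writing and may change between revisions, so treat this as a sketch rather than the canonical script:

```python
from modelscope.pipelines import pipeline

# Task name / model ID / revision taken from the ModelScope model card; may change.
pipe = pipeline('my-anytext-task',
                model='damo/cv_anytext_text_generation_editing',
                model_revision='v1.1.3')

# Text generation: the mask image in draw_pos marks where each text line is rendered.
input_data = {
    "prompt": 'photo of caramel macchiato coffee on the table, with "Any" "Text" written on it using cream',
    "seed": 66273235,
    "draw_pos": 'example_images/gen9.png',
}
# Return signature as documented on the model card.
results, rtn_code, rtn_warning, debug_info = pipe(
    input_data, mode='text-generation', image_count=2, ddim_steps=20)
if rtn_code >= 0:
    print(f'generated {len(results)} image(s)')
```

Text Editing works the same way with a different mode and an input image to edit; see inference.py for the exact parameters.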
If you have an advanced GPU (with at least 8GB of memory), it is recommended to deploy our demo as below, which includes usage instructions, a user interface, and abundant examples.
export CUDA_VISIBLE_DEVICES=0 && python demo.py
FP16 inference is used by default, and a Chinese-to-English translation model is loaded to allow direct input of Chinese prompts (occupying ~4GB of GPU memory). This default behavior can be modified; for example, the following command enables FP32 inference and disables the translation model:
export CUDA_VISIBLE_DEVICES=0 && python demo.py --use_fp32 --no_translator
If FP16 is used and the translation model is disabled (or loaded on the CPU, see here), generating a single 512x512 image occupies ~7.5GB of GPU memory.
In addition, a different font file can be used (although the results may not be optimal):
export CUDA_VISIBLE_DEVICES=0 && python demo.py --font_path your/path/to/font/file.ttf
Please note that when executing inference for the first time, the model files will be downloaded to ~/.cache/modelscope/hub. If you need to modify the download directory, you can manually specify the environment variable MODELSCOPE_CACHE.
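If you drive the pipeline from your own Python code, the same override can be applied in-process. A minimal sketch, where the path is a placeholder and the variable is set before modelscope is first imported so that it takes effect:

```python
import os

# Placeholder path; set before importing modelscope so the custom cache dir is picked up.
os.environ["MODELSCOPE_CACHE"] = "/data/modelscope_cache"

from modelscope.pipelines import pipeline  # imported after the env var is set
```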
We use Sentence Accuracy (Sen. ACC) and Normalized Edit Distance (NED) to evaluate the accuracy of generated text, and the FID metric to assess the quality of generated images. Compared to existing methods, AnyText has a significant advantage in both Chinese and English text generation.
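For readers implementing their own evaluation, the two text metrics can be computed as sketched below; the predictions come from running an OCR model on the generated images. The helper names are ours, NED is assumed to be reported as a similarity (1 minus the normalized Levenshtein distance, so higher is better), and the actual AnyText-benchmark code may differ:

```python
def ned(pred: str, gt: str) -> float:
    """Normalized edit-distance similarity: 1 - Levenshtein(pred, gt) / max length.
    Assumed reported so that higher is better, as in common OCR benchmarks."""
    m, n = len(pred), len(gt)
    if max(m, n) == 0:
        return 1.0
    dp = list(range(n + 1))  # single DP row for the Levenshtein distance
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                           # deletion
                        dp[j - 1] + 1,                       # insertion
                        prev + (pred[i - 1] != gt[j - 1]))   # substitution
            prev = cur
    return 1.0 - dp[n] / max(m, n)

def sentence_accuracy(preds, gts):
    """Fraction of OCR'd generated text lines that exactly match the ground truth."""
    return sum(p == g for p, g in zip(preds, gts)) / len(gts)
```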
@article{tuo2023anytext,
title={AnyText: Multilingual Visual Text Generation And Editing},
author={Yuxiang Tuo and Wangmeng Xiang and Jun-Yan He and Yifeng Geng and Xuansong Xie},
year={2023},
eprint={2311.03054},
archivePrefix={arXiv},
primaryClass={cs.CV}
}