Arbitrary style transfer using Adaptive Instance Normalization

This project was built based on the following papers, chiefly Arbitrary Style Transfer in Real-Time with Adaptive Instance Normalization (Huang & Belongie, 2017), with a couple of modifications:

  • The AdaIN module has a trainable parameter (a learnable EPS) for better training stability (see the AdaIN sketch after this list)
  • Uses pretrained image-recovery weights in the decoder for faster training
  • Added histogram loss and variance loss to better guide the model
  • Added both the AdaIN style loss (comparing feature means and variances) and the Gram-matrix style loss (VincentStyleLoss); see the loss sketch after this list
  • Added new augmentations for both content and style images
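
For reference, a minimal sketch of an AdaIN module with a trainable epsilon, assuming PyTorch; the class name and tensor layout are illustrative, not the repo's actual code:

```python
import torch
import torch.nn as nn


class AdaIN(nn.Module):
    """Adaptive Instance Normalization with a trainable epsilon (the EPS tweak)."""

    def __init__(self, eps: float = 1e-5):
        super().__init__()
        # Learnable epsilon instead of a fixed constant, for training stability.
        self.eps = nn.Parameter(torch.tensor(eps))

    def _stats(self, feat: torch.Tensor):
        # Per-sample, per-channel mean/std over the spatial dims of (N, C, H, W).
        n, c = feat.shape[:2]
        flat = feat.view(n, c, -1)
        mean = flat.mean(dim=2).view(n, c, 1, 1)
        std = (flat.var(dim=2) + self.eps).sqrt().view(n, c, 1, 1)
        return mean, std

    def forward(self, content: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
        c_mean, c_std = self._stats(content)
        s_mean, s_std = self._stats(style)
        # Normalize content features, then shift/scale with style statistics.
        return s_std * (content - c_mean) / c_std + s_mean
```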
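
And a hedged sketch of the two style losses, computed on VGG feature maps of shape (N, C, H, W); VincentStyleLoss is rendered here as a standard Gram-matrix loss, which may differ from the repo's exact implementation:

```python
import torch
import torch.nn.functional as F


def adain_style_loss(pred: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    # AdaIN-style loss: match per-channel feature means and variances.
    p, s = pred.flatten(2), style.flatten(2)
    return (F.mse_loss(p.mean(dim=2), s.mean(dim=2))
            + F.mse_loss(p.var(dim=2), s.var(dim=2)))


def gram_style_loss(pred: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    # Gram-matrix loss: match channel-correlation statistics of the features.
    def gram(feat: torch.Tensor) -> torch.Tensor:
        n, c, h, w = feat.shape
        f = feat.view(n, c, h * w)
        return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

    return F.mse_loss(gram(pred), gram(style))
```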

Results:

(result images omitted; see the repository page for the stylized outputs)

  • Some of the results above use an alpha value higher than 1 (stronger emphasis on the style)

Training

git clone "https://github.com/vTuanpham/Style_transfer.git"
cd "Style_transfer"

Install the dependencies first; this might take a while:

pip install -r requirements.txt
  • Note: wandb must be pinned to version 0.13.9 (pip install wandb==0.13.9), since artifact logging was only tested with that version; newer versions raise an error when creating a new artifact instance

To train, modify the script src/scripts/train.sh and run:

bash src/scripts/train.sh 
  • This project has extensive support for wandb logging:

    Every epoch, model performance is evaluated by running inference on all the images in src/data/eval_dir


    Saved checkpoints are automatically logged to wandb as artifacts (see the sketch below)

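
The artifact upload presumably follows the standard wandb 0.13.x API; a minimal sketch, with the project name and file paths made up for illustration:

```python
import wandb

run = wandb.init(project="style-transfer")  # illustrative project name

# Log a saved checkpoint as a versioned wandb artifact.
artifact = wandb.Artifact(name="model-checkpoint", type="model")
artifact.add_file("checkpoints/last.pth")  # illustrative checkpoint path
run.log_artifact(artifact)
run.finish()
```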

Test

Modify the alpha value (higher means stronger emphasis on the style):

python src/inference.py -cpkt ".pth" --alpha 1 -c "./src/data/eval_dir/content/1.jpg" -s "./src/data/eval_dir/style/1.jpg"  

Or configure src/scripts/inference.sh and run:

bash src/scripts/inference.sh
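
For intuition, alpha typically interpolates between the content features and the AdaIN-stylized features before decoding; a sketch under that assumption (not the repo's exact inference code), which also shows why alpha > 1 over-emphasizes the style:

```python
import torch
import torch.nn as nn


def stylize(content_feat: torch.Tensor, style_feat: torch.Tensor,
            adain: nn.Module, decoder: nn.Module, alpha: float = 1.0) -> torch.Tensor:
    # alpha = 0 keeps the content features unchanged; alpha = 1 applies the
    # full AdaIN statistics; alpha > 1 extrapolates past the style stats,
    # giving the extra-stylized results mentioned above.
    t = adain(content_feat, style_feat)
    t = alpha * t + (1.0 - alpha) * content_feat
    return decoder(t)
```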

Leave a star ⭐ if you find this useful!

TODO:

  • Add a better model checkpoint
  • Easier inference
  • Add docs for all training args
  • Longer training might reduce noise?
  • Output images are a bit less saturated than the style
  • Sleep