VGG-based calligraphy style transfer for handwritten digits from the MNIST dataset. The goal of this project is to transfer a chosen calligraphy style onto hand-drawn digits and to test whether using multiple style images improves the transfer. First, we create a composite image of 10x10 random digits from the MNIST dataset. Then we create a number of calligraphy style images from which style features are extracted. We augment each individual style digit to obtain a richer feature representation and a better match to the MNIST digits. The extracted style features are applied to the original hand-drawn image so that the digits take on the selected style while preserving the original content as much as possible.
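For orientation, the sketch below outlines the Gatys-style optimization this project builds on: VGG19 activations provide the features, Gram matrices summarize style, and the output image is optimized to match the content features of the MNIST composite and the averaged style Grams of the calligraphy images. This is a minimal PyTorch sketch, not this repo's actual code: the layer indices, weights, and the Adam optimizer (the paper uses L-BFGS) are illustrative assumptions, and the grayscale MNIST digits would need to be replicated to 3 channels and normalized before being fed to VGG.

```python
import torch
import torch.nn.functional as F
from torchvision.models import vgg19

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(pretrained=True).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1 in torchvision's VGG19
CONTENT_LAYER = 21                  # conv4_2

def extract(img):
    """Run img through VGG19 and collect style/content activations."""
    style, content, x = [], None, img
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(feat):
    """Gram matrix of a (1, C, H, W) activation, normalized by size."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return f @ f.t() / (c * h * w)

def transfer(content_img, style_imgs, steps=300, style_weight=1e6):
    """content_img: (1, 3, H, W) tensor; style_imgs: list of such tensors."""
    # One style target per layer: Gram matrices are averaged over all
    # style images, so several calligraphy examples contribute at once.
    with torch.no_grad():
        grams = [[gram(f) for f in extract(s)[0]] for s in style_imgs]
        targets = [torch.stack(g).mean(0) for g in zip(*grams)]
        _, target_content = extract(content_img)
    img = content_img.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        style_feats, content_feat = extract(img)
        loss = F.mse_loss(content_feat, target_content)
        for f, t in zip(style_feats, targets):
            loss = loss + style_weight * F.mse_loss(gram(f), t)
        loss.backward()
        opt.step()
    return img.detach()
```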
Original paper:
Gatys, L. A., Ecker, A. S., & Bethge, M. (2016). Image Style Transfer Using Convolutional Neural Networks. CVPR 2016.
https://openaccess.thecvf.com/content_cvpr_2016/html/Gatys_Image_Style_Transfer_CVPR_2016_paper.html
Main dependencies:
Video of results:
Before running the code, extract train-images-idx3-ubyte.zip (the MNIST training images) into the /mnist folder.
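To make the setup concrete, here is a hedged sketch of loading the extracted file and building the 10x10 composite described above. It assumes the standard IDX3 byte layout and NumPy; the function names are illustrative, not taken from this repo.

```python
import numpy as np

def load_mnist_images(path="mnist/train-images-idx3-ubyte"):
    """Parse the IDX3 file: 16-byte header, then one byte per pixel."""
    with open(path, "rb") as f:
        f.read(16)  # skip magic number, image count, row count, column count
        data = np.frombuffer(f.read(), dtype=np.uint8)
    return data.reshape(-1, 28, 28)

def make_composite(images, grid=10, seed=None):
    """Tile a grid x grid selection of random digits into one image."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(images), size=grid * grid, replace=False)
    rows = [np.hstack(images[idx[r * grid:(r + 1) * grid]])
            for r in range(grid)]
    return np.vstack(rows)

composite = make_composite(load_mnist_images())  # shape (280, 280)
```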
Multiple Styles
3 styles used for style feature extraction (see the augmentation sketch after the table):
| MNIST image | Output | Output GIF | Style example |
| --- | --- | --- | --- |
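Since several style images are combined here, each style digit can also be augmented before its Gram matrices are averaged into a single style target, as mentioned above. The snippet below is a hypothetical augmentation pipeline using torchvision; the transform parameters are assumptions and may differ from this repo's actual code.

```python
import torch
from torchvision import transforms

# Small random rotations, shifts, and rescalings produce extra views of
# each calligraphy glyph before the style Grams are averaged.
augment = transforms.RandomAffine(degrees=10, translate=(0.1, 0.1),
                                  scale=(0.9, 1.1))

def augmented_style_batch(style_imgs, copies=8):
    """Return the original style images plus `copies` augmented views each."""
    batch = list(style_imgs)
    for img in style_imgs:
        batch.extend(augment(img) for _ in range(copies))
    return torch.stack(batch)
```

The augmented batch would then be fed through the `extract`/`gram` averaging shown in the first sketch.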
Style 2
1 style image used for feature extraction:
| MNIST image | Output | Style example |
| --- | --- | --- |
5 style images used for feature extraction:
| MNIST image | Output | Style example |
| --- | --- | --- |
100 style images used for feature extraction:
| MNIST image | Output | Style example |
| --- | --- | --- |