Note: The implementation in this repo is largely outdated. You may refer to my presentation slides for your own implementation.
This is based on one of our prior works on Sketch-to-Image Generation. Freehand sketches can be highly abstract (examples shown below), and learning representations of sketches is not trivial. In contrast to other cross-domain learning approaches such as pix2pix and CycleGAN, which use translation networks to learn a mapping from representations in one domain to those in another, our Sketch-to-Image Generation work learns a joint representation of sketch and image.
In this project we intend to add text constraints to sketch-to-image generation, where the text provides the content and the sketch controls the shape. So far, I have only tried attribute guidance rather than text embeddings as the additional condition, and this repo demonstrates results on Attribute-Guided Sketch-to-Image Generation. A minimal sketch of both ideas follows the examples below.
*(Example freehand sketches: face, bird, shoe.)*
* A few freehand sketches were collected from volunteers.
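A minimal, hypothetical sketch of the two ideas above, assuming the joint representation simply places the sketch and the photo side by side (consistent with the `--maskType right` option used at test time) and that the 18-dim attribute vector is concatenated with the generator's noise input; the actual model code in this repo may differ:

```python
import numpy as np

# Joint sketch-image representation: one training sample packs the sketch
# and the photo into a single image (sketch left, photo right), so a single
# GAN can learn their joint distribution.
sketch = np.random.rand(64, 64, 3).astype(np.float32)  # placeholder sketch
photo = np.random.rand(64, 64, 3).astype(np.float32)   # placeholder photo
joint = np.concatenate([sketch, photo], axis=1)        # shape (64, 128, 3)

# Attribute guidance: the binary attribute vector a is appended to the
# noise vector, so the generator input becomes z' = [z; a].
z_dim, attr_dim = 100, 18                  # attr_dim matches --text_vector_dim 18
z = np.random.uniform(-1.0, 1.0, z_dim).astype(np.float32)
attrs = np.random.randint(0, 2, attr_dim).astype(np.float32)
z_cond = np.concatenate([z, attrs])        # generator input, shape (118,)
```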
- Major Contributor: Shangzhe Wu (HKUST)
- Supervisors: Yu-Wing Tai (Tencent), Chi-Keung Tang (HKUST)
- Mentor in MLJejuCamp2017: Hyungjoo Cho
This project was developed within one month at Machine Learning Camp Jeju 2017. More interesting projects can be found in the final presentations and the program GitHub. The final presentation video can be watched here (partially). Camp 2018 has been launched, and more details can be found here.
- Python 3.5
- TensorFlow 0.12.1
- SciPy
- Clone this repo:
git clone https://github.com/elliottwu/sText2Image.git
cd sText2Image
- Download preprocessed CelebA data (~3GB):
sh ./datasets/download_dataset.sh
- Train the model:
sh train.sh
- To monitor training using TensorBoard, run the following in your terminal, then open localhost:8888 in your browser:
tensorboard --logdir=logs_face --port=8888
- Test the model:
sh test.sh
- Download pretrained model:
sh download_pretrained_model.sh
- Test pretrained model on CelebA dataset:
python test.py ./datasets/celeba/test/* --checkpointDir checkpoints_face_pretrained --maskType right --batchSize 64 --lam1 100 --lam2 1 --lam3 0.1 --lr 0.001 --nIter 1000 --outDir results_face_pretrained --text_vector_dim 18 --text_path datasets/celeba/imAttrs.pkl
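The `--lam*`, `--lr`, and `--nIter` flags control a test-time optimization in the style of dcgan-completion, which this code builds on: the latent vector is optimized so that the generated joint image agrees with the given sketch half. Below is a rough, hypothetical sketch of that loop; `G` and `grad_fn` stand in for the repo's actual graph, and the exact loss term each `lam` weights is an assumption:

```python
import numpy as np

def complete(z0, y, mask, G, grad_fn, lr=0.001, n_iter=1000):
    """Optimize z (generator fixed) so that mask * G(z) matches mask * y.

    y is the joint image with only the sketch half known; mask selects that
    half (cf. --maskType right). The schematic objective is something like
        lam1 * ||mask * (G(z) - y)||_1 + lam2 * (realism term) + ...
    and grad_fn returns its gradient with respect to z.
    """
    z = z0.copy()
    for _ in range(n_iter):            # --nIter
        z -= lr * grad_fn(z, y, mask)  # gradient step on z only (--lr)
        z = np.clip(z, -1.0, 1.0)      # keep z inside the prior's support
    return G(z)                        # completed joint image
```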
We test our framework on three kinds of data: faces (CelebA), birds (CUB), and flowers (Oxford-102). So far, we have only experimented with face images, using attribute vectors as the text information. Here are some preliminary results:
We used the CelebA dataset, which also provides 40 attributes for each image. Similar to text information, the attributes control specific details of the generated images. We chose 18 attributes for training.
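For illustration, here is a hedged example of selecting a subset of CelebA's 40 attributes from the standard `list_attr_celeba.txt` annotation file. The names below are real CelebA attribute names but only an illustrative subset, not the 18 actually used by this repo (those are fixed in its preprocessing code):

```python
import numpy as np

# list_attr_celeba.txt: line 1 = image count, line 2 = the 40 attribute
# names, then one row per image with values in {-1, 1}.
with open("list_attr_celeba.txt") as f:
    f.readline()                          # skip the image count
    names = f.readline().split()          # the 40 attribute names
values = np.loadtxt("list_attr_celeba.txt", skiprows=2,
                    usecols=range(1, 41))  # skip the filename column
attrs = (values + 1) / 2                   # map {-1, 1} -> {0, 1}
cols = [names.index(n) for n in ["Male", "Smiling", "Eyeglasses"]]
subset = attrs[:, cols]                    # per-image attribute vectors
```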
The following images were generated given sketches and the corresponding attributes.
The following images were generated given sketches and random attributes. The controlling effect of the attributes is still being improved.
The following images were generated given freehand sketches and random attributes. The controlling effect of the attributes is still being improved.
The code is based on DCGAN and dcgan-completion.
Consider citing the following paper if you find this repo helpful:
@InProceedings{Lu_2018_ECCV,
  author    = {Lu, Yongyi and Wu, Shangzhe and Tai, Yu-Wing and Tang, Chi-Keung},
  title     = {Image Generation from Sketch Constraint Using Contextual GAN},
  booktitle = {The European Conference on Computer Vision (ECCV)},
  month     = {September},
  year      = {2018}
}