This is an implementation of a simple variational autoencoder (VAE) that trains on the MNIST dataset and generates similar images of digits.
- CUDA toolkit 7.5
- cuDNN v5
- TensorFlow from the r0.10 branch (https://github.com/tensorflow/tensorflow/tree/r0.10); select a binary compatible with the CUDA/cuDNN versions above.
Run `python trainScriptClass.py`. It will train a simple VAE with a 2-layer encoder and a 2-layer generator. The parameters are defined below:
- `batch_size`: size of the training and testing batch (with a batch size of 20, memory usage nearly reached 11 GB on an NVIDIA Titan X Maxwell)
- `X_size`: size of the input (here, the total number of pixels)
- `hidden_enc_1_size`: size of hidden layer 1 in the encoder
- `hidden_enc_2_size`: size of hidden layer 2 in the encoder
- `hidden_gen_1_size`: size of hidden layer 1 in the generator
- `hidden_gen_2_size`: size of hidden layer 2 in the generator
- `z_size`: size of the latent variable
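For example, a hypothetical configuration using these names (the numeric values are illustrative, not necessarily the defaults in `trainScriptClass.py`; only `X_size = 784` is fixed by MNIST's 28x28 images):

```python
# Illustrative values only; the actual defaults are set in trainScriptClass.py.
batch_size = 20            # ~11 GB of GPU memory at this size on a Titan X (Maxwell)
X_size = 28 * 28           # 784 pixels per MNIST image
hidden_enc_1_size = 500    # encoder hidden layer 1 (size is an assumption)
hidden_enc_2_size = 500    # encoder hidden layer 2 (size is an assumption)
hidden_gen_1_size = 500    # generator hidden layer 1 (size is an assumption)
hidden_gen_2_size = 500    # generator hidden layer 2 (size is an assumption)
z_size = 20                # dimensionality of the latent variable z (assumption)
```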
The model trains with a default learning rate of 1e-4 using the Adam optimizer.
The model is trained for 200,000 iterations; every 50,000 iterations, 20 randomly generated samples and a checkpoint of the model are saved in the `generated_class/` directory.
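The training and checkpointing pattern looks roughly like the following (a self-contained sketch with a stand-in reconstruction loss; the real VAE objective and the sample-generation op are built inside `trainScriptClass.py`):

```python
import os
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# Stand-in single-layer "reconstruction" so the snippet is self-contained;
# the actual encoder/generator graphs are defined in trainScriptClass.py.
mnist = input_data.read_data_sets("MNIST_data", one_hot=True)
x = tf.placeholder(tf.float32, [None, 784])
w = tf.Variable(tf.truncated_normal([784, 784], stddev=0.01))
x_hat = tf.nn.sigmoid(tf.matmul(x, w))
loss = tf.reduce_mean(tf.square(x - x_hat))

train_step = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
saver = tf.train.Saver()

if not os.path.isdir("generated_class"):
    os.makedirs("generated_class")

with tf.Session() as sess:
    sess.run(tf.initialize_all_variables())   # initializer name used around r0.10
    for it in range(1, 200001):
        batch_x, _ = mnist.train.next_batch(20)
        sess.run(train_step, feed_dict={x: batch_x})
        if it % 50000 == 0:
            # The actual script also writes 20 randomly generated samples
            # to generated_class/ at this point.
            saver.save(sess, os.path.join("generated_class", "model"), global_step=it)
```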
- This was implemented based on Carl Doersch's tutorial, available at: https://arxiv.org/abs/1606.05908
- Another useful reference for implementing VAEs is: https://jmetzen.github.io/2015-11-27/vae.html
Feb 10, 2017
- Added convolution + deconvolution based VAE
- Batch size is again fixed at initialization; the technique has to be altered.
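For orientation, here is a minimal sketch of a convolution/deconvolution pair in TF r0.10-era code (filter sizes, strides, and the fixed batch size of 100 are assumptions, not the repository's actual architecture). The explicit `output_shape` required by `tf.nn.conv2d_transpose` is one common reason the batch size ends up fixed at initialization:

```python
import tensorflow as tf

# Illustrative convolution/deconvolution pair; shapes and filter counts
# are assumptions, not the values used in the repository.
x = tf.placeholder(tf.float32, [100, 28, 28, 1])   # MNIST digits as 2-D maps

# Encoder side: 28x28x1 -> 14x14x16 via a strided convolution.
w_enc = tf.Variable(tf.truncated_normal([5, 5, 1, 16], stddev=0.1))
h_enc = tf.nn.relu(tf.nn.conv2d(x, w_enc, strides=[1, 2, 2, 1], padding='SAME'))

# Decoder side: 14x14x16 -> 28x28x1 via a transposed convolution ("deconvolution").
# conv2d_transpose needs an explicit output_shape, which ties the graph to a
# fixed batch dimension at construction time.
w_dec = tf.Variable(tf.truncated_normal([5, 5, 1, 16], stddev=0.1))
x_hat = tf.nn.sigmoid(tf.nn.conv2d_transpose(
    h_enc, w_dec, output_shape=[100, 28, 28, 1],
    strides=[1, 2, 2, 1], padding='SAME'))
```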
======================================
- Added support for a Beta weighting term in the KL divergence loss (see the loss sketch after this list)
- Batch size is no longer fixed at initialization
- Added functions to encode a given x, decode a given z, and chain the two to generate an image "like" the given one.
- `beta_trainScriptClass_conditional.py` adds visualization of the latent features.
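A minimal sketch of how such a Beta-weighted objective can be written with TF 0.x/1.x-style ops (the tensor names `x`, `x_hat`, `mu`, `log_sigma_sq` and the Beta value are illustrative placeholders, not names taken from the repository's scripts):

```python
import tensorflow as tf

# Placeholders standing in for the encoder/decoder outputs; in the actual
# scripts these tensors are produced by the model classes.
x = tf.placeholder(tf.float32, [None, 784])            # input batch
x_hat = tf.placeholder(tf.float32, [None, 784])        # decoder output (pixel probabilities)
mu = tf.placeholder(tf.float32, [None, 20])            # latent mean from the encoder
log_sigma_sq = tf.placeholder(tf.float32, [None, 20])  # latent log-variance from the encoder
beta = 4.0                                             # KL weighting term (value is arbitrary here)

# Bernoulli reconstruction term (negative log-likelihood), summed over pixels.
recon_loss = -tf.reduce_sum(
    x * tf.log(1e-10 + x_hat) + (1.0 - x) * tf.log(1e-10 + 1.0 - x_hat), 1)

# KL divergence between q(z|x) = N(mu, sigma^2) and the unit Gaussian prior.
kl_loss = -0.5 * tf.reduce_sum(
    1.0 + log_sigma_sq - tf.square(mu) - tf.exp(log_sigma_sq), 1)

# Beta > 1 trades reconstruction quality for more pressure on the latent code.
loss = tf.reduce_mean(recon_loss + beta * kl_loss)
```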