This is a PyTorch implementation of Progressive Growing GANs. The network can be trained on a custom image dataset.
Place your dataset folder inside the data folder. Training stats are written to the repo folder as training progresses.
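For example, assuming the loader reads images from a named subfolder of data, a layout like the following should work (celeba is a placeholder name for your own dataset):

data/
  celeba/
    img0001.jpg
    img0002.jpg
    ...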
The network training parameters can be configured with the following flags (an example invocation follows the first group):
- --train_data_root: Set your data directory
- --random_seed: Random seed for reproducing experiments
- --n_gpu: Number of GPUs for multi-GPU training
- --lr: Learning rate
- --lr_decay: Learning rate decay at every resolution transition
- --eps_drift: Coefficient for the drift loss
- --smoothing: Smoothing factor for the smoothed generator
- --nc: Number of input channels
- --nz: Input dimension of the noise vector
- --ngf: Feature dimension of the final layer of the generator
- --ndf: Feature dimension of the first layer of the discriminator
- --TICK: 1 tick = 1000 images = (1000/batch_size) iterations
- --max_resl: Maximum resolution (10 --> 1024, 9 --> 512, 8 --> 256)
- --trns_tick: Transition tick
- --stab_tick: Stabilization tick
- --gan_type: GAN training methodology (choices: 'standard', 'wgan', 'wgan-gp', 'lsgan', 'began', 'dragan', 'cgan', 'infogan', 'acgan')
- --lambda_gp: Gradient penalty lambda for WGAN-GP
- --lambda_drift: Drift loss coefficient for WGAN
- --gamma: Equilibrium constant for BEGAN
- --lambda_k: Learning rate for k in BEGAN
- --lambda_info: Information loss weight for InfoGAN
- --n_classes: Number of classes for conditional GANs
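For example, a single-GPU run on a hypothetical dataset folder might look like this (the values are illustrative, not recommendations):

python main.py --train_data_root ./data/celeba --n_gpu 1 --lr 0.001 --max_resl 8 --gan_type wgan-gp

The remaining flags below toggle architecture, optimizer, and logging options.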
- --flag_wn: Use equalized learning rate
- --flag_bn: Use batch normalization (not recommended)
- --flag_pixelwise: Use pixelwise normalization in the generator
- --flag_gdrop: Use generalized dropout layers in the discriminator
- --flag_leaky: Use leaky ReLU instead of ReLU
- --flag_tanh: Use tanh at the end of the generator
- --flag_sigmoid: Use sigmoid at the end of the discriminator
- --flag_add_noise: Add noise to the real image (x)
- --flag_norm_latent: Apply pixelwise normalization to the latent vector (z)
- --flag_add_drift: Add drift loss
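For reference, the pixelwise normalization toggled by --flag_pixelwise rescales each pixel's feature vector to unit average magnitude across channels, as in the Progressive GAN paper. A minimal PyTorch sketch (not necessarily the exact module this repo uses):

```python
import torch
import torch.nn as nn

class PixelwiseNorm(nn.Module):
    """PGGAN-style pixelwise feature normalization."""
    def __init__(self, eps=1e-8):
        super().__init__()
        self.eps = eps

    def forward(self, x):
        # x: (N, C, H, W); divide each pixel by its RMS over the channel axis
        return x * torch.rsqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.eps)
```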
- --optimizer: Optimizer type
- --beta1: Beta1 for the Adam optimizer
- --beta2: Beta2 for the Adam optimizer
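These presumably feed the standard PyTorch Adam constructor; a sketch of the assumed wiring (G, D, and config are placeholder names, not identifiers from this repo):

```python
import torch.optim as optim

# Assumed mapping of the CLI flags onto the optimizers; the repo's actual
# construction may differ.
opt_g = optim.Adam(G.parameters(), lr=config.lr, betas=(config.beta1, config.beta2))
opt_d = optim.Adam(D.parameters(), lr=config.lr, betas=(config.beta1, config.beta2))
```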
- --use_tb: Enable TensorBoard visualization
- --save_img_every: Save images every specified number of iterations
- --display_tb_every: Display progress every specified number of iterations
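For reference, PyTorch's built-in TensorBoard support covers this kind of logging; a minimal sketch with a hypothetical helper (not part of this repo):

```python
from torch.utils.tensorboard import SummaryWriter
import torchvision.utils as vutils

def log_progress(writer, it, d_loss, g_loss, fake,
                 display_tb_every=10, save_img_every=100):
    # Log scalar losses frequently, image grids less often.
    if it % display_tb_every == 0:
        writer.add_scalar('loss/d', d_loss, it)
        writer.add_scalar('loss/g', g_loss, it)
    if it % save_img_every == 0:
        grid = vutils.make_grid(fake.detach().cpu(), normalize=True)
        writer.add_image('generated', grid, it)

writer = SummaryWriter(log_dir='repo/tensorboard')  # assumed log directory
```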
Make sure your machine has CUDA-enabled GPU(s) if you want to train on GPUs. Set the --n_gpu flag to a positive integer that does not exceed the number of available GPUs.
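You can check how many devices PyTorch actually sees before choosing a value:

```python
import torch

print(torch.cuda.is_available())  # True if a CUDA device is usable
print(torch.cuda.device_count())  # upper bound for --n_gpu
```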
This implementation supports multiple GAN training methodologies:
- Standard GAN (default)
- Wasserstein GAN (WGAN)
- Wasserstein GAN with Gradient Penalty (WGAN-GP)
- Least Squares GAN (LSGAN)
- Boundary Equilibrium GAN (BEGAN)
- DRAGAN
- Conditional GAN (CGAN)
- InfoGAN
- Auxiliary Classifier GAN (ACGAN)
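As an illustration of one of these, WGAN-GP penalizes the critic's gradient norm on random interpolates between real and fake batches (weighted by --lambda_gp). A minimal sketch of the penalty term (not necessarily this repo's exact code):

```python
import torch

def gradient_penalty(D, real, fake, lambda_gp=10.0):
    # Sample points on straight lines between real and fake images.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    x_hat = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    out = D(x_hat)
    grads, = torch.autograd.grad(outputs=out, inputs=x_hat,
                                 grad_outputs=torch.ones_like(out),
                                 create_graph=True)
    # Penalize deviation of the per-sample gradient norm from 1.
    grad_norm = grads.view(grads.size(0), -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```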
To select a specific training methodology, use the --gan_type flag:
python main.py --gan_type standard
python main.py --gan_type wgan
python main.py --gan_type wgan-gp
python main.py --gan_type lsgan
python main.py --gan_type began
python main.py --gan_type dragan
python main.py --gan_type cgan
python main.py --gan_type infogan
python main.py --gan_type acgan