A trained PyTorch version of DeepLab on the CelebAMask-HQ dataset. The dataset used for training the model is also available here.
This project requires Python 3.10 or newer. The other requirements are listed in the requirements.txt file. To install all requirements:
```
python3 -m pip install -r requirements.txt
```
The application uses the following CLI for training models:
```
options:
  -h, --help            show this help message and exit
  --data DATA           Path to the dataframe that represents the data. See
                        datasets/README.md for more details.
  --output-channels OUTPUT_CHANNELS
  --epochs EPOCHS
  --batch-size BATCH_SIZE
  --tv-split TV_SPLIT   Train-validation split that defines the size of the
                        train part.
  --mapping MAPPING     Path to the layer-to-index mapping in JSON format.
  -d DEVICE, --device DEVICE
                        The device on which the model will train.
```
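A hypothetical training invocation is sketched below. The entry point name `train.py`, the dataframe path, and all argument values are assumptions to adjust for your setup (CelebAMask-HQ has 19 classes including background):

```
python train.py \
    --data datasets/CelebAMask-HQ/data.csv \
    --mapping datasets/CelebAMask-HQ/mapping.json \
    --output-channels 19 \
    --epochs 50 \
    --batch-size 8 \
    --tv-split 0.8 \
    --device cuda
```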
After training starts, the CelebAMask-HQ dataset will be downloaded automatically. If you have any trouble downloading it with gdown, you can access it on Google Drive. To use it, unzip CelebAMask-HQ.zip into the datasets/CelebAMask-HQ folder.
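The file passed via `--mapping` pairs each annotation layer name with an integer class index. A minimal, truncated sketch of what `datasets/CelebAMask-HQ/mapping.json` might look like (the exact names and indices here are assumptions; consult the file shipped with the dataset):

```json
{
    "skin": 1,
    "nose": 2,
    "hair": 3
}
```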
To run evaluation, you need to download the model weights from Google Drive. For example, run inference with the following command:
```
python eval.py --mapping datasets/CelebAMask-HQ/mapping.json --model runs/best_weights.pt -i example/input.png -cmap example/color_mapping.json
```
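You can also run inference directly from Python. The sketch below is a minimal example under several assumptions: the checkpoint stores a full serialized module (not just a state_dict), the model accepts a normalized `(1, 3, H, W)` tensor, and ImageNet normalization statistics were used during training:

```python
import torch
from PIL import Image
from torchvision import transforms

# Assumption: the checkpoint stores the whole module. If it stores a
# state_dict instead, build the DeepLab architecture first and call
# model.load_state_dict(...) on it.
model = torch.load("runs/best_weights.pt", map_location="cpu",
                   weights_only=False)
model.eval()

# Assumption: ImageNet normalization and a 512x512 input; match the
# values actually used during training if they differ.
preprocess = transforms.Compose([
    transforms.Resize((512, 512)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("example/input.png").convert("RGB")
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, 512, 512)

with torch.no_grad():
    logits = model(batch)  # (1, num_classes, H, W)
    # Note: torchvision's DeepLab returns a dict; use logits["out"] there.
    mask = logits.argmax(dim=1)  # per-pixel class indices
```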
An example of the evaluation output:

The weights may be improved in the future.
This list will be updated over time. Contributions are greatly appreciated!
| Task name | Progress |
|---|---|
| Implement TensorBoard logging | ✅ |
| Implement callback for precision | ✅ |
| Implement callback for recall | ✅ |
| Implement script for the evaluation | ✅ |
| Train weights and make them public | ✅ |
```bibtex
@inproceedings{CelebAMask-HQ,
  title     = {MaskGAN: Towards Diverse and Interactive Facial Image Manipulation},
  author    = {Lee, Cheng-Han and Liu, Ziwei and Wu, Lingyun and Luo, Ping},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2020}
}
```