This code has been developed and tested on Python 3.6 and PyTorch 0.4.
To get set up quickly, create a new Conda environment and install the required packages, like so:
```bash
conda env create -f environment.yml -n depthnet
```
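After activating the environment, the versions mentioned above can be sanity-checked with a short snippet (the PyTorch import is guarded, since it may not be installed yet):

```python
import sys

# Sanity-check the environment described above; the torch import is guarded
# because PyTorch may not be installed yet.
python_ok = sys.version_info[:2] >= (3, 6)
try:
    import torch
    torch_version = torch.__version__
except ImportError:
    torch_version = None
print("Python OK:", python_ok, "| PyTorch:", torch_version or "not installed")
```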
To use this part, first perform the following steps:

- Compile the `FaceWarperServer`, which is in the parent directory.
- Get the data: the `data/` folder contains three sub-folders, `celeba`, `celeba_faceswap`, and `vgg`, each with its own `README.md` describing how to prepare the data.
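As a convenience, the data layout described above can be sanity-checked with a short script. The sub-folder names come from this README; using `data` as the root path is an assumption you may need to adjust:

```python
import os

# Check that the three dataset sub-folders described above are present.
# "data" as the root directory is an assumption; adjust to your layout.
expected = ["celeba", "celeba_faceswap", "vgg"]
data_root = "data"
missing = [d for d in expected
           if not os.path.isdir(os.path.join(data_root, d))]
print("missing folders:", missing if missing else "none")
```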
One application of CycleGAN is to clean up the operation which warps one face onto another. This involves using CycleGAN to learn a mapping between two domains: the domain of faces which have been pasted onto other faces (DepthNet faces) and the domain of ground-truth faces. Once trained, the mapping `depthnet -> real face` is the one we are interested in using.
Some example images are shown below. (From left to right: source face, target face, DepthNet face, cleanup of DepthNet face)
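The mapping above relies on CycleGAN's cycle-consistency constraint: a second generator maps back from real faces to DepthNet faces, and the round trip should reproduce the input. A toy illustration of that idea in pure Python, with hypothetical scalar "generators" standing in for the project's actual networks:

```python
# Toy sketch of CycleGAN's cycle-consistency term. G and F below are
# hypothetical scalar stand-ins for the real generators, not the project's
# networks: G maps depthnet -> real face, F maps real face -> depthnet.
def G(x):
    return 2.0 * x + 1.0

def F(y):
    return (y - 1.0) / 2.0

faces = [0.1, 0.5, -0.3]  # toy "depthnet face" samples
# L1 cycle-consistency loss: F(G(x)) should recover x.
cycle_loss = sum(abs(F(G(x)) - x) for x in faces) / len(faces)
print(cycle_loss)  # ~0 here, since F exactly undoes G
```

Training pushes both generators toward this regime, so that `depthnet -> real face` preserves the identity and pose of its input.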
To train the face swap cleanup model, simply run:
```bash
python task_launcher_faceswap.py \
  --name=experiment_faceswap \
  --batch_size=16 \
  --epochs=1000
```
You can find the pre-trained checkpoint for this here (add `--resume=<path_to_pkl>` to the above command).
Since DepthNet only warps the region corresponding to the face, it would be useful to resynthesize the outside region, such as the background and hair. In this experiment, CycleGAN maps from the domain consisting of the DepthNet frontalised face plus the background of the original face to the domain of ground-truth (CelebA) frontal images.
Some examples are shown below. (From left to right: source image, source image + keypoints, frontalised face with DepthNet, CycleGAN combining (3) and background of (2))
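The input domain above is assembled by pasting the frontalised face over the original image's background. A minimal sketch of that per-pixel compositing idea, assuming a binary face mask; all names here are illustrative, not the project's API:

```python
# Hedged sketch: combine a warped face with the original background using a
# binary mask (1 = face region). The function name and inputs are illustrative.
def composite(face_px, bg_px, mask_px):
    """Per-pixel combine: keep the face where mask==1, background elsewhere."""
    return [f if m else b for f, b, m in zip(face_px, bg_px, mask_px)]

face = [10, 20, 30, 40]
background = [1, 2, 3, 4]
mask = [0, 1, 1, 0]
print(composite(face, background, mask))  # [1, 20, 30, 4]
```

CycleGAN then learns to turn such composites into coherent frontal images, filling in hair and background consistently.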
To train the background synthesis model, simply run:
```bash
python task_launcher_bgsynth.py \
  --name=experiment_depthnet_bg_vs_frontal \
  --dataset=depthnet_bg_vs_frontal \
  --batch_size=16 \
  --network=architectures/block9_a6b3.py \
  --epochs=500
```
You can find the pre-trained checkpoint for this here (add `--resume=<path_to_pkl>` to the above command).
- Some code has been used from the following repositories: