Motivated by the success of StyleGAN, which incorporates stochastic variation to generate realistic-looking images, we propose to focus on the hairstyle attributes of a face. The right hairstyle can often only be discovered through trial and error, so being able to virtually "try on" a novel hairstyle through a computer vision system holds real practical value.
In this project, we propose an end-to-end workflow for editing hair attributes on real faces. Hairstyle Transfer leverages a fixed pre-trained GAN model, a GAN encoder, and latent code manipulation for semantic editing. Our experiments further confirm the linear separability assumption for hair-related semantic attributes: each binary attribute corresponds to a hyperplane in latent space, and moving a latent code along that hyperplane's normal changes the attribute.
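Under that assumption, an edit reduces to a single vector shift in latent space. Below is a minimal sketch of this step; `edit_latent`, its argument names, and the step size `alpha` are illustrative and not names from the released notebooks:

```python
import numpy as np

def edit_latent(w, normal, alpha):
    """Shift a latent code along an attribute boundary's unit normal.

    w:      latent code, e.g. shape (512,) for StyleGAN's W space
    normal: normal vector of the separating hyperplane, shape (512,)
    alpha:  signed step size; positive moves toward the attribute,
            negative moves away from it
    """
    normal = normal / np.linalg.norm(normal)  # ensure unit length
    return w + alpha * normal
```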
There are three Colab notebooks in this end-to-end workflow:

1. StyleGAN_Encoder: generate latent representations of your own images (a simplified inversion sketch follows this list).
2. Get Attribute Score Pairs: generate pairs of latent codes and attribute scores, used later for boundary training.
3. Train Boundaries + Face Editing with InterFaceGAN: perform semantic editing with the learned boundaries (see the boundary-training sketch after this list).
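For the first notebook, the encoder step amounts to GAN inversion: finding a latent code whose generated image matches a real photo. The sketch below is a simplified, hypothetical version; real encoders typically optimize in the extended W+ space with a perceptual (e.g. LPIPS) loss rather than plain MSE, and `generator` here stands in for a frozen pre-trained StyleGAN generator:

```python
import torch
import torch.nn.functional as F

def invert(generator, target, steps=500, lr=0.01):
    """Optimize a latent code so the frozen generator reproduces `target`.

    generator: pre-trained StyleGAN generator with frozen weights
    target:    real face image tensor of shape (1, 3, H, W)
    """
    w = torch.zeros(1, 512, requires_grad=True)  # latent code to optimize
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.mse_loss(generator(w), target)  # pixel reconstruction loss
        loss.backward()
        opt.step()
    return w.detach()
```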
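For the last two notebooks, InterFaceGAN's boundary training fits a linear classifier to latent codes labeled by their attribute scores; the hyperplane's unit normal then serves as the editing direction. A minimal sketch with scikit-learn follows, where thresholding scores at the median to obtain binary labels is our simplification:

```python
import numpy as np
from sklearn.svm import LinearSVC

def train_boundary(latents, scores):
    """Fit a hyperplane separating high- from low-score latent codes.

    latents: array of shape (N, 512), one latent code per generated face
    scores:  array of shape (N,), attribute scores (e.g. for bangs)
    Returns the unit normal of the hyperplane, used as the edit direction.
    """
    labels = (scores > np.median(scores)).astype(int)  # binarize scores
    clf = LinearSVC(C=1.0, max_iter=10000)
    clf.fit(latents, labels)
    normal = clf.coef_.reshape(-1)
    return normal / np.linalg.norm(normal)

# Usage with the edit sketch above:
# normal = train_boundary(latents, scores)
# w_edited = edit_latent(w, normal, alpha=2.0)
```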
Curious to learn more? The full report is now on the blog.
This implementation is based on StyleGAN and InterFaceGAN.