Pretrained weights from EG3D #15
Thank you for your great work.

The paper says that "we can significantly reduce the training time to 4 hours if we initialize parts of our model with pretrained weights from EG3D." As far as I know, EG3D only provides weights for the FFHQ dataset, but pix2pix3D uses the CelebAMask-HQ dataset. So I would like to know which weights you use. Thank you.

Hi, thanks for your interest. We use the FFHQ checkpoint of EG3D, as it is also a dataset of human faces. We also report metrics without pretrained weights for a fair comparison with other baselines.
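The answer above mentions initializing parts of the model from the EG3D FFHQ checkpoint. Below is a minimal sketch of how such partial initialization could look, assuming the checkpoint has been converted to a plain PyTorch state dict; the function name, filename, and the choice to match keys by name and shape are hypothetical, not pix2pix3D's actual loading code.

```python
import torch

def load_partial_weights(model: torch.nn.Module, ckpt_path: str) -> None:
    """Copy only the pretrained tensors whose names and shapes match the model."""
    pretrained = torch.load(ckpt_path, map_location="cpu")  # assumed plain state dict
    own_state = model.state_dict()
    # Keep only entries that exist in the target model with identical shapes,
    # so layers that were added or resized for pix2pix3D stay randomly initialized.
    matched = {
        k: v for k, v in pretrained.items()
        if k in own_state and v.shape == own_state[k].shape
    }
    own_state.update(matched)
    model.load_state_dict(own_state)
    print(f"Initialized {len(matched)}/{len(own_state)} tensors from {ckpt_path}")

# Usage (hypothetical filename):
# load_partial_weights(generator, "eg3d_ffhq.pth")
```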
Thank you for your quick reply and your great work. I would like to ask when you plan to open-source the code.

At the same time, I would also like to ask about some of pix2pix3D's design choices. For the ground truth {Ic, Is}, you adopt a reconstruction loss on images. That means that for a given mask Is and a set of random z, every z will generate the same Ic for the given Is. Will this lead to a deterioration in the diversity of the model? In my understanding, even without an image reconstruction loss, the network could still accomplish 'pix2pix' by supervising the reconstruction loss between Is and the generated mask, and different random z could then generate different results. Thank you.
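For concreteness, here is a minimal sketch of the two supervision signals being discussed: an image reconstruction loss on Ic and a mask reconstruction loss on Is. The loss weights, function name, and tensor shapes are assumptions for illustration, not pix2pix3D's actual training code.

```python
import torch
import torch.nn.functional as F

def reconstruction_losses(pred_img, gt_img, pred_mask_logits, gt_mask,
                          lambda_img=1.0, lambda_mask=1.0):
    # Image reconstruction: ties each (Is, z) pair to the ground-truth Ic.
    # This is the term the question suggests may reduce diversity over z.
    loss_img = F.l1_loss(pred_img, gt_img)
    # Mask reconstruction: only constrains the generated semantics to match Is,
    # leaving appearance free to vary with different z.
    loss_mask = F.cross_entropy(pred_mask_logits, gt_mask)
    return lambda_img * loss_img + lambda_mask * loss_mask

# Assumed shapes: pred_img / gt_img: (B, 3, H, W); pred_mask_logits: (B, C, H, W);
# gt_mask: (B, H, W) holding integer class indices.
```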