Hello, thank you for sharing your awesome work!
I have two questions regarding the hyperparameters used to analyze model performance across p values.
If you have the hyperparameters for training with p = 1, could you share them for the datasets (e.g. CelebA, Seg2Cat, edge2Car)?
The paper says:
"Conversely, only sampling random poses (p = 1) gives the best image quality but suffers huge misalignment with input label maps."
I tried to train the proposed model with p = 1 several times with various hyperparameters, but didn't succeed. I assume this is because, when p = 1, the model is never trained with the CVC loss, which makes the training procedure harder.
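In case it helps clarify my assumption, here is a minimal sketch (plain Python, not the repository's actual code) of how I understand p to interact with the CVC loss. All names here (`training_step`, `adversarial_loss`, `cvc_loss`, the string placeholders) are hypothetical stand-ins:

```python
import random

# Hypothetical stand-ins for the generator losses; the real implementation
# will differ -- these just return placeholder values.
def adversarial_loss(fake):
    return 1.0

def cvc_loss(fake, label_map):
    return 1.0

def training_step(label_map, gt_pose, p):
    """One simplified generator step: with probability p, render from a
    randomly sampled pose (no CVC supervision); otherwise use the pose
    paired with the input label map and add the cross-view consistency term."""
    if random.random() < p:
        pose = "random_pose"   # sampled camera pose
        use_cvc = False        # no aligned view to compare against
    else:
        pose = gt_pose         # pose associated with the input label map
        use_cvc = True

    fake = f"render({label_map}, {pose})"  # stand-in for the generator call
    loss = adversarial_loss(fake)
    if use_cvc:
        # random.random() returns a value in [0, 1), so when p = 1 this
        # branch is never reached and the CVC supervision disappears.
        loss += cvc_loss(fake, label_map)
    return loss
```

If my reading is right, at p = 1 the alignment supervision vanishes entirely, which would explain the unstable training I am seeing.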
Additionally, I am wondering whether the values in the chart in Figure 9 are from the CelebA dataset. Is that correct?
I greatly appreciate your work, thank you!