Thanks for the excellent project! I'm new to GANs and was trying to reproduce the results reported in the paper for the 20News dataset. However, my test accuracy is stuck at about 5.2% regardless of whether I use a 1% or 10% labelled training set (I tried 1%, 2%, and 10%-50%, with almost identical results). Also, the generator training loss is extremely large, up to 1343123137304389. On my own dataset with different ratios of labelled data, the highest accuracy I got is only 38%.
Just wondering if anyone was able to reproduce the results, or perhaps knows what is going wrong?
I trained on the 20News dataset for 15 epochs with lr = 5e-6, dropout = 0.1, noise_size = 100, max_seq_length = 256, and batch size = 64.
Appreciate your help!
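For reference, here is the run above collected into a single config dict (the key names are illustrative, not necessarily the repo's actual argument names):

```python
# Hyperparameters for the 20News run described above.
# Key names are illustrative stand-ins, not the repo's CLI flags.
config = {
    "num_train_epochs": 15,
    "learning_rate": 5e-6,
    "dropout": 0.1,
    "noise_size": 100,      # dimension of the generator's input noise
    "max_seq_length": 256,
    "batch_size": 64,
}
print(config["learning_rate"])
```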
Try testing your lr with different values: 5e-5, 5e-7, etc.
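One simple way to do that is a small sweep: run a short training job at each candidate learning rate and keep the one with the lowest validation loss. A minimal, self-contained sketch of the idea (using SGD on a toy quadratic as a stand-in for a short GAN-BERT run, since the real training loop isn't shown here):

```python
def short_run(lr, steps=50):
    """Toy stand-in for a short training run: SGD on f(w) = w**2.
    A too-small lr barely moves the weight; a much larger one would diverge."""
    w = 1.0
    for _ in range(steps):
        w -= lr * 2.0 * w  # gradient of w**2 is 2w
    return w * w  # final loss

candidates = [5e-7, 5e-6, 5e-5, 5e-4, 5e-3]
losses = {lr: short_run(lr) for lr in candidates}
best_lr = min(losses, key=losses.get)
print(best_lr)
```

In a real sweep you would replace `short_run` with one or two epochs of actual training and compare validation accuracy rather than training loss.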
Thank you so much for your reply! After freezing some of BERT's weights (and trying several different learning rates), it is now working on my own dataset, with a maximum accuracy of around 63%.
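In case it helps others: freezing weights in PyTorch just means setting `requires_grad = False` before building the optimizer. A minimal sketch with a toy layer stack (for a real Hugging Face `BertModel` you would iterate over e.g. `model.embeddings.parameters()` and `model.encoder.layer[:8]` instead; the exact attribute names depend on the model class, so treat them as assumptions):

```python
import torch.nn as nn

# Toy stand-in for a 12-layer transformer encoder.
layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(12)])

def freeze_bottom(module_list, n):
    """Freeze the first n layers so gradients are not computed for them."""
    for layer in module_list[:n]:
        for p in layer.parameters():
            p.requires_grad = False

freeze_bottom(layers, 8)

trainable = sum(p.numel() for p in layers.parameters() if p.requires_grad)
total = sum(p.numel() for p in layers.parameters())
print(trainable, total)
```

When constructing the optimizer, pass only the trainable parameters, e.g. `filter(lambda p: p.requires_grad, model.parameters())`, so the frozen layers are skipped entirely.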