Support for validation set instead of evaluating on test set directly. #12
@crux82 Maybe I can add the validation set logic myself, if you're not interested.
Dear Mahmoud,

I am sorry, but what you are asking for is just a "standard" BERT-based model. As an example, I would suggest you take a look at the lab material I prepared at: https://github.com/crux82/AILC-lectures2021-lab

Unfortunately, I think that adding what you ask would just make the GAN-BERT example... less clear.

I hope the above example is clear and useful for implementing your baseline.

Best,
Danilo
I think you just need to replace the test_set with a val_set during training and add an `if` condition that stops when some defined criterion (e.g. validation accuracy) is met, then save the model at that point.
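A minimal sketch of that idea, assuming the PyTorch version of the code; `train_epoch`, `evaluate`, `model`, the dataloaders, and `num_epochs` are hypothetical placeholders, not names from this repo:

```python
import torch

best_val_acc = 0.0
patience = 3                 # assumed budget: stop after 3 epochs without improvement
epochs_without_improvement = 0

for epoch in range(num_epochs):
    train_epoch(model, train_dataloader)       # hypothetical training step
    val_acc = evaluate(model, val_dataloader)  # accuracy on the dev set

    if val_acc > best_val_acc:
        best_val_acc = val_acc
        epochs_without_improvement = 0
        # checkpoint the best configuration seen so far
        torch.save(model.state_dict(), "best_model.pt")
    else:
        epochs_without_improvement += 1
        if epochs_without_improvement >= patience:
            break  # early stopping

# after training: reload the best checkpoint and evaluate once on the test set
model.load_state_dict(torch.load("best_model.pt"))
```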
I see that you only evaluate on the test set in each epoch. Can we add a validation set, with an early stopping criterion based on the results/loss on this validation set?
This would also require a way to checkpoint the whole model, in order to save the best configuration against the dev set and use it against the test set at the end of training.
Please let me know if we can add the following:
1- Dev set support with an early stopping criterion.
2- Checkpointing logic, to save and load the model (see the sketch below).
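For point 2, since GAN-BERT has several trainable parts, all of them would need to be checkpointed together. A rough sketch, assuming a PyTorch setup with modules named `transformer`, `generator`, and `discriminator` (adjust to the actual variable names in the training script):

```python
import torch

# Saving: bundle all three state dicts into one checkpoint file.
checkpoint = {
    "transformer": transformer.state_dict(),
    "generator": generator.state_dict(),
    "discriminator": discriminator.state_dict(),
}
torch.save(checkpoint, "ganbert_best.pt")

# Loading: restore all three parts before the final test-set evaluation.
checkpoint = torch.load("ganbert_best.pt")
transformer.load_state_dict(checkpoint["transformer"])
generator.load_state_dict(checkpoint["generator"])
discriminator.load_state_dict(checkpoint["discriminator"])
```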
One last question: can you provide a way to train only the base model (BERT-based) without the GAN components, so that I can take those numbers as a reference? That way I can report what the BERT-only model achieves on its own, and then show what we get when the GAN components are added.
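A baseline like that can be built with plain supervised fine-tuning, e.g. via Hugging Face `transformers`. This is only a sketch, not code from this repo; the model name, `num_labels`, and `train_step` are illustrative assumptions, and the labeled subset and test split should match the GAN-BERT run so the numbers are comparable:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-cased"  # assumption: same encoder as the GAN-BERT experiment
num_labels = 6                  # example value; use your dataset's label count

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=num_labels)
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

def train_step(texts, labels):
    """One supervised step on the labeled subset only, standard cross-entropy."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    outputs = model(**batch, labels=torch.tensor(labels))
    outputs.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return outputs.loss.item()
```

The AILC lab linked above walks through essentially this recipe end to end.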