Using the provided config (identical to the released one), I get this result:
Testing the provided checkpoint the same way, I get this result:
There is a 5-point gap between them. @chenxuluo @xiaodongyang
Has anyone else run into the same problem?
Training on RTX 6000 (8 cards, 8 samples per GPU), with syncbn=True.
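For reference, the setup above can be sketched as a config fragment. The key names below (`gpus`, `samples_per_gpu`, `syncbn`) are illustrative assumptions, not necessarily the repo's actual config schema:

```python
# Hypothetical training-config fragment; key names are assumptions.
train_cfg = dict(
    gpus=8,              # one process per RTX 6000 card
    samples_per_gpu=8,   # effective batch size: 8 x 8 = 64
    syncbn=True,         # synchronize BatchNorm statistics across GPUs
)
```

With syncbn=True, BatchNorm statistics are aggregated across all 8 processes, so the per-GPU batch of 8 behaves like a global batch of 64; with it off, each GPU normalizes over only 8 samples, which can change results.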
For me, self-training then testing gives AMOTA 58.3; testing with the provided model gives AMOTA 61.1, without changing any parameters.
The provided checkpoint is finetuned from the detection task. I will post more experiments.
Hi, I'm wondering how to finetune the model on the detection task, and why that would influence the result so much? Thanks a lot!