Hi! Thank you for publishing your code.
Your released code has a section dedicated to configuration files for the different tracker modules: https://github.com/PinataFarms/FEARTracker/tree/main/model_training/config
It contains parameters and choices related to training and inference (optimizer, learning rate scheduler, penalty_k, window influence, and lr, to name a few). Could you clarify which dataset was used to tune these hyperparameter values? Were they tuned on the test set itself?
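For readers unfamiliar with these names: penalty_k and window influence usually enter a Siamese tracker's inference-time post-processing in the standard SiamRPN-style way sketched below. This is only an illustrative sketch of that common formulation, not necessarily the exact FEAR implementation, and all function and argument names are hypothetical.

```python
import numpy as np

def postprocess_scores(score_map, size_change, ratio_change,
                       cosine_window, penalty_k=0.04, window_influence=0.45):
    """SiamRPN-style score post-processing (illustrative values and names only).

    `size_change` and `ratio_change` are per-candidate factors (>= 1) describing
    how much each predicted box deviates from the previous target size and
    aspect ratio.
    """
    # Penalize large scale / aspect-ratio changes between consecutive frames.
    penalty = np.exp(-(size_change * ratio_change - 1.0) * penalty_k)
    pscore = penalty * score_map
    # Blend in a cosine window to favor candidates near the previous position.
    pscore = pscore * (1 - window_influence) + cosine_window * window_influence
    return int(np.argmax(pscore)), penalty
```

Small changes to penalty_k and window_influence can shift benchmark scores noticeably, which is why I am asking on which data they were selected.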
Also, I am particularly intrigued by a statement in the paper: "For each epoch, we randomly sample 20,000 images from LaSOT, 120,000 from COCO, 400,000 from YoutubeBB, 320,000 from GOT10k and 310,000 images from the ImageNet dataset". Could you explain the reasoning behind this particular sampling split rather than uniform sampling across datasets?
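To make sure I understand the quoted scheme correctly, here is a minimal sketch of how such per-epoch quota sampling might look. The per-dataset counts are taken from the quoted passage; the function and argument names are hypothetical and not from the FEAR codebase.

```python
import random

# Per-epoch sampling quotas quoted from the paper (images per dataset).
EPOCH_QUOTAS = {
    "LaSOT": 20_000,
    "COCO": 120_000,
    "YoutubeBB": 400_000,
    "GOT10k": 320_000,
    "ImageNet": 310_000,
}

def build_epoch_indices(dataset_sizes, quotas=EPOCH_QUOTAS, seed=None):
    """Randomly pick `quotas[name]` image indices from each dataset for one epoch.

    `dataset_sizes` maps dataset name -> total number of images available.
    """
    rng = random.Random(seed)
    epoch = []
    for name, quota in quotas.items():
        total = dataset_sizes[name]
        if quota <= total:
            # Sample without replacement when the dataset is large enough.
            picked = rng.sample(range(total), quota)
        else:
            # Otherwise sample with replacement to fill the quota.
            picked = [rng.randrange(total) for _ in range(quota)]
        epoch.extend((name, idx) for idx in picked)
    rng.shuffle(epoch)  # mix datasets within the epoch
    return epoch
```

If this reading is right, the split heavily favors YoutubeBB and GOT10k over LaSOT, which is why I am curious about the rationale.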