I saw in the README.md that you recommend continuing training from a checkpoint to do fine-tuning.
For a completely different domain (probably with a fraction of the data, 50-100 hours, compared to the 3000h used for pre-training), has anyone tried different fine-tuning strategies:
(1) Adapters?
(2) Freezing certain layers, etc.?
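For context on (2), here is a minimal sketch of what I mean by layer freezing. The `nn.Sequential` model is just a hypothetical stand-in, since the actual module names depend on this repo's architecture; the idea is to set `requires_grad = False` everywhere and re-enable it only for the head before fine-tuning:

```python
import torch.nn as nn

# Hypothetical stand-in for a pre-trained model; real checkpoint and
# module names would come from this repo's architecture.
model = nn.Sequential(
    nn.Linear(64, 64),   # pre-trained "frontend" layers to freeze
    nn.ReLU(),
    nn.Linear(64, 10),   # task head to keep trainable
)

# Freeze every parameter, then unfreeze only the final layer.
for p in model.parameters():
    p.requires_grad = False
for p in model[-1].parameters():
    p.requires_grad = True

# Only the head's weight and bias should remain trainable.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]
print(trainable)  # → ['2.weight', '2.bias']
```

The optimizer would then be built over `filter(lambda p: p.requires_grad, model.parameters())` so frozen weights are never updated.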
cc: @mpc001