musicgen-finetune

Training pipeline for musicgen using a simple dataset format.

How to use:

  • Follow the instructions in the notebook to load dependencies (assumes an environment with Jupyter set up).
  • Training requires a GPU with at least 8 GB of VRAM (preferably more than 16 GB); inference requires at least 4 GB.
  • In Google Colab: set input_folder_name to a path in your mounted Google Drive (e.g. /content/...).
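Before launching a training run, it can help to sanity-check that input_folder_name actually points at a folder of audio files. A minimal sketch (the helper name and extension list are assumptions, not part of the notebook):

```python
from pathlib import Path

def find_audio_files(input_folder_name: str, extensions=(".wav", ".mp3")) -> list:
    """Return the audio files in the dataset folder, or raise if the path is wrong.

    Catching a bad path here is cheaper than failing mid-run, especially on
    Colab where the Drive mount point is easy to mistype.
    """
    folder = Path(input_folder_name)
    if not folder.is_dir():
        raise FileNotFoundError(f"Dataset folder not found: {folder}")
    # Keep only files whose extension matches; sort for reproducible ordering.
    return sorted(p for p in folder.iterdir() if p.suffix.lower() in extensions)
```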

NOTE:

  • Longer outputs take progressively more time to generate at inference, depending on your hardware.
  • The current model is prone to producing silent sections in the output; this is heavily influenced by the finetuning dataset (e.g. if your clips are taken from the starts and ends of songs, which often contain silence).
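Since the model inherits silence from its training data, trimming quiet starts and ends of the dataset clips before finetuning can help. A minimal sketch on raw PCM sample values (the threshold and function name are assumptions; a real pipeline would operate on decoded audio arrays):

```python
def trim_silence(samples, threshold=500):
    """Drop leading and trailing samples whose absolute amplitude is below threshold.

    `samples` is a sequence of signed integer PCM values (e.g. 16-bit audio).
    Only the quiet head and tail are removed; quiet passages in the middle
    of a clip are kept.
    """
    start = 0
    end = len(samples)
    while start < end and abs(samples[start]) < threshold:
        start += 1
    while end > start and abs(samples[end - 1]) < threshold:
        end -= 1
    return samples[start:end]
```

Usage: `trim_silence([0, 3, 1200, -900, 5, 0])` keeps only the loud middle, returning `[1200, -900]`.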

TODO

  • Support stereo finetuning.
  • Optimize training, e.g. with LoRA.

Credits
