Check the source code; the scripts below are just simple wrappers around the original command line interface.
convert.ps1: converts the ckpt format to the diffusers format.
train.ps1: trains the model. Edit this file to change parameters. See the DreamBooth training example for details.
back.ps1: converts the diffusers format back to the ckpt format. The resulting ckpt is half precision and only takes 2.4 GB.
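As a quick sanity check after running convert.ps1 (and a rough illustration of why the half-precision ckpt from back.ps1 only takes about 2.4 GB), you can load the converted folder with diffusers directly. This is just a sketch, not part of the wrappers; the paths are placeholders, the real ones are set inside the .ps1 scripts.

```python
# Optional sanity check, not part of the wrappers: load the diffusers folder that
# convert.ps1 produced and re-save it in half precision. Half-precision weights are
# roughly why the ckpt from back.ps1 stays around 2.4 GB. Paths are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "path/to/diffusers-model", torch_dtype=torch.float16
)
pipe.save_pretrained("path/to/diffusers-model-fp16")
```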
Also check the source code of train_dreambooth.py for details.
For now I'll just copy and paste the description from the original colab.
See base_config.yaml and dreambooth.yaml for the full list of options.
See base_config.yaml; I'm using native training by default. I assume you know the difference between native training and DreamBooth. If not, read the FAQ.
Use --config in train.ps1 to specify the config file, and use gen_config.py to generate it. Modify the generated config if you like, but the defaults suit my needs.
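If you want to review exactly what gen_config.py produced before passing it to train.ps1, the sketch below dumps the file with plain PyYAML. This is not the repo's own loading code, and "my_config.yaml" is a placeholder file name.

```python
# Dump a generated config so you can review every option before training.
# Not the repo's own loader; "my_config.yaml" is a placeholder file name.
import yaml  # pip install pyyaml

with open("my_config.yaml", "r", encoding="utf-8") as f:
    config = yaml.safe_load(f)

for key, value in sorted(config.items()):
    print(f"{key}: {value}")
```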
You can use WandB (Weights & Biases) to monitor your training process.
```bash
pip install wandb
wandb login
# input your wandb API token
```
You can then view sample images in the WandB dashboard.
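The training code handles the actual logging once you're logged in. Purely as an illustration of how sample images typically reach the dashboard (the project name, image path, and step value below are hypothetical, not taken from this repo):

```python
# Illustrative only: how sample images are typically logged to Weights & Biases.
# The trainer does this for you; names and paths below are hypothetical.
import wandb
from PIL import Image

run = wandb.init(project="dreambooth-example")   # hypothetical project name
sample = Image.open("samples/step_1000_0.png")   # hypothetical sample image
wandb.log({"samples": [wandb.Image(sample, caption="step 1000")]}, step=1000)
run.finish()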
See gen_config.py
See also NovelAI/novelai-aspect-ratio-bucketing.
BucketManager implements NovelAI aspect ratio bucketing, which may greatly improve the quality of outputs according to NovelAI's blog.
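BucketManager itself also handles batching, shuffling, and related bookkeeping; the sketch below is only a simplified illustration of the core idea (enumerate resolutions within a pixel budget, then assign each image to the bucket with the closest aspect ratio). The numbers and function names are mine, not the repo's; see the linked NovelAI repository for the real implementation.

```python
# Simplified illustration of aspect ratio bucketing (not the actual BucketManager).
import math

def build_buckets(max_pixels=512 * 768, step=64, min_side=256, max_side=1024):
    """Enumerate (width, height) pairs on a 64 px grid that fit the pixel budget."""
    buckets = set()
    w = min_side
    while w <= max_side:
        h = min(max_side, (max_pixels // w) // step * step)
        if h >= min_side:
            buckets.add((w, h))
            buckets.add((h, w))  # portrait counterpart
        w += step
    return sorted(buckets)

def assign_bucket(width, height, buckets):
    """Pick the bucket whose aspect ratio is closest (in log space) to the image's."""
    log_ar = math.log(width / height)
    return min(buckets, key=lambda b: abs(math.log(b[0] / b[1]) - log_ar))

if __name__ == "__main__":
    buckets = build_buckets()
    # A 1920x1080 image lands in a wide landscape bucket: (832, 448) with the defaults above.
    print(assign_bucket(1920, 1080, buckets))
```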
train_text_encoder is weird; check the FAQ for details.
DreamBooth without prior preservation loss and with variable instance prompts enabled (Read Prompt TXT) is effectively direct fine-tuning.