
[FP16 training] What's the function of patch_accelerator_for_fp16_training #64

Open
Luciennnnnnn opened this issue Sep 20, 2024 · 1 comment


@Luciennnnnnn

Hi, I noticed that you hard-code allow_fp16 as True. What is the motivation for this, and what is the potential flaw if we do not do it?

@Eugeoter
Collaborator

I copied this from here: https://github.com/kohya-ss/sd-scripts/blob/main/library/train_util.py (line 4404).
When mixed_precision='fp16' is enabled, the accelerator's optimizer does not allow unscaling fp16 gradients by default and will raise an error.
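
For context, a minimal sketch of what such a patch typically looks like, based on the referenced train_util.py (the attribute and argument names here are assumptions, not verified against the repository): it wraps the GradScaler method that unscales gradients so that allow_fp16 is always passed as True, avoiding the "Attempting to unscale FP16 gradients." error that torch's GradScaler otherwise raises when model parameters are stored in fp16.

```python
def patch_accelerator_for_fp16_training(accelerator):
    # Sketch: assumes accelerate exposes its torch GradScaler as `accelerator.scaler`.
    org_unscale_grads = accelerator.scaler._unscale_grads_

    def _unscale_grads_replacer(optimizer, inv_scale, found_inf, allow_fp16):
        # Force allow_fp16=True so unscaling gradients of fp16 parameters
        # no longer raises "Attempting to unscale FP16 gradients."
        return org_unscale_grads(optimizer, inv_scale, found_inf, True)

    accelerator.scaler._unscale_grads_ = _unscale_grads_replacer
```

The trade-off is presumably numerical: unscaling gradients that are themselves fp16 can overflow or lose precision, which is why GradScaler blocks it by default; the patch simply opts out of that safeguard so full-fp16 training can proceed.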
