[Bug]: too many tokens in negative causes weird behavior #4
Comments
Same situation here: most of the time the images cannot stay consistent. Still, many thanks to the author for making this extension.
The problem occurs if there are more than 75 tokens in the prompt or in the negative.
Thank you so much for creating this extension! I can also confirm that 75 tokens is the limit for positive/negative prompts before the frames get split in the middle.
Interesting. I will use your example to test on my side, and will look into how A1111 implemented infinite prompts.
Same problem here; temporarily resolved by using 75 or fewer tokens for both the positive and negative prompts.
It seems like the latest update (08a4086) completely broke generation: now even with fewer than 75 tokens the animation gets broken up into two parts.
In my case, using fewer than 75 tokens did not help before either.
So I messed around in the code and found that changing a value from 2 to 1 helps in my generations. It is an easy change to test, so anyone is free to give feedback.
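For anyone digging into this, here is a minimal sketch of the general idea behind A1111-style "infinite prompts" as I understand it (an illustration under assumptions, not the actual webui code): the prompt is tokenized, split into 75-token chunks, each chunk is wrapped with BOS/EOS to 77 tokens and encoded by CLIP separately, and the per-chunk embeddings are concatenated. If AnimateDiff only handles the first 77-token block of that longer conditioning, prompts over 75 tokens would behave differently, which might explain the split frames.

```python
# Illustrative sketch of 75-token prompt chunking (NOT the actual webui code).
# Assumes the SD 1.x CLIP tokenizer from Hugging Face transformers.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def split_into_chunks(prompt: str, chunk_size: int = 75) -> list[list[int]]:
    # Tokenize without special tokens, then cut into 75-token chunks.
    # Each chunk would later get BOS/EOS added (75 + 2 = 77 tokens) and be
    # encoded separately, so the conditioning grows to a multiple of 77 tokens.
    ids = tokenizer(prompt, add_special_tokens=False)["input_ids"]
    return [ids[i:i + chunk_size] for i in range(0, len(ids), chunk_size)]

# A long prompt splits into more than one chunk:
long_prompt = ", ".join(["masterpiece", "best quality", "highly detailed"] * 20)
chunks = split_into_chunks(long_prompt)
print(f"{sum(len(c) for c in chunks)} tokens -> {len(chunks)} chunk(s)")
```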
Is there an existing issue for this?
Have you read the FAQ in the README?
What happened?
With fewer than 75 tokens, behavior seems WAI (working as intended)
If we double the negative prompt, it starts to produce two sets of images
The behavior holds at batch 24 (12 frames of one scene, 12 of another), even with only slightly over 75 tokens in the negative
Going down to batch 14, we start to see one half not following the prompt well
This deteriorates further at batch 12
SD starts to collapse at batch 10
Going down to 73 tokens in the negative, we recover the expected behavior
Alternatively, switching the scheduler to DDIM with 77 tokens in the negative seems more resistant to collapse, but something is still wrong (noisier, more washed-out color than before)
Also of note: with 73 tokens in the negative, batch 15 works fine
But with 77 tokens it throws an error
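As a side note for reproduction, here is a rough way to check whether a given prompt crosses the 75-token boundary outside of the UI. This assumes the SD 1.x CLIP tokenizer and treats the prompt as plain text; the webui's built-in token counter is authoritative and may differ slightly around emphasis syntax, and the example negative below is just a placeholder, not the one from this report.

```python
# Rough CLIP token counter for checking prompts against the 75-token limit.
# Assumes the SD 1.x CLIP tokenizer; treats the prompt as plain text.
from transformers import CLIPTokenizer

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-large-patch14")

def clip_token_count(prompt: str) -> int:
    # Count tokens without the BOS/EOS pair that wraps each 75-token chunk.
    return len(tokenizer(prompt, add_special_tokens=False)["input_ids"])

# Placeholder negative prompt, not the one from the report.
negative = "lowres, bad anatomy, bad hands, text, error, missing fingers"
n = clip_token_count(negative)
print(f"{n} tokens -> {'over' if n > 75 else 'within'} the 75-token limit")
```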
Steps to reproduce the problem
See attached screenshots
What should have happened?
It should apply consistent inputs to all frames
Commit where the problem happens
webui:
version: v1.4.1 • python: 3.10.6 • torch: 2.0.1+cu118 • xformers: N/A • gradio: 3.32.0 • checkpoint: e9a14f558d
extension:
sd-webui-animatediff https://github.com/continue-revolution/sd-webui-animatediff master [e8c88a4]
What browsers do you use to access the UI?
No response
Command Line Arguments
Console logs
Additional information
No response