[Bug]: AnimateDiff breaks the whole Auto1111 whenever I try to generate #552
Comments
So today I did some more testing and got it working again. It seems the problem is actually the length of the prompt: within 75 tokens it generates the GIF properly, but beyond that it gives me a CUDA error. I am trying to make 512x512 images, and within 75 tokens it works very well. I have a 4070 Super with 12 GB VRAM. Could this be the limitation?
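For context on the 75-token boundary: the webui tokenizes the prompt with CLIP and splits anything longer than 75 tokens into fixed-size chunks, padding the last one. The sketch below is purely illustrative (it is not the webui's actual code, and `PAD_ID` is a placeholder), but it shows why a prompt just over 75 tokens suddenly produces a second, differently-filled chunk:

```python
# Sketch (not the actual webui code): a prompt longer than 75 tokens
# is split into fixed-size chunks, each padded up to the chunk length.
CHUNK_SIZE = 75  # usable CLIP context per chunk in the webui
PAD_ID = 0       # placeholder pad token id for this illustration

def chunk_tokens(token_ids, chunk_size=CHUNK_SIZE, pad_id=PAD_ID):
    """Split token ids into chunks of `chunk_size`, padding the last chunk."""
    chunks = []
    for start in range(0, len(token_ids), chunk_size):
        chunk = token_ids[start:start + chunk_size]
        chunk += [pad_id] * (chunk_size - len(chunk))  # pad to full size
        chunks.append(chunk)
    return chunks

short = chunk_tokens(list(range(60)))  # 60 tokens -> 1 chunk
long = chunk_tokens(list(range(90)))   # 90 tokens -> 2 chunks
print(len(short), len(long))  # 1 2
```

So staying under 75 tokens keeps everything in a single chunk, which matches the behavior described above.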
Pad prompt/negative prompt to be same length!
Thanks for the reply, but after checking this solution my issue may be a bit different, because for me the generation stops immediately and Auto1111 stops working entirely.
Super, thank you very much! I tried it today anyway and it fixed the problem. One question: does this option affect normal generations? It says it improves performance, but at what cost? Thanks!
'Pad prompt/negative prompt to be same length' does not have any negative costs. 'Pad prompt/negative prompt to be same length' is required for 'Batch cond/uncond' to work. So either enable both (recommended), or disable both (to save a negligible amount of VRAM and lose performance). In your case 'Batch cond/uncond' was enabled and 'Pad prompt/negative prompt to be same length' was disabled.
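The reason the two options are coupled: batching the positive and negative conditioning stacks them into one tensor, which is only possible when both sequences have the same length. The padding option guarantees that. A minimal sketch of the idea, using plain Python lists rather than the webui's actual tensor code (both helper names are made up for illustration):

```python
# Illustrative sketch (not the webui's real code): cond and uncond can
# only be batched together when their sequence lengths match.

def pad_to_same_length(cond, uncond, pad_id=0):
    """Pad the shorter token sequence so both have equal length."""
    target = max(len(cond), len(uncond))
    cond = cond + [pad_id] * (target - len(cond))
    uncond = uncond + [pad_id] * (target - len(uncond))
    return cond, uncond

def batch_cond_uncond(cond, uncond):
    """Stack cond and uncond into one batch; lengths must match."""
    if len(cond) != len(uncond):
        raise ValueError("cannot batch: sequence lengths differ")
    return [cond, uncond]

# A long prompt (150 tokens) vs. a short negative prompt (75 tokens):
cond, uncond = pad_to_same_length(list(range(150)), list(range(75)))
batched = batch_cond_uncond(cond, uncond)
print(len(batched), len(batched[0]), len(batched[1]))  # 2 150 150
```

With padding disabled but batching enabled, the mismatched lengths are exactly the kind of shape inconsistency that can surface as a device-side assert once the prompt crosses the 75-token boundary.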
Is there an existing issue for this?
Have you read FAQ on README?
What happened?
Whenever I hit generate it gives me this error:
```
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
```

Auto1111 then has to be closed and started again.
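As the error message suggests, setting `CUDA_LAUNCH_BLOCKING=1` makes CUDA kernels run synchronously, so the assert is reported at the call that actually triggered it and the stack trace becomes meaningful. A sketch of how to set it before launching (the `./webui.sh` launcher is assumed from a standard Auto1111 install; adjust for your setup, e.g. `webui-user.bat` on Windows):

```shell
# Make CUDA report errors synchronously so the stack trace points at
# the real failing call instead of a later API call.
export CUDA_LAUNCH_BLOCKING=1
# then start the webui as usual, e.g.:
# ./webui.sh
echo "CUDA_LAUNCH_BLOCKING=$CUDA_LAUNCH_BLOCKING"
```

This slows generation down, so only use it while debugging and unset it afterwards.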
What I don't really understand is why the first few times I tried this extension it worked and I could generate some cool GIFs; after a few tries it started to give me this error, and now any time I try, whatever setting/model/prompt I use, it gives me this error constantly. I am not using other extensions together with this one, and this is the version I am using:
version: v1.10.1 • python: 3.10.6 • torch: 2.1.2+cu121 • xformers: N/A • gradio: 3.41.2 • checkpoint: 7eb674963a
Steps to reproduce the problem
What should have happened?
It should have started generating the GIF.
Commit where the problem happens
webui: Auto1111
extension: AnimateDiff
What browsers do you use to access the UI ?
No response
Command Line Arguments
Console logs
Additional information
I tried to find a solution in other posts but can't seem to find any. Only one other thread here suggested deleting the venv folder and rebuilding it, but it wasn't very clear. Any help would be greatly appreciated! Thanks in advance.