
[Bug]: AnimateDiff breaks the whole Auto1111 whenever I try to generate #552

Closed · JabuJabu-03 opened this issue Sep 4, 2024 · 6 comments

@JabuJabu-03
Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read the FAQ in the README?

  • I have updated WebUI and this extension to the latest version

What happened?

Whenever I hit generate it gives me this error:
torch._C._cuda_emptyCache()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.

And Auto1111 just needs to be closed and started again.
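As the error text itself notes, device-side asserts are reported asynchronously, so the traceback can blame an unrelated later call such as torch.cuda.empty_cache(). A minimal sketch of how one could force synchronous launches to localize the failing kernel; the import-order setup below is an assumption for illustration, not something from this report:

```python
# Hypothetical debugging setup, not part of the original report.
# CUDA_LAUNCH_BLOCKING must be set before torch initializes CUDA,
# so set it before importing torch (or export it in the shell that
# launches the webui).
import os
os.environ["CUDA_LAUNCH_BLOCKING"] = "1"

import torch  # imported only after the variable is set

# With blocking launches, a device-side assert surfaces at the exact
# kernel call that failed instead of at a later API call such as
# torch.cuda.empty_cache().
```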

What I don't really understand is why this extension worked the first few times I tried it, and I could generate some cool GIFs, but after a few tries it started giving me this error. Now, whatever setting/model/prompt I use, it fails constantly. I am not using any other extensions together with this one, and this is the version I am using:

version: v1.10.1  •  python: 3.10.6  •  torch: 2.1.2+cu121  •  xformers: N/A  •  gradio: 3.41.2  •  checkpoint: 7eb674963a

Steps to reproduce the problem

  1. Go to Auto1111
  2. Press Generate after inserting model/prompt/etc.
  3. Automatic stops working completely.

What should have happened?

It should have started generating the GIF.

Commit where the problem happens

webui: Auto1111
extension: Animatediff

What browsers do you use to access the UI?

No response

Command Line Arguments

None

Console logs

Traceback (most recent call last):
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\call_queue.py", line 74, in f
        res = list(func(*args, **kwargs))
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\call_queue.py", line 53, in f
        res = func(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\call_queue.py", line 37, in f
        res = func(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\txt2img.py", line 109, in txt2img
        processed = processing.process_images(p)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\processing.py", line 847, in process_images
        res = process_images_inner(p)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\processing.py", line 988, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\processing.py", line 1346, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\sd_samplers_kdiffusion.py", line 230, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\sd_samplers_common.py", line 272, in launch_sampling
        return func()
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\sd_samplers_kdiffusion.py", line 230, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\repositories\k-diffusion\k_diffusion\sampling.py", line 594, in sample_dpmpp_2m
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\sd_samplers_cfg_denoiser.py", line 268, in forward
        x_out[a:b] = self.inner_model(x_in[a:b], sigma_in[a:b], cond=make_condition_dict(c_crossattn, image_cond_in[a:b]))
      File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
        return self._call_impl(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\torch\nn\modules\module.py", line 1527, in _call_impl
        return forward_call(*args, **kwargs)
      File "C:\Users\franc\Desktop\sduiwebapp\webui\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 164, in mm_sd_forward
        x_in[_context], sigma_in[_context],
    RuntimeError: CUDA error: device-side assert triggered
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.


---
Traceback (most recent call last):
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\call_queue.py", line 91, in f
    devices.torch_gc()
  File "C:\Users\franc\Desktop\sduiwebapp\webui\modules\devices.py", line 81, in torch_gc
    torch.cuda.empty_cache()
  File "C:\Users\franc\Desktop\sduiwebapp\system\python\lib\site-packages\torch\cuda\memory.py", line 159, in empty_cache
    torch._C._cuda_emptyCache()
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.

Additional information

I tried to find a solution in other posts, but I can't seem to find any. The only other thread here suggested deleting the venv folder and rebuilding it, but it wasn't very clear. Any help would be greatly appreciated! Thanks in advance.

@JabuJabu-03 (Author)

So today I did some more testing and got it working again. It seems the problem is actually the length of the prompt: within 75 tokens it generates the GIF properly; beyond that it gives me the CUDA error. I am trying to make 512x512 images, and within 75 tokens it works very well. I have a 4070 Super with 12 GB VRAM. Could this be the limitation?
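The 75-token boundary is explained by how webui encodes prompts rather than by VRAM: CLIP conditioning is built in 77-token chunks (75 prompt tokens plus begin/end markers), so a positive prompt over 75 tokens yields more chunks than a short negative prompt. A minimal sketch of the resulting shape mismatch, with hypothetical tensor sizes and a simplified stand-in for webui's actual padding logic:

```python
import torch

# Hypothetical illustration of the 75-token boundary; not webui code.
cond = torch.randn(1, 154, 768)   # positive prompt > 75 tokens -> 2 chunks
uncond = torch.randn(1, 77, 768)  # negative prompt <= 75 tokens -> 1 chunk

try:
    torch.cat([cond, uncond], dim=0)  # batching cond/uncond in one pass
except RuntimeError as err:
    print(err)  # sizes must match except in the concat dimension

# 'Pad prompt/negative prompt to be same length' extends the shorter
# conditioning so both sides have the same number of chunks (simplified
# here as a plain repeat):
uncond_padded = uncond.repeat(1, 2, 1)             # (1, 154, 768)
batched = torch.cat([cond, uncond_padded], dim=0)  # (2, 154, 768) -> works
print(batched.shape)
```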

@ansstuff commented Sep 5, 2024

Pad prompt/negative prompt to be same length!

#83

@JabuJabu-03 (Author)

Thanks for the reply, but checking that solution, my issue seems a bit different: the problem for me is that the generation stops immediately and Auto1111 stops working entirely.

@JabuJabu-03 (Author)

Thank you very much! I tried it anyway today and it fixed the problem. One question: does this option affect normal generations? It says it improves performance, but at what cost? Thanks!

@ansstuff commented Sep 9, 2024

'Pad prompt/negative prompt to be same length' does not have any negative cost.
'Batch cond/uncond' has a minor VRAM cost.

'Pad prompt/negative prompt to be same length' is required for 'Batch cond/uncond' to work.

So either enable both (recommended), or disable both (to save a negligible amount of VRAM and lose performance).

In your case 'Batch cond/uncond' was enabled and 'Pad prompt/negative prompt to be same length' was disabled.
Thus AnimateDiff caused a CUDA assertion every time your positive prompt exceeded 75 tokens while the negative prompt was still at 75.
WebUI doesn't handle CUDA errors, so it just stops working until you manually restart it.
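A minimal sketch of the trade-off described above, with hypothetical helper functions rather than the actual webui denoiser: batching cond and uncond into one forward pass halves the number of model calls at the cost of a slightly larger batch in VRAM, and it only works when both conditioning tensors share a shape, which is exactly what the padding option guarantees:

```python
import torch

def denoise_batched(model, x, sigma, cond, uncond):
    # 'Batch cond/uncond' enabled: one forward pass over both halves.
    # Requires cond and uncond to have identical shapes.
    x_in = torch.cat([x, x])
    sigma_in = torch.cat([sigma, sigma])
    c_in = torch.cat([cond, uncond])
    return model(x_in, sigma_in, c_in).chunk(2)

def denoise_sequential(model, x, sigma, cond, uncond):
    # 'Batch cond/uncond' disabled: two forward passes. Slower, slightly
    # less VRAM, and no constraint on the conditioning shapes.
    return model(x, sigma, cond), model(x, sigma, uncond)

# Toy usage with a stand-in "model" that just echoes its latent input:
model = lambda x, s, c: x
x = torch.randn(1, 4, 64, 64)
sigma = torch.ones(1)
cond = torch.randn(1, 154, 768)
uncond = torch.randn(1, 154, 768)  # already padded to match cond
out_cond, out_uncond = denoise_batched(model, x, sigma, cond, uncond)
```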

@JabuJabu-03 (Author)

> In your case 'Batch cond/uncond' was enabled and 'Pad prompt/negative prompt to be same length' was disabled.

In fact, yeah, I only had to enable 'Pad prompt/negative prompt to be same length'. I will just keep them both active for normal generations too, then. So cool, thank you again; I was really struggling to find a solution!
