[Bug]: Error on the first time running animatediff #256

Closed
2 tasks done
Elekez opened this issue Oct 27, 2023 · 1 comment

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits of both this extension and the webui

Have you read FAQ on README?

  • I have updated WebUI and this extension to the latest version

What happened?

After installing AnimateDiff and downloading the model, I tried to run it but got errors.

Steps to reproduce the problem

  1. Go to txt2img and enter a prompt
  2. Press Generate
  3. ...

What should have happened?

It should have generated a GIF.

Commit where the problem happens

webui: A1111 1.6
extension: newest

What browsers do you use to access the UI?

No response

Command Line Arguments

--xformers --upcast-sampling --no-hashing --always-batch-cond-uncond --medvram --precision full --no-half
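For context, on a Windows A1111 install these flags are normally set in `webui-user.bat`. A minimal sketch under that assumption (the flag set is taken from this report, using the standard double-dash `--xformers` form):

```shell
:: webui-user.bat (sketch): where A1111 reads its launch flags on Windows
@echo off

set PYTHON=
set GIT=
set VENV_DIR=

:: Flag set as reported in this issue
set COMMANDLINE_ARGS=--xformers --upcast-sampling --no-hashing --always-batch-cond-uncond --medvram --precision full --no-half

call webui.bat
```

Note that `--precision full --no-half` forces fp32 while `--upcast-sampling` targets fp16 upcasting, so this combination does more than strictly necessary on most GPUs.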

Console logs

*** Error completing request
*** Arguments: ('task(m56nl9t188jur0f)', '', '', [], 20, 'Euler a', 1, 1, 7, 512, 512, False, 0.7, 2, 'Latent', 0, 0, 0, 'Use same checkpoint', 'Use same sampler', '', '', [], <gradio.routes.Request object at 0x000001D56FEF4970>, 0, False, '', 0.8, -1, False, -1, 0, 0, 0, <scripts.animatediff_ui.AnimateDiffProcess object at 0x000001D571065210>, False, False, 'positive', 'comma', 0, False, False, '', 1, '', [], 0, '', [], 0, '', [], True, False, False, False, 0, False) {}
    Traceback (most recent call last):
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\call_queue.py", line 57, in f
        res = list(func(*args, **kwargs))
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\call_queue.py", line 36, in f
        res = func(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\txt2img.py", line 55, in txt2img
        processed = processing.process_images(p)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\processing.py", line 732, in process_images
        res = process_images_inner(p)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\processing.py", line 867, in process_images_inner
        samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\processing.py", line 1140, in sample
        samples = self.sampler.sample(self, x, conditioning, unconditional_conditioning, image_conditioning=self.txt2img_image_conditioning(x))
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\sd_samplers_kdiffusion.py", line 235, in sample
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\sd_samplers_common.py", line 261, in launch_sampling
        return func()
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\sd_samplers_kdiffusion.py", line 235, in <lambda>
        samples = self.launch_sampling(steps, lambda: self.func(self.model_wrap_cfg, x, extra_args=self.sampler_extra_args, disable=False, callback=self.callback_state, **extra_params_kwargs))
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\utils\_contextlib.py", line 115, in decorate_context
        return func(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\k-diffusion\k_diffusion\sampling.py", line 145, in sample_euler_ancestral
        denoised = model(x, sigmas[i] * s_in, **extra_args)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 274, in mm_cfg_forward
        x_out = mm_sd_forward(self, x_in, sigma_in, cond_in, image_cond_in, make_condition_dict) # hook
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\scripts\animatediff_infv2v.py", line 188, in mm_sd_forward
        out = self.inner_model(x_in[_context], sigma_in[_context], cond=make_condition_dict(cond_in[_context], image_cond_in[_context]))
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\k-diffusion\k_diffusion\external.py", line 112, in forward
        eps = self.get_eps(input * c_in, self.sigma_to_t(sigma), **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\k-diffusion\k_diffusion\external.py", line 138, in get_eps
        return self.inner_model.apply_model(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\sd_hijack_utils.py", line 17, in <lambda>
        setattr(resolved_obj, func_path[-1], lambda *args, **kwargs: self(*args, **kwargs))
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\sd_hijack_utils.py", line 28, in __call__
        return self.__orig_func(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 858, in apply_model
        x_recon = self.model(x_noisy, t, **cond)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\stable-diffusion-stability-ai\ldm\models\diffusion\ddpm.py", line 1335, in forward
        out = self.diffusion_model(x, t, context=cc)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1538, in _call_impl
        result = forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\modules\sd_unet.py", line 91, in UNetModel_forward
        return ldm.modules.diffusionmodules.openaimodel.copy_of_UNetModel_forward_for_webui(self, x, timesteps, context, *args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 797, in forward
        h = module(h, emb, context)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\repositories\stable-diffusion-stability-ai\ldm\modules\diffusionmodules\openaimodel.py", line 86, in forward
        x = layer(x)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\motion_module.py", line 86, in forward
        return self.temporal_transformer(input_tensor, encoder_hidden_states, attention_mask)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\motion_module.py", line 150, in forward
        hidden_states = block(hidden_states, encoder_hidden_states=encoder_hidden_states, video_length=video_length)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\motion_module.py", line 212, in forward
        hidden_states = attention_block(
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
        return forward_call(*args, **kwargs)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\motion_module.py", line 567, in forward
        hidden_states = self._memory_efficient_attention(query, key, value, attention_mask, optimizer_name)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\extensions\sd-webui-animatediff\motion_module.py", line 467, in _memory_efficient_attention
        hidden_states = xformers.ops.memory_efficient_attention(
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 192, in memory_efficient_attention
        return _memory_efficient_attention(
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 290, in _memory_efficient_attention
        return _memory_efficient_attention_forward(
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\xformers\ops\fmha\__init__.py", line 310, in _memory_efficient_attention_forward
        out, *_ = op.apply(inp, needs_gradient=False)
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\xformers\ops\fmha\cutlass.py", line 175, in apply
        out, lse, rng_seed, rng_offset = cls.OPERATOR(
      File "E:\pinokio\api\sd-webui.pinokio.git\automatic1111\venv\lib\site-packages\torch\_ops.py", line 502, in __call__
        return self._op(*args, **kwargs or {})
    RuntimeError: CUDA error: invalid configuration argument
    CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
    For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
    Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
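As the error text itself notes, CUDA kernel errors are reported asynchronously, so this stack trace may not point at the call that actually failed. Re-running with `CUDA_LAUNCH_BLOCKING=1` forces synchronous kernel launches and yields a more accurate traceback. A minimal sketch for a Windows console (assumes the standard `webui-user.bat` launcher):

```shell
:: Make CUDA report kernel errors at the launching call instead of a later API call
set CUDA_LAUNCH_BLOCKING=1
call webui-user.bat
```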

Additional information

No response

@continue-revolution
Owner

#204
