Extracting Lora from 2 models doesn't work #416

Open
CasanovaSan opened this issue Nov 1, 2024 · 0 comments
I tried to extract a LoRA from a Pony merge using the LoRA extraction feature in sd-webui-supermerger, and got the following error:

Calculating sha256 for C:\AI\StableDif\Packages\Stable Diffusion WebUI\models\Stable-diffusion\sd\CassyCartoonV4.4.fp16.safetensors: 1c3578a90aa563a9ee0f0607ab52e19847e2aa03753f30596f08c7530f6c4423
Loading weights [1c3578a90a] from C:\AI\StableDif\Packages\Stable Diffusion WebUI\models\Stable-diffusion\sd\CassyCartoonV4.4.fp16.safetensors
Creating model from config: C:\AI\StableDif\Packages\Stable Diffusion WebUI\repositories\generative-models\configs\inference\sd_xl_base.yaml
Applying attention optimization: xformers... done.
Traceback (most recent call last):
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\routes.py", line 488, in run_predict
    output = await app.get_blocks().process_api(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\blocks.py", line 1431, in process_api
    result = await self.call_function(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\blocks.py", line 1103, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\gradio\utils.py", line 707, in wrapper
    response = f(*args, **kwargs)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 292, in makelora
    load_model(checkpoint_info)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\extensions\sd-webui-supermerger\scripts\mergers\pluslora.py", line 1570, in load_model
    sd_models.load_model(checkpoint_info)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\sd_models.py", line 869, in load_model
    sd_model.cond_stage_model_empty_prompt = get_empty_cond(sd_model)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\sd_models.py", line 728, in get_empty_cond
    d = sd_model.get_learned_conditioning([""])
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\sd_models_xl.py", line 32, in get_learned_conditioning
    c = self.conditioner(sdxl_conds, force_zero_embeddings=['txt'] if force_zero_negative_prompt else [])
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1518, in _wrapped_call_impl
    return self._call_impl(*args, **kwargs)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1557, in _call_impl
    args_result = hook(self, args)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\modules\lowvram.py", line 55, in send_me_to_gpu
    module_in_gpu.to(cpu)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1160, in to
    return self._apply(convert)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 810, in _apply
    module._apply(fn)
  [Previous line repeated 5 more times]
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 833, in _apply
    param_applied = fn(param)
  File "C:\AI\StableDif\Packages\Stable Diffusion WebUI\venv\lib\site-packages\torch\nn\modules\module.py", line 1158, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
NotImplementedError: Cannot copy out of meta tensor; no data!
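
The final NotImplementedError is a generic PyTorch failure: .to() cannot move a module whose parameters are meta tensors, i.e. tensors that carry shape and dtype but no underlying data (the low-VRAM hook in modules/lowvram.py hits this when it tries to move such a module back to the CPU). A minimal sketch that reproduces the same error outside the WebUI; the layer and its sizes here are made up for illustration:

import torch
import torch.nn as nn

# Parameters created on the "meta" device have shape and dtype only,
# with no storage behind them.
layer = nn.Linear(4, 4, device="meta")

try:
    # Same call pattern as module_in_gpu.to(cpu) in modules/lowvram.py.
    layer.to("cpu")
except NotImplementedError as err:
    print(err)  # "Cannot copy out of meta tensor; no data! ..."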

These are the settings I used; did I do something wrong?
(screenshot of the LoRA extraction settings)
