Substituting empty model_c with model_a
model A : 1
model B : 2
model C : 3
alpha,beta : (1.0, 0.25)
weights_alpha : [1.0, 1.0, 1.0, 0.5, 0.25, 0.5, 0.75, 0.75, 0.75, 0.5, 0.25, 0.25, 0.75, 0.75, 0.25, 0.5, 1.0, 1.0, 1.0]
weights_beta : 0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5,0.5
mode : Weight sum
MBW : True
CalcMode : normal
Elemental :
Weights Seed : 52437277
Off : ([], '')
Adjust :
Loading weights [2] from file
Loading weights [1] from file
Loading Model: {'checkpoint_info': {'filename': 'E:\\The_box\\ai\\forgeui\\sd-webui-forge-aki-v1.0\\models\\Stable-diffusion\\1illXL\\4.fp16.safetensors', 'hash': '053491f2'}, 'additional_modules': [], 'unet_storage_dtype': None}
StateDict Keys: {'unet': 1680, 'vae': 248, 'text_encoder': 197, 'text_encoder_2': 518, 'ignore': 0}
Working with z of shape (1, 4, 32, 32) = 4096 dimensions.
K-Model Created: {'storage_dtype': torch.float16, 'computation_dtype': torch.float16}
Model loaded in 1.5s (forge model load: 1.5s).
[Unload] Trying to free all memory for cuda:0 with 0 models keep loaded ... Done.
[Unload] Trying to free 3051.58 MB for cuda:0 with 0 models keep loaded ... Done.
[Memory Management] Target: JointTextEncoder, Free GPU: 23251.00 MB, Model Require: 1559.68 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 20667.32 MB, All loaded to GPU.
Moving model(s) has taken 0.38 seconds
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 21478.68 MB ... Done.
Skipping unconditional conditioning (HR pass) when CFG = 1. Negative Prompts are ignored.
[Unload] Trying to free 1024.00 MB for cuda:0 with 1 models keep loaded ... Current free memory is 21478.06 MB ... Done.
[Unload] Trying to free 7656.40 MB for cuda:0 with 0 models keep loaded ... Current free memory is 21477.21 MB ... Done.
[Memory Management] Target: KModel, Free GPU: 21477.21 MB, Model Require: 4897.05 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 15556.16 MB, All loaded to GPU.
Moving model(s) has taken 0.85 seconds
[Unload] Trying to free 4495.36 MB for cuda:0 with 0 models keep loaded ... Current free memory is 16560.33 MB ... Done.
[Memory Management] Target: IntegratedAutoencoderKL, Free GPU: 16560.33 MB, Model Require: 159.56 MB, Previously Loaded: 0.00 MB, Inference Require: 1024.00 MB, Remaining: 15376.78 MB, All loaded to GPU.
Moving model(s) has taken 0.30 seconds
Traceback (most recent call last):
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\gradio\queueing.py", line 536, in process_events
    response = await route_utils.call_process_api(
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\gradio\route_utils.py", line 285, in call_process_api
    output = await app.get_blocks().process_api(
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\gradio\blocks.py", line 1923, in process_api
    result = await self.call_function(
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\gradio\blocks.py", line 1508, in call_function
    prediction = await anyio.to_thread.run_sync( # type: ignore
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\anyio\to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\anyio\_backends\_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
           ^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\anyio\_backends\_asyncio.py", line 807, in run
    result = context.run(func, *args)
             ^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\python\Lib\site-packages\gradio\utils.py", line 818, in wrapper
    response = f(*args, **kwargs)
               ^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 153, in smergegen
    images = simggen(s_prompt,s_nprompt,s_steps,s_sampler,s_cfg,s_seed,s_w,s_h,s_batch_size,
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\extensions\sd-webui-supermerger\scripts\mergers\mergers.py", line 1306, in simggen
    processed:Processed = processing.process_images(p)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\modules\processing.py", line 842, in process_images
    res = process_images_inner(p)
          ^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\modules\processing.py", line 990, in process_images_inner
    samples_ddim = p.sample(conditioning=p.c, unconditional_conditioning=p.uc, seeds=p.seeds, subseeds=p.subseeds, subseed_strength=p.subseed_strength, prompts=p.prompts)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "E:\The_box\ai\forgeui\sd-webui-forge-aki-v1.0\modules\processing.py", line 1405, in sample
    if hasattr(self, 'hr_additional_modules') and 'Use same choices' not in self.hr_additional_modules:
                                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: argument of type 'NoneType' is not iterable
This error does not occur if Hires. fix is not turned on.
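The traceback ends in the hires-fix branch of `processing.py`: `self.hr_additional_modules` is `None` when supermerger builds the processing object, so the `'Use same choices' not in …` membership test raises. A minimal sketch of the failure and a possible guard, assuming the attribute is simply left unset by the extension (the `Hires` class and `guarded_check` below are illustrative, not Forge's actual code):

```python
# Sketch reproducing the TypeError from the traceback, plus a hypothetical
# defensive guard. The attribute name and the 'Use same choices' string come
# from the traceback; everything else here is illustrative.

class Hires:
    # Observed failure state: the attribute exists but is None.
    hr_additional_modules = None

def original_check(p):
    # The expression from modules/processing.py line 1405 in the traceback.
    return hasattr(p, 'hr_additional_modules') and \
        'Use same choices' not in p.hr_additional_modules

def guarded_check(p):
    # Hypothetical fix: treat a missing or None attribute as "no extra modules".
    modules = getattr(p, 'hr_additional_modules', None) or []
    return 'Use same choices' not in modules

p = Hires()
try:
    original_check(p)
    raised = False
except TypeError:
    # Matches the report: argument of type 'NoneType' is not iterable
    raised = True

print(raised)            # True
print(guarded_check(p))  # True
```

With the guard, a `None` value falls back to an empty list and the check simply reports that 'Use same choices' is not selected, instead of crashing mid-generation.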
DELSAO6 changed the title from "Error generating images in the supermerger tab when Hires. fix is turned on in Forge" to "Error in the supermerger tab when generating images in WebUI Forge with Hires. fix enabled" on Jan 1, 2025.