The extension is quietly corrupted #235
Replies: 15 comments
-
This problem can be reproduced reliably without a specific sample; the following is a reference. You'll get half of the image almost completely black.
You also get corrupted results using base, just not the same ones.
-
I've found the extension performs poorly on a certain class of LoRA; it's easier to get nonsensical results with these models, and rolling back to 741a4d9 only reduces the impact rather than fixing it. If I want to reproduce old results, I also need to roll back even earlier.
-
Which LoRA is it? That WeTransfer platform is requesting registration for some reason.
Have you found any version in which two latent LoRAs work correctly in separate regions? It'd be interesting to see whether moving the LoRA to the base, with a low base weight and both settings off, produces a decent output.
-
I have never seen a download that requires registration; you may have clicked in the wrong place. Just click "download". The original website isn't working reliably these days, and I can't send it out.
In the preview you can clearly see the image gradually turning black, as if part of the result is being lost. Using a single LoRA to cover the base turns the whole image black, while with a single-LoRA region and a single-prompt region, one of them often loses almost all of its content.
-
On some samples the "on" option did improve things; maybe it should be reversed. @hako-mikan
I'm not confident which version is "normal" now, because I haven't gotten the same result, and rolling back through too many versions may cause confusion. At present I can say that 741a4d9 is slightly better. Maybe I need to test last month's version.
This is effectively just reducing the LoRA weight in disguise, and directly reducing it behaves the same way.
-
I tested the version that first added Latent mode, and it still gives different results from 741a4d9, just by a relatively small amount. It seems this is actually two issues: the difference introduced by the update, and compatibility with certain LoRAs.
-
I tried specifying the LoRA weight for the negative text encoder and U-Net, and found that it has a very large impact: it can make the results completely different, and it's even possible to get not only pure-black but also pure-white output, so I think it's necessary to be able to specify it. On some of my samples, specifying intermediate values produced a better blend.
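For reference, a minimal sketch of the idea being discussed, with hypothetical names rather than the extension's actual API: the LoRA contribution gets one multiplier on the positive pass and separate, usually smaller, multipliers on the negative prompt's text encoder and U-Net passes.

```python
# Minimal sketch with hypothetical names; not the extension's actual code or
# UI fields. The LoRA delta is scaled by one multiplier on the positive pass
# and by separate multipliers on the negative (uncond) pass, one for the
# text encoder and one for the U-Net.
def lora_multiplier(module, is_negative, pos=1.0, neg_te=0.5, neg_unet=0.5):
    if not is_negative:
        return pos                      # positive pass keeps the full weight
    return neg_te if module == "text_encoder" else neg_unet

def apply_lora(base_out, lora_delta, module, is_negative):
    # out = base output + multiplier * (low-rank LoRA delta)
    return base_out + lora_multiplier(module, is_negative) * lora_delta
```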
-
Thanks for the experiments. I have an idea; I'll see if I can code it tomorrow.
-
It is more reasonable to specify the negative weight of each LoRA separately, but as you said, the adjustment becomes very frequent and its meaning is not clear. Saving to a preset doesn't seem to make much sense; do you have evidence that preset values can still give good blending when multiple LoRAs are mixed? I also have two ideas here. One is to try to determine a relationship and slightly adjust the negative weight according to the number of LoRAs used, but that would require a lot of testing. The other, which I think is better, is to let a LoRA's negative-weight calculation also affect the other LoRAs' regions, with an adjustable level that blends the effect, so as to maintain overall consistency.
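A rough sketch of that second idea, assuming each region carries its own negative weight (all names here are hypothetical, this is not an existing option): blend each region's negative weight toward the average of all regions by an adjustable level, so the effect bleeds across regions instead of switching sharply at region borders.

```python
# Minimal sketch with hypothetical names; not an existing extension option.
# level=0.0 keeps fully separate per-region negative weights,
# level=1.0 collapses them all to one shared (average) weight.
def blended_neg_weights(neg_weights, level=0.5):
    mean_w = sum(neg_weights) / len(neg_weights)
    return [(1.0 - level) * w + level * mean_w for w in neg_weights]

print(blended_neg_weights([0.2, 0.8], level=0.5))  # -> roughly [0.35, 0.65]
```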
-
@Symbiomatrix Have you made any progress? I want to observe the effect of gradually changing the LoRA negative weight over the sampling steps, but the extension doesn't seem to offer that kind of intervention.
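For what it's worth, the intervention I'm describing could look something like this sketch; it assumes a hypothetical per-step hook that the extension does not currently expose, and simply ramps the negative weight linearly across the run.

```python
# Sketch of a per-step schedule for the LoRA negative weight; hypothetical,
# since the extension does not seem to expose a per-step hook for this.
def neg_weight_at_step(step, total_steps, start=1.0, end=0.0):
    t = step / max(total_steps - 1, 1)
    return (1.0 - t) * start + t * end

# Example: fade the negative weight out across a 20-step run.
schedule = [round(neg_weight_at_step(s, 20), 3) for s in range(20)]
```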
-
It's a difficult LoRA to deal with. I've tried a few things, and it seems to improve with smaller values of the CFG scale; a value of around 3 gave me good results.
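For context, the standard classifier-free guidance step multiplies the cond/uncond difference by the CFG scale, so a smaller scale amplifies any corrupted difference less (a generic textbook sketch, not this extension's code):

```python
# Textbook classifier-free guidance step (generic, not the extension's code).
# Whatever corruption ends up in (cond - uncond) gets multiplied by the CFG
# scale, so a lower scale (e.g. around 3) amplifies it less per step.
def cfg_combine(uncond, cond, cfg_scale=3.0):
    return uncond + cfg_scale * (cond - uncond)
```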
-
@hako-mikan This LoRA is obviously overfitted and isn't compatible with many models, but it's just an example; we don't need to achieve good results on it.
-
Hello.
So what I'm wondering is whether this method of calculation might be thrown off by the introduction of LoRAs. Theoretically, if everything works as it should, a LoRA should only have affected its own layer adversely, but what we're seeing is that non-LoRA regions are also impacted (especially when the U-Net weight is nonzero). Therefore it might be that uncond is altered, or that the difference is nonzero (which would mean the more regions you have, the greater the cumulative effect). Edit: It doesn't look like a nonzero difference. Like hako said, a low CFG over a large number of steps suffers less corruption, and I can see why it might. For LoRA combination, though, it's still pretty bad, so that might be something else.
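To make the "cumulative effect" concern concrete, here is a toy illustration only (hypothetical names, not the extension's code): if each region's LoRA leaks a small bias into the shared uncond prediction, the leaks add up with the number of regions, and the CFG step then amplifies the resulting mismatch.

```python
# Toy illustration only, with hypothetical names; not the extension's code.
# If each region's LoRA leaks a small bias into the shared uncond prediction,
# the leaks add up with the number of regions and the CFG step then amplifies
# the resulting cond/uncond mismatch.
def corrupted_step(uncond, region_conds, region_masks, region_leaks, cfg_scale):
    uncond_eff = uncond + sum(region_leaks)   # shared uncond picks up every leak
    cond = sum(m * c for m, c in zip(region_masks, region_conds))
    return uncond_eff + cfg_scale * (cond - uncond_eff)
```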
-
Another thought: if we were to create multiple versions of the uncond layer, one for every region, and match them, would that help the separation? Is that feasible?
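Conceptually, the proposal might look something like this sketch (hypothetical, not existing code): compute one uncond prediction per region with that region's LoRA active, then composite the unconds with the same masks used on the conditional side.

```python
# Sketch of the proposal, with hypothetical names; not existing code.
# One uncond prediction is computed per region (with that region's LoRA
# active), and the unconds are composited with the same masks used for the
# conditional pass, instead of sharing a single uncond across all regions.
def composite_uncond(unconds, masks):
    # masks are assumed to sum to 1 at every pixel
    return sum(m * u for m, u in zip(masks, unconds))
```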
-
With the update of the extension, some results can no longer be reproduced, and the problem is now worse: it no longer works on certain models, and the results gradually become pure black within a few iterations. I started rolling back the WebUI and extensions, and even thought there was a hardware failure XD, until I rolled the extension back to 741a4d9, and now it works.