diff --git a/CHANGELOG.md b/CHANGELOG.md
index 43167176f..ae9698e9b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -1,6 +1,6 @@
# Change Log for SD.Next
-## Update for 2023-11-10
+## Update for 2023-11-11
- **Diffusers**
- **LCM** support for any *SD 1.5* or *SD-XL* model!
@@ -20,6 +20,7 @@
- Update to `diffusers==0.23.0`
- **Extra networks**
- Use multi-threading for 5x load speedup
+ - Better support for Lora trigger words
- **General**:
- Reworked parser when pasting previously generated images/prompts
includes all `txt2img`, `img2img` and `override` params
@@ -27,6 +28,7 @@
- Support custom upscalers in subfolders
- Support `--ckpt none` to skip loading a model
- **Fixes**
+ - Fix `params.txt` saved before actual image
- Fix inpaint
- Fix manual grid image save
- Fix img2img init image save
diff --git a/README.md b/README.md
index 7958c1b5d..8d1f07300 100644
--- a/README.md
+++ b/README.md
@@ -37,6 +37,8 @@ All Individual features are not listed here, instead check [ChangeLog](CHANGELOG
- Built in installer with automatic updates and dependency management
- Modernized UI with theme support and number of built-in themes
+
![screenshot](html/black-teal.jpg)
+
## Backend support
**SD.Next** supports two main backends: *Original* and *Diffusers* which can be switched on-the-fly:
@@ -44,9 +46,9 @@ All Individual features are not listed here, instead check [ChangeLog](CHANGELOG
- **Original**: Based on [LDM](https://github.com/Stability-AI/stablediffusion) reference implementation and significantly expanded on by [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
This is the default backend and it is fully compatible with all existing functionality and extensions
It supports **SD 1.x** and **SD 2.x** models
+ All other model types such as SD-XL, LCM, PixArt, Segmind, Kandinsky, etc. require backend **Diffusers**
- **Diffusers**: Based on new [Huggingface Diffusers](https://huggingface.co/docs/diffusers/index) implementation
- It supports All models listed below
- It is also the *only backend* that supports **Stable Diffusion XL** model
+ It supports *original* SD models as well as *all* models listed below
See [wiki article](https://github.com/vladmandic/automatic/wiki/Diffusers) for more information
## Model support
@@ -58,12 +60,19 @@ Additional models will be added as they become available and there is public int
- [Segmind SSD-1B](https://huggingface.co/segmind/SSD-1B)
- [LCM: Latent Consistency Models](https://github.com/openai/consistency_models)
- [Kandinsky](https://github.com/ai-forever/Kandinsky-2) 2.1 and 2.2
-- [Pixart-α XL 2](https://github.com/PixArt-alpha/PixArt-alpha) Medium and Large
+- [PixArt-α XL 2](https://github.com/PixArt-alpha/PixArt-alpha) Medium and Large
- [Warp Wuerstchen](https://huggingface.co/blog/wuertschen)
- [Tsinghua UniDiffusion](https://github.com/thu-ml/unidiffuser)
- [DeepFloyd IF](https://github.com/deep-floyd/IF) Medium and Large
- [Segmind SD Distilled](https://huggingface.co/blog/sd_distillation) *(all variants)*
+*Notes*:
+- Loading any model other than standard SD 1.x / SD 2.x requires use of backend **Diffusers**
+ Loading any other model using the **Original** backend is not supported
+- Loading manually downloaded `.safetensors` model files is supported for SD 1.x / SD 2.x / SD-XL models only
+ For all other model types, use backend **Diffusers** and either use the built-in Model downloader or
+ select a model from the Networks -> Models -> Reference list, in which case it will be auto-downloaded and loaded
+
## Platform support
- *nVidia* GPUs using **CUDA** libraries on both *Windows and Linux*
@@ -88,8 +97,8 @@ Additional models will be added as they become available and there is public int
- Server can run without virtual environment,
but it is recommended to use it to avoid library version conflicts with other applications
- **nVidia/CUDA** / **AMD/ROCm** / **Intel/OneAPI** are auto-detected if present and available,
- but for any other use case specify required parameter explicitly or wrong packages may be installed
- as installer will assume CPU-only environment
+ for any other use case such as **DirectML**, **ONNX/Olive**, **OpenVINO**, specify the required parameter explicitly
+ or wrong packages may be installed as the installer will assume a CPU-only environment
- Full startup sequence is logged in `sdnext.log`, so if you encounter any issues, please check it first
### Run
@@ -98,24 +107,47 @@ Once SD.Next is installed, simply run `webui.ps1` or `webui.bat` (*Windows*) or
Below is partial list of all available parameters, run `webui --help` for the full list:
+ Server options:
+ --config CONFIG Use specific server configuration file, default: config.json
+ --ui-config UI_CONFIG Use specific UI configuration file, default: ui-config.json
+ --medvram Split model stages and keep only active part in VRAM, default: False
+ --lowvram Split model components and keep only active part in VRAM, default: False
+ --ckpt CKPT Path to model checkpoint to load immediately, default: None
+ --vae VAE Path to VAE checkpoint to load immediately, default: None
+ --data-dir DATA_DIR Base path where all user data is stored, default:
+ --models-dir MODELS_DIR Base path where all models are stored, default: models
+ --share Enable UI accessible through Gradio site, default: False
+ --insecure Enable extensions tab regardless of other options, default: False
+ --listen Launch web server using public IP address, default: False
+ --auth AUTH Set access authentication like "user:pwd,user:pwd"
+ --autolaunch Open the UI URL in the system's default browser upon launch
+ --docs Mount Gradio docs at /docs, default: False
+ --no-hashing Disable hashing of checkpoints, default: False
+ --no-metadata Disable reading of metadata from models, default: False
+ --no-download Disable download of default model, default: False
+ --backend {original,diffusers} Force model pipeline type
+
Setup options:
+ --debug Run installer with debug logging, default: False
+ --reset Reset main repository to latest version, default: False
+ --upgrade Upgrade main repository to latest version, default: False
+ --requirements Force re-check of requirements, default: False
+ --quick Run with startup sequence only, default: False
--use-directml Use DirectML if no compatible GPU is detected, default: False
--use-openvino Use Intel OpenVINO backend, default: False
--use-ipex Force use Intel OneAPI XPU backend, default: False
--use-cuda Force use nVidia CUDA backend, default: False
--use-rocm Force use AMD ROCm backend, default: False
- --skip-update Skip update of extensions and submodules, default: False
+ --use-xformers Force use xFormers cross-optimization, default: False
--skip-requirements Skips checking and installing requirements, default: False
--skip-extensions Skips running individual extension installers, default: False
--skip-git Skips running all GIT operations, default: False
--skip-torch Skips running Torch checks, default: False
+ --skip-all Skips running all checks, default: False
+ --experimental Allow unsupported versions of libraries, default: False
--reinstall Force reinstallation of all requirements, default: False
- --debug Run installer with debug logging, default: False
- --reset Reset main repository to latest version, default: False
- --upgrade Upgrade main repository to latest version, default: False
--safe Run in safe mode with no user extensions
-
![screenshot](html/black-teal.jpg)
## Notes
@@ -126,7 +158,6 @@ SD.Next comes with several extensions pre-installed:
- [ControlNet](https://github.com/Mikubill/sd-webui-controlnet)
- [Agent Scheduler](https://github.com/ArtVentureX/sd-webui-agent-scheduler)
- [Image Browser](https://github.com/AlUlkesh/stable-diffusion-webui-images-browser)
-- [Rembg Background Removal](https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg)
### **Collab**
@@ -143,10 +174,10 @@ The idea behind the fork is to enable latest technologies and advances in text-t
> *Sometimes this is not the same as "as simple as possible to use".*
-If you are looking an amazing simple-to-use Stable Diffusion tool, I'd suggest [InvokeAI](https://invoke-ai.github.io/InvokeAI/) specifically due to its automated installer and ease of use.
-
General goals:
+- Multi-model
+ - Enable usage of as many txt2img and img2img generative models as possible
- Cross-platform
- Create uniform experience while automatically managing any platform specific differences
- Performance
diff --git a/extensions-builtin/Lora/scripts/lora_script.py b/extensions-builtin/Lora/scripts/lora_script.py
index 7d6a5db6c..49086f8cc 100644
--- a/extensions-builtin/Lora/scripts/lora_script.py
+++ b/extensions-builtin/Lora/scripts/lora_script.py
@@ -30,7 +30,7 @@ def before_ui():
shared.options_templates.update(shared.options_section(('extra_networks', "Extra Networks"), {
# "sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, lambda: {"choices": ["None", *networks.available_networks], "visible": False}, refresh=networks.list_available_networks),
- "sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, {"choices": ["None"]}),
+ "sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, {"choices": ["None"], "visible": False}),
# "lora_show_all": shared.OptionInfo(False, "Always show all networks on the Lora page").info("otherwise, those detected as for incompatible version of Stable Diffusion will be hidden"),
# "lora_hide_unknown_for_versions": shared.OptionInfo([], "Hide networks of unknown versions for model versions", gr.CheckboxGroup, {"choices": ["SD1", "SD2", "SDXL"]}),
}))
diff --git a/extensions-builtin/Lora/ui_extra_networks_lora.py b/extensions-builtin/Lora/ui_extra_networks_lora.py
index 7df667acb..ffc0d95ee 100644
--- a/extensions-builtin/Lora/ui_extra_networks_lora.py
+++ b/extensions-builtin/Lora/ui_extra_networks_lora.py
@@ -33,24 +33,11 @@ def create_item(self, name):
if l.sd_version == network.SdVersion.SDXL:
return None
- # tags from model metedata
- possible_tags = l.metadata.get('ss_tag_frequency', {}) if l.metadata is not None else {}
- if isinstance(possible_tags, str):
- possible_tags = {}
- tags = {}
- for k, v in possible_tags.items():
- words = k.split('_', 1) if '_' in k else [v, k]
- words = [str(w).replace('.json', '') for w in words]
- if words[0] == '{}':
- words[0] = 0
- tags[' '.join(words[1:])] = words[0]
-
item = {
"type": 'Lora',
"name": name,
"filename": l.filename,
"hash": l.shorthash,
- "search_term": self.search_terms_from_path(l.filename) + ' '.join(tags.keys()),
"preview": self.find_preview(l.filename),
"prompt": json.dumps(f" "),
"local_preview": f"{path}.{shared.opts.samples_format}",
@@ -59,16 +46,37 @@ def create_item(self, name):
"size": os.path.getsize(l.filename),
}
info = self.find_info(l.filename)
- item["info"] = info
- item["description"] = self.find_description(l.filename, info) # use existing info instead of double-read
- # tags from user metadata
- possible_tags = info.get('tags', [])
+ tags = {}
+ possible_tags = l.metadata.get('ss_tag_frequency', {}) if l.metadata is not None else {} # tags from model metadata
+ if isinstance(possible_tags, str):
+ possible_tags = {}
+ for k, v in possible_tags.items():
+ words = k.split('_', 1) if '_' in k else [v, k]
+ words = [str(w).replace('.json', '') for w in words]
+ if words[0] == '{}':
+ words[0] = 0
+ tags[' '.join(words[1:])] = words[0]
+ versions = info.get('modelVersions', []) # trigger words from info json
+ for v in versions:
+ possible_tags = v.get('trainedWords', [])
+ if isinstance(possible_tags, list):
+ for tag in possible_tags:
+ if tag not in tags:
+ tags[tag] = 0
+ search = {}
+ possible_tags = info.get('tags', []) # tags from info json
if not isinstance(possible_tags, list):
possible_tags = [v for v in possible_tags.values()]
for v in possible_tags:
- tags[v] = 0
+ search[v] = 0
+ if len(list(tags)) == 0:
+ tags = search
+
+ item["info"] = info
+ item["description"] = self.find_description(l.filename, info) # use existing info instead of double-read
item["tags"] = tags
+ item["search_term"] = f'{self.search_terms_from_path(l.filename)} {" ".join(tags.keys())} {" ".join(search.keys())}'
return item
except Exception as e:
diff --git a/html/reference.json b/html/reference.json
index 49f85b189..86a3dab9f 100644
--- a/html/reference.json
+++ b/html/reference.json
@@ -23,11 +23,6 @@
"path": "segmind/tiny-sd",
"desc": "Segmind's Tiny-SD offers a compact, efficient, and distilled version of Realistic Vision 4.0 and is up to 80% faster than SD1.5",
"preview": "segmind--tiny-sd.jpg"
- },
- "LCM SD-XL": {
- "path": "latent-consistency/lcm-sdxl",
- "desc": "Latent Consistencey Models enable swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion. By distilling classifier-free guidance into the model's input, LCM can generate high-quality images in very short inference time. LCM can generate quality images in as few as 3-4 steps, making it blazingly fast.",
- "preview": "latent-consistency--lcm-sdxl.jpg"
},
"LCM SD-1.5 Dreamshaper 7": {
"path": "SimianLuo/LCM_Dreamshaper_v7",
diff --git a/launch.py b/launch.py
index fdd2fcba7..aeb3a1475 100755
--- a/launch.py
+++ b/launch.py
@@ -128,7 +128,7 @@ def gb(val: float):
process = psutil.Process(os.getpid())
res = process.memory_info()
ram_total = 100 * res.rss / process.memory_percent()
- return f'used={gb(res.rss)} total={gb(ram_total)}'
+ return f'{gb(res.rss)}/{gb(ram_total)}'
def start_server(immediate=True, server=None):
@@ -145,7 +145,7 @@ def start_server(immediate=True, server=None):
if not immediate:
time.sleep(3)
if collected > 0:
- installer.log.debug(f'Memory {get_memory_stats()} Collected {collected}')
+ installer.log.debug(f'Memory: {get_memory_stats()} collected={collected}')
module_spec = importlib.util.spec_from_file_location('webui', 'webui.py')
server = importlib.util.module_from_spec(module_spec)
installer.log.debug(f'Starting module: {server}')
@@ -233,7 +233,7 @@ def start_server(immediate=True, server=None):
if round(time.time()) % 120 == 0:
state = f'job="{instance.state.job}" {instance.state.job_no}/{instance.state.job_count}' if instance.state.job != '' or instance.state.job_no != 0 or instance.state.job_count != 0 else 'idle'
uptime = round(time.time() - instance.state.server_start)
- installer.log.debug(f'Server alive={alive} jobs={instance.state.total_jobs} requests={requests} uptime={uptime} memory {get_memory_stats()} {state}')
+ installer.log.debug(f'Server: alive={alive} jobs={instance.state.total_jobs} requests={requests} uptime={uptime} memory={get_memory_stats()} backend={instance.backend} {state}')
if not alive:
if uv is not None and uv.wants_restart:
installer.log.info('Server restarting...')
diff --git a/modules/generation_parameters_copypaste.py b/modules/generation_parameters_copypaste.py
index e742e32fc..3d6090398 100644
--- a/modules/generation_parameters_copypaste.py
+++ b/modules/generation_parameters_copypaste.py
@@ -211,6 +211,8 @@ def parse_generation_parameters(x: str):
return res
remaining = x[7:] if x.startswith('Prompt: ') else x
remaining = x[11:] if x.startswith('parameters: ') else x
+ if 'Steps: ' in remaining and 'Negative prompt: ' not in remaining:
+ remaining = remaining.replace('Steps: ', 'Negative prompt: , Steps: ')
prompt, remaining = remaining.strip().split('Negative prompt: ', maxsplit=1) if 'Negative prompt: ' in remaining else (remaining, '')
res["Prompt"] = prompt.strip()
negative, remaining = remaining.strip().split('Steps: ', maxsplit=1) if 'Steps: ' in remaining else (remaining, None)
diff --git a/modules/images.py b/modules/images.py
index 28046e159..38e7ebbcb 100644
--- a/modules/images.py
+++ b/modules/images.py
@@ -541,6 +541,8 @@ def atomically_save_image():
entry = { 'id': idx, 'filename': filename, 'time': datetime.datetime.now().isoformat(), 'info': exifinfo }
entries.append(entry)
shared.writefile(entries, fn, mode='w')
+ with open(os.path.join(paths.data_path, "params.txt"), "w", encoding="utf8") as file:
+ file.write(exifinfo)
save_queue.task_done()
diff --git a/modules/processing.py b/modules/processing.py
index 19655eb36..8a87ea68a 100644
--- a/modules/processing.py
+++ b/modules/processing.py
@@ -851,10 +851,6 @@ def infotext(_inxex=0): # dummy function overriden if there are iterations
modules.extra_networks.activate(p, extra_network_data)
if p.scripts is not None and isinstance(p.scripts, modules.scripts.ScriptRunner):
p.scripts.process_batch(p, batch_number=n, prompts=p.prompts, seeds=p.seeds, subseeds=p.subseeds)
- if n == 0:
- with open(os.path.join(modules.paths.data_path, "params.txt"), "w", encoding="utf8") as file:
- processed = Processed(p, [], p.seed, "")
- file.write(processed.infotext(p, 0))
step_multiplier = 1
sampler_config = modules.sd_samplers.find_sampler_config(p.sampler_name)
step_multiplier = 2 if sampler_config and sampler_config.options.get("second_order", False) else 1
diff --git a/modules/ui.py b/modules/ui.py
index ee490a87d..507e628f6 100644
--- a/modules/ui.py
+++ b/modules/ui.py
@@ -1239,11 +1239,14 @@ def webpath(fn):
def html_head():
- script_js = os.path.join(script_path, "javascript", "script.js")
- head = f'\n'
+ head = ''
+ main = ['script.js']
+ for js in main:
+ script_js = os.path.join(script_path, "javascript", js)
+ head += f'\n'
added = []
for script in modules.scripts.list_scripts("javascript", ".js"):
- if script.path == script_js:
+ if script.filename in main:
continue
head += f'\n'
added.append(script.path)
diff --git a/modules/ui_tempdir.py b/modules/ui_tempdir.py
index f9b38de14..25f59125e 100644
--- a/modules/ui_tempdir.py
+++ b/modules/ui_tempdir.py
@@ -4,7 +4,7 @@
from pathlib import Path
import gradio as gr
from PIL import Image, PngImagePlugin
-from modules import shared, errors
+from modules import shared, errors, paths
Savedfile = namedtuple("Savedfile", ["name"])
@@ -69,6 +69,10 @@ def pil_to_temp_file(self, img: Image, dir: str, format="png") -> str: # pylint:
name = tmp.name
img.save(name, pnginfo=(metadata if use_metadata else None))
shared.log.debug(f'Saving temp: image="{name}"')
+ params = ', '.join([f'{k}: {v}' for k, v in img.info.items()])
+ params = params[12:] if params.startswith('parameters: ') else params
+ with open(os.path.join(paths.data_path, "params.txt"), "w", encoding="utf8") as file:
+ file.write(params)
return name
diff --git a/webui.py b/webui.py
index 8397aced3..5b0692ed2 100644
--- a/webui.py
+++ b/webui.py
@@ -44,6 +44,7 @@
pass
state = shared.state
+backend = shared.backend
if not modules.loader.initialized:
timer.startup.record("libraries")
log.setLevel(logging.DEBUG if cmd_opts.debug else logging.INFO)
@@ -274,7 +275,7 @@ def start_ui():
shared.log.info(f'API Docs: {local_url[:-1]}/docs') # pylint: disable=unsubscriptable-object
if share_url is not None:
shared.log.info(f'Share URL: {share_url}')
- shared.log.debug(f'Gradio registered functions: {len(shared.demo.fns)}')
+ shared.log.debug(f'Gradio functions: registered={len(shared.demo.fns)}')
shared.demo.server.wants_restart = False
setup_middleware(app, cmd_opts)
diff --git a/wiki b/wiki
index cd040c02e..c2267ae1f 160000
--- a/wiki
+++ b/wiki
@@ -1 +1 @@
-Subproject commit cd040c02e4a477135ce08efe4d06672b57456c31
+Subproject commit c2267ae1f84489926b0c79cc82b416f30d7b93b0