refresh dev #2496

Merged · 7 commits · Nov 11, 2023
4 changes: 3 additions & 1 deletion CHANGELOG.md
@@ -1,6 +1,6 @@
# Change Log for SD.Next

## Update for 2023-11-10
## Update for 2023-11-11

- **Diffusers**
- **LCM** support for any *SD 1.5* or *SD-XL* model!
@@ -20,13 +20,15 @@
- Update to `diffusers==0.23.0`
- **Extra networks**
- Use multi-threading for 5x load speedup
- Better Lora trigger words support
- **General**:
- Reworked parser when pasting previously generated images/prompts;
  includes all `txt2img`, `img2img` and `override` params
- Add refiner options to XYZ Grid
- Support custom upscalers in subfolders
- Support `--ckpt none` to skip loading a model
- **Fixes**
- Fix `params.txt` saved before actual image
- Fix inpaint
- Fix manual grid image save
- Fix img2img init image save
57 changes: 44 additions & 13 deletions README.md
@@ -37,16 +37,18 @@ All Individual features are not listed here, instead check [ChangeLog](CHANGELOG
- Built-in installer with automatic updates and dependency management
- Modernized UI with theme support and a number of built-in themes

<br>![screenshot](html/black-teal.jpg)<br>

## Backend support

**SD.Next** supports two main backends: *Original* and *Diffusers*, which can be switched on the fly:

- **Original**: Based on [LDM](https://github.com/Stability-AI/stablediffusion) reference implementation and significantly expanded on by [A1111](https://github.com/AUTOMATIC1111/stable-diffusion-webui)
This is the default backend and it is fully compatible with all existing functionality and extensions
It supports **SD 1.x** and **SD 2.x** models
All other model types such as SD-XL, LCM, PixArt, Segmind, Kandinsky, etc. require backend **Diffusers**
- **Diffusers**: Based on new [Huggingface Diffusers](https://huggingface.co/docs/diffusers/index) implementation
It supports All models listed below
It is also the *only backend* that supports **Stable Diffusion XL** model
It supports *original* SD models as well as *all* models listed below
See [wiki article](https://github.com/vladmandic/automatic/wiki/Diffusers) for more information

## Model support
@@ -58,12 +60,19 @@ Additional models will be added as they become available and there is public int
- [Segmind SSD-1B](https://huggingface.co/segmind/SSD-1B)
- [LCM: Latent Consistency Models](https://github.com/openai/consistency_models)
- [Kandinsky](https://github.com/ai-forever/Kandinsky-2) 2.1 and 2.2
- [Pixart-α XL 2](https://github.com/PixArt-alpha/PixArt-alpha) Medium and Large
- [PixArt-α XL 2](https://github.com/PixArt-alpha/PixArt-alpha) Medium and Large
- [Warp Wuerstchen](https://huggingface.co/blog/wuertschen)
- [Tsinghua UniDiffusion](https://github.com/thu-ml/unidiffuser)
- [DeepFloyd IF](https://github.com/deep-floyd/IF) Medium and Large
- [Segmind SD Distilled](https://huggingface.co/blog/sd_distillation) *(all variants)*

*Notes*:
- Loading any model other than standard SD 1.x / SD 2.x requires use of backend **Diffusers**
  Loading any other model using the **Original** backend is not supported
- Loading manually downloaded `.safetensors` model files is supported for SD 1.x / SD 2.x / SD-XL models only
  For all other model types, use the **Diffusers** backend with the built-in model downloader, or
  select a model from Networks -> Models -> Reference, in which case it is auto-downloaded and loaded (see the sketch below)
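
Under the hood, the reference list resolves to HuggingFace hub IDs. As a rough illustration of what auto-download amounts to — a minimal sketch using the `diffusers` API directly, not SD.Next's actual downloader code:

```python
# Minimal sketch, assuming `diffusers` and `torch` are installed; SD.Next wraps
# this in its own downloader and pipeline management, so this is illustrative only.
from diffusers import DiffusionPipeline

# A reference model such as Segmind SSD-1B is fetched from the HuggingFace hub
# on first use and cached locally.
pipe = DiffusionPipeline.from_pretrained("segmind/SSD-1B")
pipe = pipe.to("cuda")  # or "cpu" if no compatible GPU is present
image = pipe("a red vintage car, studio lighting").images[0]
image.save("example.png")
```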

## Platform support

- *nVidia* GPUs using **CUDA** libraries on both *Windows and Linux*
@@ -88,8 +97,8 @@ Additional models will be added as they become available and there is public int
- Server can run without a virtual environment,
  but it is recommended to use one to avoid library version conflicts with other applications
- **nVidia/CUDA** / **AMD/ROCm** / **Intel/OneAPI** are auto-detected if present and available,
but for any other use case specify required parameter explicitly or wrong packages may be installed
as installer will assume CPU-only environment
For any other use case, such as **DirectML**, **ONNX/Olive**, or **OpenVINO**, specify the required parameter explicitly,
or wrong packages may be installed, as the installer will assume a CPU-only environment
- Full startup sequence is logged in `sdnext.log`, so if you encounter any issues, please check it first

### Run
@@ -98,24 +107,47 @@ Once SD.Next is installed, simply run `webui.ps1` or `webui.bat` (*Windows*) or

Below is a partial list of all available parameters; run `webui --help` for the full list (a short sketch of how such flags behave follows the list):

Server options:
--config CONFIG Use specific server configuration file, default: config.json
--ui-config UI_CONFIG Use specific UI configuration file, default: ui-config.json
--medvram Split model stages and keep only active part in VRAM, default: False
--lowvram Split model components and keep only active part in VRAM, default: False
--ckpt CKPT Path to model checkpoint to load immediately, default: None
--vae VAE Path to VAE checkpoint to load immediately, default: None
--data-dir DATA_DIR Base path where all user data is stored, default:
--models-dir MODELS_DIR Base path where all models are stored, default: models
--share Enable UI accessible through Gradio site, default: False
--insecure Enable extensions tab regardless of other options, default: False
--listen Launch web server using public IP address, default: False
--auth AUTH Set access authentication like "user:pwd,user:pwd"
--autolaunch Open the UI URL in the system's default browser upon launch
--docs Mount Gradio docs at /docs, default: False
--no-hashing Disable hashing of checkpoints, default: False
--no-metadata Disable reading of metadata from models, default: False
--no-download Disable download of default model, default: False
--backend {original,diffusers} Force model pipeline type

Setup options:
--debug Run installer with debug logging, default: False
--reset Reset main repository to latest version, default: False
--upgrade Upgrade main repository to latest version, default: False
--requirements Force re-check of requirements, default: False
--quick Run with startup sequence only, default: False
--use-directml Use DirectML if no compatible GPU is detected, default: False
--use-openvino Use Intel OpenVINO backend, default: False
--use-ipex Force use Intel OneAPI XPU backend, default: False
--use-cuda Force use nVidia CUDA backend, default: False
--use-rocm Force use AMD ROCm backend, default: False
--skip-update Skip update of extensions and submodules, default: False
--use-xformers Force use xFormers cross-optimization, default: False
--skip-requirements Skips checking and installing requirements, default: False
--skip-extensions Skips running individual extension installers, default: False
--skip-git Skips running all GIT operations, default: False
--skip-torch Skips running Torch checks, default: False
--skip-all Skips running all checks, default: False
--experimental Allow unsupported versions of libraries, default: False
--reinstall Force reinstallation of all requirements, default: False
--debug Run installer with debug logging, default: False
--reset Reset main repository to latest version, default: False
--upgrade Upgrade main repository to latest version, default: False
--safe Run in safe mode with no user extensions
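
For readers unfamiliar with this style of CLI, here is a minimal `argparse` sketch of how such flags behave. This is not SD.Next's actual CLI definition, just an illustrative model of the listing above:

```python
# Illustrative sketch only; flag names mirror the listing above, but the real
# installer defines many more options and different internals.
import argparse

parser = argparse.ArgumentParser()
parser.add_argument('--ckpt', default=None, help='Path to model checkpoint to load immediately')
parser.add_argument('--backend', choices=['original', 'diffusers'], help='Force model pipeline type')
parser.add_argument('--use-directml', action='store_true', help='Use DirectML if no compatible GPU is detected')

args = parser.parse_args(['--backend', 'diffusers', '--use-directml'])
print(args.backend, args.use_directml)  # -> diffusers True
```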

<br>![screenshot](html/black-teal.jpg)<br>

## Notes

@@ -126,7 +158,6 @@ SD.Next comes with several extensions pre-installed:
- [ControlNet](https://github.com/Mikubill/sd-webui-controlnet)
- [Agent Scheduler](https://github.com/ArtVentureX/sd-webui-agent-scheduler)
- [Image Browser](https://github.com/AlUlkesh/stable-diffusion-webui-images-browser)
- [Rembg Background Removal](https://github.com/AUTOMATIC1111/stable-diffusion-webui-rembg)

### **Collab**

@@ -143,10 +174,10 @@ The idea behind the fork is to enable latest technologies and advances in text-t

> *Sometimes this is not the same as "as simple as possible to use".*

If you are looking for an amazing simple-to-use Stable Diffusion tool, I'd suggest [InvokeAI](https://invoke-ai.github.io/InvokeAI/), specifically due to its automated installer and ease of use.

General goals:

- Multi-model
- Enable usage of as many txt2img and img2img generative models as possible
- Cross-platform
- Create uniform experience while automatically managing any platform specific differences
- Performance
2 changes: 1 addition & 1 deletion extensions-builtin/Lora/scripts/lora_script.py
@@ -30,7 +30,7 @@ def before_ui():

shared.options_templates.update(shared.options_section(('extra_networks', "Extra Networks"), {
# "sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, lambda: {"choices": ["None", *networks.available_networks], "visible": False}, refresh=networks.list_available_networks),
"sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, {"choices": ["None"]}),
"sd_lora": shared.OptionInfo("None", "Add network to prompt", gr.Dropdown, {"choices": ["None"], "visible": False}),
# "lora_show_all": shared.OptionInfo(False, "Always show all networks on the Lora page").info("otherwise, those detected as for incompatible version of Stable Diffusion will be hidden"),
# "lora_hide_unknown_for_versions": shared.OptionInfo([], "Hide networks of unknown versions for model versions", gr.CheckboxGroup, {"choices": ["SD1", "SD2", "SDXL"]}),
}))
44 changes: 26 additions & 18 deletions extensions-builtin/Lora/ui_extra_networks_lora.py
@@ -33,24 +33,11 @@ def create_item(self, name):
if l.sd_version == network.SdVersion.SDXL:
return None

# tags from model metadata
possible_tags = l.metadata.get('ss_tag_frequency', {}) if l.metadata is not None else {}
if isinstance(possible_tags, str):
possible_tags = {}
tags = {}
for k, v in possible_tags.items():
words = k.split('_', 1) if '_' in k else [v, k]
words = [str(w).replace('.json', '') for w in words]
if words[0] == '{}':
words[0] = 0
tags[' '.join(words[1:])] = words[0]

item = {
"type": 'Lora',
"name": name,
"filename": l.filename,
"hash": l.shorthash,
"search_term": self.search_terms_from_path(l.filename) + ' '.join(tags.keys()),
"preview": self.find_preview(l.filename),
"prompt": json.dumps(f" <lora:{l.get_alias()}:{shared.opts.extra_networks_default_multiplier}>"),
"local_preview": f"{path}.{shared.opts.samples_format}",
@@ -59,16 +46,37 @@ def create_item(self, name):
"size": os.path.getsize(l.filename),
}
info = self.find_info(l.filename)
item["info"] = info
item["description"] = self.find_description(l.filename, info) # use existing info instead of double-read

# tags from user metadata
possible_tags = info.get('tags', [])
tags = {}
possible_tags = l.metadata.get('ss_tag_frequency', {}) if l.metadata is not None else {} # tags from model metadata
if isinstance(possible_tags, str):
possible_tags = {}
for k, v in possible_tags.items():
words = k.split('_', 1) if '_' in k else [v, k]
words = [str(w).replace('.json', '') for w in words]
if words[0] == '{}':
words[0] = 0
tags[' '.join(words[1:])] = words[0]
versions = info.get('modelVersions', []) # trigger words from info json
for v in versions:
possible_tags = v.get('trainedWords', [])
if isinstance(possible_tags, list):
for tag in possible_tags:
if tag not in tags:
tags[tag] = 0
search = {}
possible_tags = info.get('tags', []) # tags from info json
if not isinstance(possible_tags, list):
possible_tags = [v for v in possible_tags.values()]
for v in possible_tags:
tags[v] = 0
search[v] = 0
if len(list(tags)) == 0:
tags = search

item["info"] = info
item["description"] = self.find_description(l.filename, info) # use existing info instead of double-read
item["tags"] = tags
item["search_term"] = f'{self.search_terms_from_path(l.filename)} {" ".join(tags.keys())} {" ".join(search.keys())}'

return item
except Exception as e:
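
The net effect of this refactor is that Lora tags are now merged from three sources: `ss_tag_frequency` in the model metadata, `trainedWords` from the info JSON's model versions, and the info JSON's top-level `tags`. A standalone sketch of that merging logic, with invented sample data (the actual code additionally keeps info-JSON tags in a separate `search` dict for the search term; the sketch folds everything into one dict):

```python
# Standalone sketch of the tag merging above; data shapes are assumptions
# taken from the diff, and the sample inputs are invented.
def collect_tags(metadata: dict, info: dict) -> dict:
    tags = {}
    # model metadata: keys look like "<repeats>_<tag>", e.g. "10_mychar"
    possible_tags = metadata.get('ss_tag_frequency', {}) if metadata else {}
    if isinstance(possible_tags, str):
        possible_tags = {}
    for k, v in possible_tags.items():
        words = k.split('_', 1) if '_' in k else [v, k]
        words = [str(w).replace('.json', '') for w in words]
        tags[' '.join(words[1:])] = 0 if words[0] == '{}' else words[0]
    # trigger words from the info json
    for version in info.get('modelVersions', []):
        for word in version.get('trainedWords', []):
            tags.setdefault(word, 0)
    # top-level tags from the info json
    for tag in info.get('tags', []):
        tags.setdefault(tag, 0)
    return tags

print(collect_tags({'ss_tag_frequency': {'10_mychar': 42}},
                   {'modelVersions': [{'trainedWords': ['mychar']}], 'tags': ['character']}))
# -> {'mychar': '10', 'character': 0}
```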
5 changes: 0 additions & 5 deletions html/reference.json
@@ -23,11 +23,6 @@
"path": "segmind/tiny-sd",
"desc": "Segmind's Tiny-SD offers a compact, efficient, and distilled version of Realistic Vision 4.0 and is up to 80% faster than SD1.5",
"preview": "segmind--tiny-sd.jpg"
},
"LCM SD-XL": {
"path": "latent-consistency/lcm-sdxl",
"desc": "Latent Consistencey Models enable swift inference with minimal steps on any pre-trained LDMs, including Stable Diffusion. By distilling classifier-free guidance into the model's input, LCM can generate high-quality images in very short inference time. LCM can generate quality images in as few as 3-4 steps, making it blazingly fast.",
"preview": "latent-consistency--lcm-sdxl.jpg"
},
"LCM SD-1.5 Dreamshaper 7": {
"path": "SimianLuo/LCM_Dreamshaper_v7",
6 changes: 3 additions & 3 deletions launch.py
@@ -128,7 +128,7 @@ def gb(val: float):
process = psutil.Process(os.getpid())
res = process.memory_info()
ram_total = 100 * res.rss / process.memory_percent()
return f'used={gb(res.rss)} total={gb(ram_total)}'
return f'{gb(res.rss)}/{gb(ram_total)}'


def start_server(immediate=True, server=None):
@@ -145,7 +145,7 @@ def start_server(immediate=True, server=None):
if not immediate:
time.sleep(3)
if collected > 0:
installer.log.debug(f'Memory {get_memory_stats()} Collected {collected}')
installer.log.debug(f'Memory: {get_memory_stats()} collected={collected}')
module_spec = importlib.util.spec_from_file_location('webui', 'webui.py')
server = importlib.util.module_from_spec(module_spec)
installer.log.debug(f'Starting module: {server}')
@@ -233,7 +233,7 @@ def start_server(immediate=True, server=None):
if round(time.time()) % 120 == 0:
state = f'job="{instance.state.job}" {instance.state.job_no}/{instance.state.job_count}' if instance.state.job != '' or instance.state.job_no != 0 or instance.state.job_count != 0 else 'idle'
uptime = round(time.time() - instance.state.server_start)
installer.log.debug(f'Server alive={alive} jobs={instance.state.total_jobs} requests={requests} uptime={uptime} memory {get_memory_stats()} {state}')
installer.log.debug(f'Server: alive={alive} jobs={instance.state.total_jobs} requests={requests} uptime={uptime} memory={get_memory_stats()} backend={instance.backend} {state}')
if not alive:
if uv is not None and uv.wants_restart:
installer.log.info('Server restarting...')
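
The reworked log lines are built from `psutil`. Since `memory_percent()` returns `rss / total_ram * 100`, total RAM can be recovered as `100 * rss / percent`, which is what `get_memory_stats` relies on. A self-contained sketch (the `gb` helper body is not shown in the diff, so its rounding here is an assumption):

```python
import os
import psutil

def gb(val: float) -> float:
    # assumed helper: bytes -> gigabytes, rounded for logging
    return round(val / 1024 / 1024 / 1024, 2)

def get_memory_stats() -> str:
    process = psutil.Process(os.getpid())
    res = process.memory_info()
    # memory_percent() == rss / total * 100, hence total == 100 * rss / percent
    ram_total = 100 * res.rss / process.memory_percent()
    return f'{gb(res.rss)}/{gb(ram_total)}'

print(f'Memory: {get_memory_stats()}')  # e.g. "Memory: 0.45/31.93"
```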
2 changes: 2 additions & 0 deletions modules/generation_parameters_copypaste.py
@@ -211,6 +211,8 @@ def parse_generation_parameters(x: str):
return res
remaining = x[len('Prompt: '):] if x.startswith('Prompt: ') else x
remaining = remaining[len('parameters: '):] if remaining.startswith('parameters: ') else remaining
if 'Steps: ' in remaining and 'Negative prompt: ' not in remaining:
remaining = remaining.replace('Steps: ', 'Negative prompt: , Steps: ')
prompt, remaining = remaining.strip().split('Negative prompt: ', maxsplit=1) if 'Negative prompt: ' in remaining else (remaining, '')
res["Prompt"] = prompt.strip()
negative, remaining = remaining.strip().split('Steps: ', maxsplit=1) if 'Steps: ' in remaining else (remaining, None)
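
The two added lines guard against infotexts that contain generation settings but no negative prompt: an empty `Negative prompt:` marker is injected so the downstream split logic stays uniform. A standalone sketch of the flow, not the full parser:

```python
# Simplified sketch of the parsing flow above with the new guard.
def split_infotext(x: str):
    remaining = x
    # inject an empty negative prompt so the split below stays uniform
    if 'Steps: ' in remaining and 'Negative prompt: ' not in remaining:
        remaining = remaining.replace('Steps: ', 'Negative prompt: , Steps: ')
    prompt, remaining = remaining.strip().split('Negative prompt: ', maxsplit=1) if 'Negative prompt: ' in remaining else (remaining, '')
    negative, remaining = remaining.strip().split('Steps: ', maxsplit=1) if 'Steps: ' in remaining else (remaining, None)
    return prompt.strip(), negative.strip(' ,'), remaining

print(split_infotext('a castle on a hill Steps: 20, Sampler: Euler a'))
# -> ('a castle on a hill', '', '20, Sampler: Euler a')
```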
2 changes: 2 additions & 0 deletions modules/images.py
@@ -541,6 +541,8 @@ def atomically_save_image():
entry = { 'id': idx, 'filename': filename, 'time': datetime.datetime.now().isoformat(), 'info': exifinfo }
entries.append(entry)
shared.writefile(entries, fn, mode='w')
with open(os.path.join(paths.data_path, "params.txt"), "w", encoding="utf8") as file:
file.write(exifinfo)
save_queue.task_done()


4 changes: 0 additions & 4 deletions modules/processing.py
@@ -851,10 +851,6 @@ def infotext(_inxex=0): # dummy function overridden if there are iterations
modules.extra_networks.activate(p, extra_network_data)
if p.scripts is not None and isinstance(p.scripts, modules.scripts.ScriptRunner):
p.scripts.process_batch(p, batch_number=n, prompts=p.prompts, seeds=p.seeds, subseeds=p.subseeds)
if n == 0:
with open(os.path.join(modules.paths.data_path, "params.txt"), "w", encoding="utf8") as file:
processed = Processed(p, [], p.seed, "")
file.write(processed.infotext(p, 0))
step_multiplier = 1
sampler_config = modules.sd_samplers.find_sampler_config(p.sampler_name)
step_multiplier = 2 if sampler_config and sampler_config.options.get("second_order", False) else 1
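
Together with the `modules/images.py` change above, this implements the changelog entry "Fix `params.txt` saved before actual image": the early write during batch 0 of processing is removed, and `params.txt` is now written by the background save worker only after the image file itself is persisted. Schematically — a simplified sketch of the new ordering, not the actual worker:

```python
import os
from PIL import Image

def save_image_then_params(image: Image.Image, filename: str, exifinfo: str, data_path: str = '.') -> None:
    image.save(filename)  # persist the image first
    # only then record its generation parameters, so params.txt
    # never describes an image that failed to save
    with open(os.path.join(data_path, 'params.txt'), 'w', encoding='utf8') as f:
        f.write(exifinfo)
```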
9 changes: 6 additions & 3 deletions modules/ui.py
@@ -1239,11 +1239,14 @@ def webpath(fn):


def html_head():
script_js = os.path.join(script_path, "javascript", "script.js")
head = f'<script type="text/javascript" src="{webpath(script_js)}"></script>\n'
head = ''
main = ['script.js']
for js in main:
script_js = os.path.join(script_path, "javascript", js)
head += f'<script type="text/javascript" src="{webpath(script_js)}"></script>\n'
added = []
for script in modules.scripts.list_scripts("javascript", ".js"):
if script.path == script_js:
if script.filename in main:
continue
head += f'<script type="text/javascript" src="{webpath(script.path)}"></script>\n'
added.append(script.path)
6 changes: 5 additions & 1 deletion modules/ui_tempdir.py
@@ -4,7 +4,7 @@
from pathlib import Path
import gradio as gr
from PIL import Image, PngImagePlugin
from modules import shared, errors
from modules import shared, errors, paths


Savedfile = namedtuple("Savedfile", ["name"])
@@ -69,6 +69,10 @@ def pil_to_temp_file(self, img: Image, dir: str, format="png") -> str: # pylint:
name = tmp.name
img.save(name, pnginfo=(metadata if use_metadata else None))
shared.log.debug(f'Saving temp: image="{name}"')
params = ', '.join([f'{k}: {v}' for k, v in img.info.items()])
params = params[12:] if params.startswith('parameters: ') else params
with open(os.path.join(paths.data_path, "params.txt"), "w", encoding="utf8") as file:
file.write(params)
return name


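
Temp files served by Gradio now get the same `params.txt` treatment as regular saves: the generation parameters are read back out of the PIL image's metadata, where the `parameters` key carries the embedded infotext. A small sketch of the extraction step:

```python
# Sketch of the extraction above; assumes the infotext was embedded
# under the conventional 'parameters' PNG metadata key.
from PIL import Image

def infotext_from_image(img: Image.Image) -> str:
    params = ', '.join(f'{k}: {v}' for k, v in img.info.items())
    # drop the leading "parameters: " label (12 characters) if present
    return params[12:] if params.startswith('parameters: ') else params

img = Image.open('example.png')  # an image previously saved with embedded infotext
print(infotext_from_image(img))
```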