
DirectML

SD.Next includes support for PyTorch-DirectML.

How to

Add --use-directml to your command-line arguments.
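
For example, on Windows the flag can be passed directly to the launch script (this sketch assumes the standard webui.bat launcher; use webui.sh on Linux):

```
.\webui.bat --use-directml
```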

For details, go to Installation.

Performance

Performance is considerably worse than with ROCm.

If you are familiar with Linux systems, we recommend ROCm instead.

FAQ

Olive (experimental support)

"Olive is an easy-to-use hardware-aware model optimization tool that composes industry-leading techniques across model compression, optimization, and compilation." (from PyPI)

Currently, SDXL is not supported.

This feature is EXPERIMENTAL. Running it may break your existing installation, so use a fresh installation or a new virtual environment.

How to

Switch your branch to olive.
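
Assuming a standard git clone of the SD.Next repository, that looks like:

```
git checkout olive
git pull
```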

You don't need to modify your command-line arguments.

Go to System tab → Diffusers Settings and set Diffusers pipeline to ONNX Stable Diffusion (Olive).

Guide on YouTube:

From checkpoint

Model optimization occurs automatically before generation.

Source models can be in .safetensors, .ckpt, or Diffusers format, and optimization takes 5-10 minutes depending on your system.

Optimized models are automatically cached and reused for later generations at the same image size (height and width); generating at a different size triggers a new optimization pass.

From Huggingface

If your system does not have enough memory to optimize a model, or you don't want to spend time optimizing it yourself, you can download a pre-optimized model from Huggingface.

Go to the Models → Huggingface tab and download an optimized model.

There's an optimized version of runwayml/stable-diffusion-v1-5.
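
As a sketch of an alternative route, that repository can also be fetched outside the UI with plain git (this assumes git-lfs is installed; the Models → Huggingface tab remains the supported path):

```
git lfs install
git clone https://huggingface.co/lshqqytiger/stable-diffusion-v1-5-olive
```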

Guide on YouTube:

Performance

prompt: a castle, best quality
negative prompt: worst quality
sampler: Euler
sampling steps: 20
device: RX 7900 XTX 24GB
versions: olive-ai 0.3.3, onnxruntime-directml 1.16.1, ROCm 5.6, torch 1.13.1 (Olive) / 2.1.0 (ROCm)
models: runwayml/stable-diffusion-v1-5 (ROCm), lshqqytiger/stable-diffusion-v1-5-olive (Olive)

[Benchmark screenshots: Olive, ROCm]

Pros and Cons

Pros

Cons

FAQ
