API client for AUTOMATIC1111/stable-diffusion-webui for Node.js and Browser.
- Full TypeScript support
- Supports Node.js and browser environments
- Extensions: ControlNet, Cutoff, DynamicCFG, TiledDiffusion, TiledVAE, agent scheduler
- Batch processing support
- Easy integration with popular extensions and models
- Enable the webui API: use the `--api` command line argument.
- Disable the webui GUI (optional): use the `--nowebui` command line argument.
- You can find and modify the `COMMANDLINE_ARGS` value in the `webui-user.bat` or `webui-user.sh` file.
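For example, a minimal sketch of the relevant line in `webui-user.sh` (the flags are the ones discussed above; adjust to your setup):

```sh
# webui-user.sh (Linux/macOS): enable the API and, optionally, disable the GUI
export COMMANDLINE_ARGS="--api --nowebui"
```

In `webui-user.bat` on Windows, the equivalent is `set COMMANDLINE_ARGS=--api --nowebui`.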
- WebUI: 1.8.0
- ControlNet: 3b4eedd
```ts
import { SDWebUIA1111Client } from "@stable-canvas/sd-webui-a1111-client";

const client = new SDWebUIA1111Client({
  BASE: "http://localhost:7860",
});
```
Use the `--api-auth` command line argument with `"username:password"` on the server to enable API authentication.
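For example, a sketch of launching the server with authentication enabled (the credentials are placeholders):

```sh
# Start webui with the API and basic auth enabled (Linux/macOS)
./webui.sh --api --api-auth "your_username:your_password"
```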
```ts
const client = new SDWebUIA1111Client({
  BASE: "http://localhost:7860",
  USERNAME: 'your_username',
  PASSWORD: 'your_password'
});
```
The low-level client allows full control over the request body.
```ts
const response = await client.default.text2ImgapiSdapiV1Txt2ImgPost({
  requestBody: {
    prompt: 'an astronaut riding a horse on the moon'
  }
});
```
The full list of service functions is available here: Service Functions
Use the `A1111StableDiffusionApi` object for a high-level API client with optimized types and parameter handling.
```ts
const api = new A1111StableDiffusionApi({
  client: {
    BASE: 'http://127.0.0.1:7860',
    // USERNAME: 'your_username',
    // PASSWORD: 'your_password'
  },
  // optional caching
  // cache: {
  //   disableCache: false,
  //   cacheTime: 60 * 1000
  // }
});
```
```ts
const { image, info } = await api.Service.txt2img({
  prompt: '1girl'
});
```

```ts
const { image, info } = await api.Service.img2img({
  prompt: '1girl'
});
```
Use `txt2imgBatch` and `img2imgBatch` for batch processing.
```ts
const batch = api.Service.txt2imgBatch(
  { /* ... */ },
  {
    batchSize: 2,
    numBatches: 10
  }
);
const responses = await batch.waitForComplete();
```
- Correctly install the ControlNet extension
- Install the required ControlNet models
```ts
const { image, info } = await api.ControlNet.txt2img({
  params: {
    prompt: '...',
    // ...
  },
  units: [
    {
      image: '...', // base64 string
      module: 'openpose_full',
      model: 'control_v11p_sd15_openpose [cab727d4]'
    }
  ]
});
```

```ts
const { image, info } = await api.ControlNet.img2img({
  params: {
    prompt: '...',
    // ...
  },
  units: [
    {
      image: '...', // base64 string
      module: 'openpose_full',
      model: 'control_v11p_sd15_openpose [cab727d4]'
    }
  ]
});
```
```ts
const batch = api.ControlNet.txt2imgBatch({
  params: { /* ... */ },
  options: {
    batchSize: 2,
    numBatches: 10
  },
  units: [
    {
      image: '...', // base64 string
      module: 'openpose_full',
      model: 'control_v11p_sd15_openpose [cab727d4]'
    }
  ]
});
const responses = await batch.waitForComplete();
```
Because the ControlNet extension exposes a detection endpoint, this API can be used when you only need to run detection (preprocessing) without generating images.

Note that if you want to customize ControlNet's preprocessing yourself, you need to set the `module` parameter of the ControlNet unit to `none`, indicating that the input image has already been preprocessed.

Regarding the `controlnet_threshold_a` and `controlnet_threshold_b` parameters, you can use `api.ControlNet.getModuleDetail` to query the current plugin for each module's parameter requirements.
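As an illustration, a minimal sketch of applying that metadata to keep a threshold within a module's allowed range; the `clampToRange` helper and the commented response shape (`sliders` entries with `min`/`max`) are our assumptions, not the client's documented API:

```ts
// Pure helper: clamp a threshold value into a module's advertised [min, max] range.
function clampToRange(value: number, min: number, max: number): number {
  return Math.min(max, Math.max(min, value));
}

// Sketch of usage against a running server (field names are assumptions):
// const detail = await api.ControlNet.getModuleDetail({ module: 'canny' });
// const [a] = detail.sliders ?? [];
// const controlnet_threshold_a = clampToRange(100, a.min, a.max);
```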
```ts
const {
  images,
} = await api.ControlNet.detect({
  controlnet_module: 'openpose_full',
  controlnet_input_images: [
    // image base64
  ],
  controlnet_processor_res: 512,
});
```

```ts
const modelList = await api.ControlNet.getModels();
const moduleList = await api.ControlNet.getModules();
```
Advanced processing pipeline, providing more control over requests.

The processor sits between the API and the client: it is less convenient than the API but offers more customization, which is useful for building workflows similar to ComfyUI.
```ts
import fs from "fs";

const response = await client.default.text2ImgapiSdapiV1Txt2ImgPost({
  requestBody: {
    prompt: 'an astronaut riding a horse on the moon'
  }
});
const { images: [img] } = response;
// The API returns base64-encoded PNG data (Node.js only)
const buffer = Buffer.from(img, "base64");
await fs.promises.writeFile('result.png', buffer);
```
```ts
const pc1 = new Txt2imgProcess({ prompt: "1girl, black top, short pink hair" });
pc1.use(new CutoffExt({ targets: 'black, pink' }));
const { images } = await pc1.request(client);
```
```ts
const input_image = fs.readFileSync('input_image.png', 'base64');

const pc1 = new Img2imgProcess({ prompt: "1girl", init_images: [input_image] });
const cnet_ext = new ControlNetExt();
cnet_ext.addUnit({
  module: 'openpose_full',
  model: 'control_v11p_sd15_openpose [cab727d4]',
  weight: 1,
  pixel_perfect: true,
});
pc1.use(cnet_ext);
const { images } = await pc1.request(client);
```
| Type | unpkg | jsdelivr |
| --- | --- | --- |
| mjs | unpkg (mjs) | jsdelivr (mjs) |
| umd | unpkg (umd) | jsdelivr (umd) |
```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <script type="importmap">
      {
        "imports": {
          "@stable-canvas/sd-webui-a1111-client": "https://unpkg.com/@stable-canvas/sd-webui-a1111-client@latest/dist/main.module.mjs"
        }
      }
    </script>
  </head>
  <body>
    <h1>@stable-canvas/sd-webui-a1111-client DEMO</h1>
    <div id="message"></div>
    <img src="" alt="result" />
    <script type="module">
      import { SDWebUIA1111Client, Txt2imgProcess } from "@stable-canvas/sd-webui-a1111-client";

      window.onload = async () => {
        const $msg = document.querySelector("#message");
        const $img = document.querySelector("img");

        const client = new SDWebUIA1111Client({ BASE: "http://localhost:7860" });
        const pc1 = new Txt2imgProcess({ prompt: "1girl" });

        $msg.innerText = "Generating...";
        try {
          const { images } = await pc1.request(client);
          const image = images[0];
          $img.src = `data:image/png;base64,${image}`;
          $msg.innerText = "Done.";
        } catch (error) {
          $msg.innerText = error.message;
          console.error(error);
        }
      };
    </script>
  </body>
</html>
```
- Full code is available in the /examples folder.
- Online demo: CodeSandbox
Apache-2.0