---
title: Home
---
InvokeAI is an implementation of Stable Diffusion, the open source text-to-image and image-to-image generator. It provides a streamlined workflow with various new features and options to aid image generation. It runs on Windows, Mac, and Linux machines, and on GPU cards with as little as 4 GB of RAM.
!!! Note

    This project is rapidly evolving. Please use the [Issues tab](https://github.com/invoke-ai/InvokeAI/issues) to report bugs and make feature requests. Be sure to use the provided templates, as this will help improve response time.
- WebUI overview
- WebUI hotkey reference guide
- WebUI Unified Canvas for Img2Img, inpainting and outpainting
- Installing
- Model Merging
- ControlNet Models
- Style/Subject Concepts and Embeddings
- Watermarking and the Not Safe for Work (NSFW) Checker
Behind the scenes, InvokeAI has been completely rewritten to support "nodes," small unitary operations that can be combined into graphs to form arbitrary workflows. For example, there is a prompt node that processes the prompt string and feeds it to a text2latent node that generates a latent image. The latents are then fed to a latent2image node that translates the latent image into a PNG.
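A node graph like the one just described can be pictured as a handful of small operations connected by edges. The following is a minimal, purely illustrative Python sketch of that idea; the `Node` and `Graph` classes, node names, and parameters are assumptions for illustration and are not InvokeAI's actual node API.

```python
# Illustrative sketch only: these classes and node names are hypothetical and
# simply model the idea of small operations wired together into a graph.
from dataclasses import dataclass, field

@dataclass
class Node:
    id: str                     # unique name within the graph
    op: str                     # the operation this node performs
    inputs: dict = field(default_factory=dict)

@dataclass
class Graph:
    nodes: list[Node] = field(default_factory=list)
    edges: list[tuple[str, str]] = field(default_factory=list)  # (from_id, to_id)

    def add(self, node: Node) -> Node:
        self.nodes.append(node)
        return node

    def connect(self, src: Node, dst: Node) -> None:
        self.edges.append((src.id, dst.id))

# Wire up the prompt -> text2latent -> latent2image pipeline described above.
g = Graph()
prompt = g.add(Node("prompt_1", "prompt", {"text": "a photo of a red barn"}))
latents = g.add(Node("t2l_1", "text2latent", {"steps": 30, "cfg_scale": 7.5}))
png = g.add(Node("l2i_1", "latent2image", {"format": "png"}))
g.connect(prompt, latents)
g.connect(latents, png)

for src, dst in g.edges:
    print(f"{src} -> {dst}")
```

Running the sketch prints the execution order of the three-node workflow; real workflows can chain many more operations in arbitrary graphs.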
The WebGUI has a node editor that allows you to graphically design and execute custom node graphs. The ability to save and load graphs is still a work in progress, but is coming soon.
The original "invokeai" command-line interface has been retired. The
invokeai
command will now launch a new command-line client that can
be used by developers to create and test nodes. It is not intended to
be used for routine image generation or manipulation.
To launch the Web GUI from the command-line, use the command
invokeai-web
rather than the traditional invokeai --web
.
This version of InvokeAI features ControlNet, a system that allows you to achieve exact poses for human and animal figures by providing a reference image for the model to follow. Full details can be found in the ControlNet documentation.
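Within InvokeAI, ControlNet is used through the WebUI and nodes. As a rough illustration of the underlying mechanism (conditioning generation on a pose image via an auxiliary model), here is a minimal sketch that uses the Hugging Face diffusers library directly; the model IDs, file names, and parameters are example assumptions, not InvokeAI's own code path.

```python
# Illustrative only: shows the ControlNet idea with diffusers directly;
# this is not how InvokeAI invokes ControlNet internally.
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# An OpenPose skeleton image to guide the figure's pose (placeholder path).
pose_image = Image.open("pose.png")

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
).to("cuda")

# The prompt describes the subject; the pose image constrains the pose.
result = pipe(
    "a ballerina mid-leap, studio lighting",
    image=pose_image,
    num_inference_steps=30,
)
result.images[0].save("ballerina.png")
```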
The list of schedulers has been completely revamped and brought up to date:
| Short Name | Scheduler | Notes |
|---|---|---|
| ddim | DDIMScheduler | |
| ddpm | DDPMScheduler | |
| deis | DEISMultistepScheduler | |
| lms | LMSDiscreteScheduler | |
| pndm | PNDMScheduler | |
| heun | HeunDiscreteScheduler | original noise schedule |
| heun_k | HeunDiscreteScheduler | using Karras noise schedule |
| euler | EulerDiscreteScheduler | original noise schedule |
| euler_k | EulerDiscreteScheduler | using Karras noise schedule |
| kdpm_2 | KDPM2DiscreteScheduler | |
| kdpm_2_a | KDPM2AncestralDiscreteScheduler | |
| dpmpp_2s | DPMSolverSinglestepScheduler | |
| dpmpp_2m | DPMSolverMultistepScheduler | original noise schedule |
| dpmpp_2m_k | DPMSolverMultistepScheduler | using Karras noise schedule |
| unipc | UniPCMultistepScheduler | CPU only |
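In the WebUI you simply pick a scheduler by its short name. For readers curious how such short names typically map onto diffusers scheduler classes, here is a minimal sketch covering a subset of the table; the mapping dict and helper function are illustrative assumptions, not InvokeAI's internal implementation.

```python
# Illustrative only: one possible way to map short names to diffusers
# scheduler classes, with the "_k" variants enabling the Karras sigmas.
from diffusers import (
    DDIMScheduler,
    EulerDiscreteScheduler,
    DPMSolverMultistepScheduler,
)

SCHEDULER_MAP = {
    "ddim":       (DDIMScheduler, {}),
    "euler":      (EulerDiscreteScheduler, {}),
    "euler_k":    (EulerDiscreteScheduler, {"use_karras_sigmas": True}),
    "dpmpp_2m":   (DPMSolverMultistepScheduler, {}),
    "dpmpp_2m_k": (DPMSolverMultistepScheduler, {"use_karras_sigmas": True}),
}

def make_scheduler(short_name: str, pipeline):
    """Build a replacement scheduler for the pipeline from its short name."""
    cls, extra_kwargs = SCHEDULER_MAP[short_name]
    return cls.from_config(pipeline.scheduler.config, **extra_kwargs)
```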
Please see the 3.0.0 Release Notes for further details.
Please check out our :material-frequently-asked-questions: Troubleshooting Guide to get solutions for common installation problems and other issues.
Anyone who wishes to contribute to this project, whether documentation, features, bug fixes, code cleanup, testing, or code reviews, is very much encouraged to do so.
Please take a look at our Contribution documentation to learn more about contributing to InvokeAI.
This software is a combined effort of various people from across the world. Check out the list of all these amazing people. We thank them for their time, hard work and effort.
For support, please use this repository's GitHub Issues tracking service. Feel free to send me an email if you use and like the script.
Original portions of the software are Copyright (c) 2022-23 by The InvokeAI Team.