Varnava is a neural art management tool powered by Stable Diffusion. It is heavily used by VERALOMNA artists to generate concept art and textures for the games currently in development.
The purpose of Varnava is to provide a self-contained environment that is easy to install and update.
- Self-contained: the application manages the necessary runtime environment for Stable Diffusion, so there is no need to install dependencies and models manually.
- Easy-to-use, intuitive user interface for managing prompts and associated images.
- Projects allow grouping related prompts together.
- Queued generation: schedule any number of images for any number of prompts, and Varnava updates the results as soon as they are ready.
- Dynamic generation settings per output for maximum flexibility.
- Upscale images that you like the most and export them.
- Create variations of images by locking their seeds via the user interface.
- Windows 10+
  - GPU with CUDA support and at least 8GB of VRAM
- macOS 12.0+
  - M1/M2 SoC with at least 15GB of RAM
- 8GB of disk space (4GB for runtime and 4GB for base Stable Diffusion models)
The following setups have been tested so far:
- Windows 11
  - NVIDIA RTX 4080
  - NVIDIA RTX 4090
  - NVIDIA RTX 3090 Ti
  - NVIDIA RTX 3090
- macOS 13.0
  - M1 Pro
  - M1 Max
- Download the installer from the releases page on GitHub and run it.
- Varnava will download the necessary dependencies on first launch (this takes a while, so please be patient).
- Press the Waiting to download button in the top-right corner, select a directory where you want to store Stable Diffusion models, and start downloading them (it will take a while to download about 2.4GB for the base model and 1.5GB for the upscaling model).
- Create a new project, add a prompt to it, and start generating!
A project is a way to organise prompts. A prompt can contain any number of generated images, and the generation parameters can be changed for each image.
(Demo video: v-22.mp4)
Locking a seed and changing other parameters makes it possible to generate images that are similar in content but different in style, using the various parameters that control the model (such as strength, steps, and method).
(Demo video: v-33.mp4)
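For context, this is roughly what seed locking means under the hood. The sketch below is not Varnava's internal API; it is an illustration using the Hugging Face `diffusers` library, and the model id, prompt, and parameter values are assumptions chosen for the example. A fixed seed pins the initial noise, so changing steps or guidance varies the style while the composition stays recognisable.

```python
# Illustrative only: seed locking with the `diffusers` library.
# The model id, prompt, and parameters below are assumptions, not Varnava defaults.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "misty forest at dawn, concept art"
seed = 1234  # locked seed: the same initial noise is reused for every run

for steps, guidance in [(20, 7.5), (40, 7.5), (40, 12.0)]:
    # Re-create the generator each run so sampling starts from the same seed
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(
        prompt,
        num_inference_steps=steps,
        guidance_scale=guidance,
        generator=generator,
    ).images[0]
    image.save(f"forest_s{steps}_g{guidance}.png")
```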
The image queue shows the number of images currently in the queue (waiting to be processed or being generated right now). Tapping an image in the queue focuses on it.
It is possible to install more than one text-to-image model from the hub and switch between them on a per-image basis to iterate and compare results.
(Demo video: v-4.mov)
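Conceptually, switching between hub models boils down to loading a different pipeline per model id and running the same prompt through each. The following is a minimal sketch with the `diffusers` library, not Varnava's own code; the model ids listed are example assumptions.

```python
# Illustrative only: comparing two text-to-image models from the Hugging Face hub.
import torch
from diffusers import StableDiffusionPipeline

model_ids = [
    "runwayml/stable-diffusion-v1-5",    # example model, assumed for illustration
    "stabilityai/stable-diffusion-2-1",  # example model, assumed for illustration
]
prompt = "seamless stone wall texture, photorealistic"

for model_id in model_ids:
    # Load each model as its own pipeline and generate the same prompt
    pipe = StableDiffusionPipeline.from_pretrained(
        model_id, torch_dtype=torch.float16
    ).to("cuda")
    image = pipe(prompt, num_inference_steps=30).images[0]
    image.save(model_id.split("/")[-1] + ".png")
```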
- ControlNet and image editing
- Infinite upscaling
- Fine-tuning support
- VERALOMNA