Simplify your AI media generation workflows with Visionatrix—an intuitive interface built on top of ComfyUI
- 🔧 Easy Setup & Updates: Quick setup with simple installation and seamless version updates.
- 🖥️ Minimalistic UI: Clean, user-friendly interface designed for daily workflow usage.
- 🌐 Prompt Translation Support: Automatically translate prompts for media generation.
- 🛠️ Stable Workflows: Versioned and upgradable workflows.
- 📈 Scalability: Run multiple instances with simultaneous task workers for increased productivity.
- 👥 Multi-User Support: Configure for multiple users with ease and integrate different user backends.
- 🤖 LLM Integration: Effortlessly incorporate Ollama/Gemini as your LLM for ComfyUI workflows.
- 🔌 Seamless Integration: Run as a service with backend endpoints for smooth project integration.
- 😎 LoRA Integration: Easily integrate LoRAs from CivitAI into your flows.
- 🐳 Docker Compose: Official Docker images and a pre-configured Docker Compose file.
Access the Visionatrix UI at http://localhost:8288 (default).
Note: Starting from version 1.10, Visionatrix launches the ComfyUI web server at http://127.0.0.1:8188
We provide a public template for RunPod to help you quickly see if this project fits your needs.
- Python 3.10 or higher (3.12 recommended)
- GPU with at least 8 GB of memory (12 GB recommended)
Install prerequisites (Python, Git, etc.)
For Ubuntu 22.04:
sudo apt install wget curl python3-venv python3-pip build-essential git
It is also recommended to install FFmpeg dependencies with:
sudo apt install ffmpeg libsm6 libxext6
Download and run the `easy_install.py` script:
Note: This script will clone the Visionatrix repository into your current folder and perform the installation. After installation, you can always run `easy_install` from the "scripts" folder.
Using wget:
wget -O easy_install.py https://raw.githubusercontent.com/Visionatrix/Visionatrix/main/scripts/easy_install.py && python3 easy_install.py
Using curl:
curl -o easy_install.py https://raw.githubusercontent.com/Visionatrix/Visionatrix/main/scripts/easy_install.py && python3 easy_install.py
Follow the prompts during installation. In most cases, everything should work smoothly.
To launch Visionatrix from the activated virtual environment:
python -m visionatrix run --ui
We offer a portable version to simplify installation (no need for Git or Visual Studio compilers).
Currently, we provide versions for CUDA/CPU. If there's demand, we can add a DirectML version.
- Install VC++ Redistributable: vc_redist.x64.exe from this Microsoft page.
- Download: Visit our Releases page.
- Get the Portable Archive: Download `vix_portable_cuda.7z`.
- Unpack and Run: Extract the archive and run `run_nvidia_gpu.bat` or `run_cpu.bat`.
For manual installation steps, please refer to our detailed documentation.
The easiest way to set up paths is through the user interface, by going to `Settings -> ComfyUI`.
In most cases, the easiest way is to set the `ComfyUI base data folder` to an absolute path where you want to store models, task results, and settings.
This will allow you to freely reinstall everything from scratch without losing data or models.
Note: For easy Windows portable upgrades, we assume you have the `ComfyUI base data folder` parameter set.
We highly recommend filling in both the CivitAI token and the HuggingFace token in the settings.
Many models cannot be downloaded by public users without a token.
Run the `easy_install` script and select the "Update" option:
python3 easy_install.py
Updating the portable version involves:
1. Unpacking the new portable version.
2. Moving `visionatrix.db` from the old version to the new one.
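Sketched as shell commands, with made-up placeholder folder names (the `mkdir`/`touch` lines only simulate an existing install for the demo):

```shell
# Demo stand-ins for the old install and the freshly unpacked new version
mkdir -p vix_portable_old vix_portable_new
touch vix_portable_old/visionatrix.db

# The actual upgrade step: carry the database over to the new version
cp vix_portable_old/visionatrix.db vix_portable_new/visionatrix.db
```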
Hint: Alternatively, you can specify a custom path for `visionatrix.db` using the `DATABASE_URI` environment variable. This allows you to keep the database file outside the portable archive and skip step 2.
For example, setting `DATABASE_URI` to `sqlite+aiosqlite:///C:/Users/alex/visionatrix.db` will direct Visionatrix to use the `C:\Users\alex\visionatrix.db` file.
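On Linux/macOS you could export the variable before launching Visionatrix (on the Windows portable build, the equivalent would be a `set` line in the `.bat` launcher). The value below reuses the example path from above:

```shell
# Keep the database outside the portable archive so upgrades don't touch it
export DATABASE_URI="sqlite+aiosqlite:///C:/Users/alex/visionatrix.db"
echo "$DATABASE_URI"
```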
We provide official Docker images along with a pre-configured `docker-compose.yml` file, making deployment faster and easier. The file is located at the root of the Visionatrix repository.
Our Docker images are primarily hosted on GitHub Container Registry (GHCR): `ghcr.io/visionatrix/visionatrix`. This is the default used by the `docker-compose.yml` file.
For users who experience slow download speeds from GHCR (e.g., on certain cloud providers), we also provide a mirror on Docker Hub: `docker.io/bigcat88/visionatrix`.
- `visionatrix_nvidia`: Visionatrix with NVIDIA GPU support.
- `visionatrix_amd`: Visionatrix with AMD GPU support.
- `visionatrix_cpu`: Visionatrix running on CPU only.
- `pgsql`: A PostgreSQL 17 container for the database.
Choose the service appropriate for your hardware:
- For NVIDIA GPU support: `docker compose up -d visionatrix_nvidia`
- For AMD GPU support: `docker compose up -d visionatrix_amd`
- For CPU mode: `docker compose up -d visionatrix_cpu`
By default, these commands will pull images from GHCR. A `visionatrix-data` directory will be created in the current directory on the host and used for the `models`, `user`, `input`, and `output` files.
You can easily customize the configuration by modifying environment variables or volume mounts in the `docker-compose.yml` file.
If you prefer to pull images from Docker Hub instead of GHCR, you can set the `VIX_IMAGE_BASE` environment variable before running `docker compose up`.
Method 1: Using a `.env` file
1. Create a file named `.env` in the same directory as your `docker-compose.yml` file.
2. Add the following line to the `.env` file: `VIX_IMAGE_BASE=docker.io/bigcat88/visionatrix`
3. Now, run `docker compose up` as usual. Compose will automatically read the `.env` file and use the Docker Hub images.

Example: Start the NVIDIA service using the images from Docker Hub defined in `.env`:
docker compose up -d visionatrix_nvidia
Method 2: Setting the variable temporarily
You can set the environment variable directly on the command line for a single command execution:
VIX_IMAGE_BASE=docker.io/bigcat88/visionatrix docker compose up -d visionatrix_nvidia
- From the root of this repo, build a new image using the following arguments:
  - `BUILD_TYPE` (required): Define the build type: `cpu`, `cuda`, or `rocm`.
  - `CUDA_VERSION` (optional): Define the PyTorch CUDA version (e.g., 126 for 12.6) you want to use. The default is 12.8, which doesn't support older cards.

  Example for a CUDA build using CUDA version 12.6:
  docker build --build-arg BUILD_TYPE=cuda --build-arg CUDA_VERSION=126 -t visionatrix:release-cuda12.6 -f docker/Dockerfile .
- Start the services referencing the new image:
  VIX_IMAGE_BASE=visionatrix:release-cuda12.6 docker compose up -d visionatrix_nvidia
If you have any questions or need assistance, we're here to help! Feel free to start a discussion or explore our resources:
- Documentation
- Available Flows
- Admin Manual
- Flows Developing
- Common Information