A somewhat optimized, serverless ComfyUI worker for RunPod, highly specific to my own personal use case, which just happens to require a lot of customization and flexibility! ❤️
- Image based on Ubuntu + NVIDIA CUDA
- Launch ComfyUI workflows on demand in seconds
- Automatically upload generations to Amazon AWS S3 *(requires environment variables)*
- Get base64 data for generated images as job output *(when AWS upload is not configured and the `tobase64` flag is set)*
- Easily add custom models/loras/nodes/etc via a selection of methods
- Sends progress updates so you can track/display them in your own UI
- Support for RunPod network-volume-mounted models/nodes
- Allows for batch image processing and bulk AWS uploads
- Job output also returns non-image/video node outputs
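The features above imply a job payload carrying a workflow plus the optional `tobase64` flag. The sketch below shows one plausible shape for that input; the exact field names are an assumption based on this feature list, not the worker's documented schema.

```python
import json


def build_job_input(workflow: dict, tobase64: bool = False) -> dict:
    """Assemble a serverless job input (field names are assumptions)."""
    payload = {"input": {"workflow": workflow, "tobase64": tobase64}}
    # RunPod expects a JSON body; round-trip to confirm it serializes cleanly.
    json.dumps(payload)
    return payload


# With `tobase64` set and no AWS env vars, the worker would return base64 image data.
job = build_job_input({"3": {"class_type": "KSampler", "inputs": {}}}, tobase64=True)
```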
- model: sd_xl_base_1.0.safetensors
- model: sdxl_vae.safetensors
- nodes: efficiency-nodes-comfyui
- nodes: comfyui-wd14-tagger
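A workflow submitted to the worker can reference the bundled files by name. Below is a minimal fragment in ComfyUI's exported "API format" that loads the listed checkpoint and VAE; the node IDs are arbitrary, and this is an illustrative sketch rather than a complete, runnable workflow.

```python
# Minimal ComfyUI API-format workflow fragment using the bundled files.
# Node IDs ("1", "2") are arbitrary strings; class_type and input names
# follow ComfyUI's standard loader nodes.
workflow = {
    "1": {
        "class_type": "CheckpointLoaderSimple",
        "inputs": {"ckpt_name": "sd_xl_base_1.0.safetensors"},
    },
    "2": {
        "class_type": "VAELoader",
        "inputs": {"vae_name": "sdxl_vae.safetensors"},
    },
}
```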
- 🐳 Use the latest image for your worker: `dekita/runpod-serverless-comfyui-worker:latest`
- ⚙️ Setup environment variables for AWS
- ℹ️ Use the Docker image on RunPod
- Interact with your RunPod API
- Get the workflow from ComfyUI
- Setup Amazon AWS S3 Bucket (optional)
- Building a Customized Image (optional)
- GitHub Actions: Auto Deploy to Docker Hub (optional)
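The "Interact with your RunPod API" step boils down to POSTing a workflow to your serverless endpoint. A stdlib-only sketch follows; `YOUR_ENDPOINT_ID` and `YOUR_API_KEY` are placeholders from your RunPod console, and the `{"input": {"workflow": ...}}` body shape is an assumption based on the feature list above.

```python
import json
import urllib.request

API_BASE = "https://api.runpod.ai/v2"  # RunPod serverless API base URL


def make_run_request(endpoint_id: str, api_key: str, workflow: dict) -> urllib.request.Request:
    """Build (but do not send) a /run request for a serverless endpoint."""
    body = json.dumps({"input": {"workflow": workflow}}).encode()
    return urllib.request.Request(
        f"{API_BASE}/{endpoint_id}/run",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = make_run_request("YOUR_ENDPOINT_ID", "YOUR_API_KEY", {})
# urllib.request.urlopen(req) would submit the job; you would then poll
# the endpoint's /status/<job_id> route for progress and output.
```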
- comfyanonymous, creator of ComfyUI.
- Tim Pietrusky, creator of runpod-worker-comfy. Without that project to help me understand the process, this one would not exist! ❤️