Note
This workflow is a partial adaptation to ComfyUI, so the results may differ from those you can expect on Runtime44 - Mage.
Both this workflow and Mage aim to generate the highest-quality image while remaining faithful to the original image. Although the goal is the same, the execution is different, which is why you will most likely get different results between this workflow and Mage: the latter is optimized to run some processes in parallel on multiple GPUs and uses a different diffusion pipeline.
- Runtime44 ComfyUI Nodes
git clone https://github.com/runtime44/comfyui_r44_nodes.git
- ComfyUI Impact Pack
git clone https://github.com/ltdrdata/ComfyUI-Impact-Pack.git
- rgthree-comfy
git clone https://github.com/rgthree/rgthree-comfy.git
- Comfyroll Studio
git clone https://github.com/Suzie1/ComfyUI_Comfyroll_CustomNodes.git
- ComfyUI Custom Scripts
git clone https://github.com/pythongosssss/ComfyUI-Custom-Scripts.git
- ComfyUI ControlNet Auxiliary Preprocessors
git clone https://github.com/Fannovel16/comfyui_controlnet_aux.git
- ComfyUI KJNodes
git clone https://github.com/kijai/ComfyUI-KJNodes.git
To install these dependencies, you have two options:
- Using ComfyUI Manager (recommended)
- Manually installing them in your custom_nodes directory.
If you opt for the manual install, make sure that your virtual environment is activated and that you install the requirements.txt (pip install -r requirements.txt) for each of these packages.
This workflow requires a few models in different categories to work.
We recommend using a mix of SD1.5 and SDXL for the diffusion, but you are free to use whichever models you like. For reference, here are the models that were used:
- Realistic Vision 6.0 (SD1.5)
- DreamshaperXL Lightning (SDXL Lightning)
Each upscale model has a specific scaling factor (2x, 3x, 4x, ...) that it is optimized to work with. If you go above or below that scaling factor, a standard resizing method is used (in the case of our custom node, lanczos). While convenient, this can reduce the quality of the image. Therefore, we recommend finding a 2x, a 3x, and a 4x model.
Thankfully, because of the upscaling chain that is built into the workflow, if you wish to upscale 8x, you do not need an 8x model. You can just use a 2x model and the chain will distribute the load across up to 4 nodes.
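As a rough illustration of how such a chain can be broken down (a minimal Python sketch of the idea, not the actual Runtime44 node code; the function name and signature are made up for this example), an 8x target can be covered by repeated 2x passes, with a plain lanczos resize only for whatever factor is left over:

```python
from typing import List, Tuple

def plan_upscale_chain(target_scale: float, model_scale: int = 2,
                       max_passes: int = 4) -> Tuple[List[int], float]:
    """Split a target scale into repeated passes of one upscale model.

    Returns the list of model passes and the leftover factor that a plain
    resize (e.g. lanczos) would have to cover.
    """
    passes: List[int] = []
    remaining = float(target_scale)
    while remaining >= model_scale and len(passes) < max_passes:
        passes.append(model_scale)
        remaining /= model_scale
    return passes, remaining

# 8x target with a 2x model: three model passes, nothing left for lanczos
print(plan_upscale_chain(8))   # ([2, 2, 2], 1.0)
# 3x target with a 2x model: one model pass, 1.5x left for lanczos
print(plan_upscale_chain(3))   # ([2], 1.5)
```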
You can mix upscale models depending on your needs
For reference, here are the upscale models that were used:
- 2x RealESRGAN_plus
- 4x foolhardy_Remacri
You can find more models on OpenModelDB
- SD1.5 Tile
- SDXL Tile (optional)
This workflow applies the principle of double sampling: the first sampling pass is exaggerated to generate an image with a lot of noise and added elements (in our case using SD1.5), and the second pass acts as a refiner, taking the first result as a starting point and fixing the issues introduced by the previous step.
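Conceptually, the two stages look like the sketch below (illustrative Python only, with placeholder denoise values; `sample` stands in for a KSampler-style call and is not an actual ComfyUI API):

```python
def double_sample(latent, detail_model, refiner_model, sample):
    """Two-stage sampling: an exaggerated pass followed by a refining pass.

    `sample` stands in for a KSampler-style call:
    sample(model, latent, denoise, add_noise, return_with_noise) -> latent.
    The denoise values below are placeholders, not the workflow's settings.
    """
    # First pass (e.g. SD1.5): strong denoise, deliberately noisy and
    # over-detailed; hallucinated elements are acceptable at this stage.
    noisy = sample(detail_model, latent,
                   denoise=0.75, add_noise=True, return_with_noise=True)
    # Second pass (e.g. SDXL): low denoise, starts from the first result
    # and fixes its issues while keeping the added detail.
    return sample(refiner_model, noisy,
                  denoise=0.30, add_noise=False, return_with_noise=False)
```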
In this workflow, we make use of masks to separate people from their environment (it would work similarly with anything that you can segment in an image). This gives more control over the amount of detail that we want to generate in the image. For that, we use the Mask Sampler, with the two main inputs being the latent image and the mask.
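A rough sketch of the effect of masked sampling (illustrative only; the Mask Sampler node handles this internally): the sampled result is blended back into the original latent through the mask, so only the masked region receives the extra detail.

```python
import torch

def masked_blend(original: torch.Tensor, sampled: torch.Tensor,
                 mask: torch.Tensor) -> torch.Tensor:
    """Blend a sampled latent back into the original, restricted by a mask.

    `mask` is assumed to be broadcastable to the latent shape, with 1.0
    where extra detail is wanted (e.g. the segmented people) and 0.0
    elsewhere.
    """
    return mask * sampled + (1.0 - mask) * original
```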
Because this step aims to generate as many details as possible from the upscaled image, we use a heavy ControlNet strength to contain SD hallucinations. If you feel that there are too many added elements in your image, feel free to increase that value.
Here, we use two sampling passes:
- Medium denoise, low ControlNet, adding noise, returning with noise
- Low denoise, no ControlNet, no noise addition, returning without noise
The sampler also changes between these two passes.
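Summarised as data (all values below are placeholders, including the sampler names; set them to whatever your copy of the workflow uses), the two refining passes differ roughly like this:

```python
refine_passes = [
    # Pass 1: medium denoise, low ControlNet strength, noise is added and
    # the latent is returned with its leftover noise for the next pass.
    {"denoise": 0.5, "controlnet_strength": 0.3,
     "add_noise": True, "return_with_noise": True, "sampler": "dpmpp_2m"},
    # Pass 2: low denoise, no ControlNet, no added noise, fully denoised output.
    {"denoise": 0.25, "controlnet_strength": 0.0,
     "add_noise": False, "return_with_noise": False, "sampler": "euler"},
]
```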
Finally, we apply the last details to the image. This is where you can use the Image Enhance node or the Color Match node (recommended) to add the finishing touches to your image.
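For reference, a common way color matching works (a simplified sketch of the general technique, not the implementation of the Runtime44 node) is to transfer the per-channel mean and standard deviation of the original image onto the upscaled one:

```python
import numpy as np

def color_match(upscaled: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Match the per-channel mean/std of `upscaled` to `reference`.

    Both images are float arrays in [0, 1] with shape (H, W, 3); the
    reference is typically the original, pre-upscale image.
    """
    out = np.empty_like(upscaled)
    for c in range(3):
        u_mean, u_std = upscaled[..., c].mean(), upscaled[..., c].std()
        r_mean, r_std = reference[..., c].mean(), reference[..., c].std()
        out[..., c] = (upscaled[..., c] - u_mean) / (u_std + 1e-8) * r_std + r_mean
    return np.clip(out, 0.0, 1.0)
```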
This workflow is distributed under the GNU AGPLv3 license.