ComfyUI Interface for VS Code
Version 0.5.1 Alpha
VS Marketplace
Download Nightly
- VS Code Extension
- ComfyUI Bridge
- Text to Image
- Image to Image
- Image Inpainting
- Image Outpainting
- Painting Canvas
- Quick Mask Painting
- LoRA and Custom VAE
- Upscaler 4x
- PNG Metadata
- Canvas editor
- Grid editor
- Model Merging
- Styles popup (experimental)
- Batch rendering
- Preview and progress bar
- Resolution presets
- Drag and drop images
- Image comparison A/B
- Split and swap image A/B
- Pan and zoom canvas
- Works completely offline
- Ad-free, no trackers
- Theme color
- Custom workflows
- Image output directory
- Editable styles.json
- Resizable windows/editors
- Preferences
- Extension Settings
- Get Visual Studio Code
- Get ComfyUI and set up models
- Download RealESRGAN_x4plus.pth to "ComfyUI/models/upscale_models"
- Get Mental Diffusion from the Marketplace or download the nightly build
- Open Extension Settings
- Define the ComfyUI source path and Python path
- Run MD
  - MD: ComfyUI server + MD UI
  - MDC: ComfyUI server only
  - MDU: MD UI only
You can also run the ComfyUI server standalone with these two arguments:
main.py --enable-cors-header --preview-method auto
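For reference, here is a minimal Python sketch of launching the ComfyUI server standalone with those same arguments. The install path is an assumption; point `comfyui_dir` at the ComfyUI source path you set in the extension settings.

```python
import subprocess
import sys
from pathlib import Path

# Assumed ComfyUI location; replace with your own ComfyUI source path.
comfyui_dir = Path.home() / "ComfyUI"

# Start the server with the same arguments MD passes to it.
subprocess.run(
    [sys.executable, "main.py", "--enable-cors-header", "--preview-method", "auto"],
    cwd=comfyui_dir,
    check=True,
)
```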
- Marketplace: Update the extension like any other extension
- Nightly: Run "update.py" to update MD to the latest version
Notice: "update.py" preserve user data, but set custom paths in extension settings to avoid data loss when updating from the marketplace or vsix.
The canvas consists of 3 layers:
- Front (A): Painting canvas (to paint and mask)
- Middle (A): Image canvas (editable using Canvas Editor)
- Back (B): Background image (to compare, preview or store image)
Important: If you want your painting/adjustments to be combined with the original image, you need to "Bake" the canvas or check "Auto bake". This is useful when copying, saving, swapping, splitting, or dragging the canvas, since you sometimes need to drag or save the original image.
- MD uses PNG files to save and load metadata (see the sketch after this list)
- MD can load single or multiple PNG files
- Your data is safe and can be loaded again as long as "Autosave File" is checked
- You can guide the image-to-image using brush strokes and color adjustments
- To create an airbrush effect, decrease the brush size and increase the softness
- If you swap the image A/B, all changes will be applied to the left image
- To create a mask image, draw using the Mask tool or check the Mask mode
- The upscaled image is saved to a file and is not returned to MD
- LoRA and VAE are supported by all workflows
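As a quick way to inspect what MD stored in a saved render, here is a minimal sketch using Pillow. It assumes the metadata lives in PNG text chunks (the usual place for PNG metadata) and that the file is named "render.png"; the exact keys MD writes are not documented here, so the loop simply prints whatever it finds.

```python
from PIL import Image

# Open a PNG saved by MD and list its text-chunk metadata.
with Image.open("render.png") as img:
    # PngImageFile exposes tEXt/zTXt/iTXt chunks via the .text property.
    for key, value in img.text.items():
        print(f"{key}: {value[:120]}")
```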
Workflow | Strength | How To |
---|---|---|
TXT2IMG | 1.0 | Select "Text to Image" workflow and render |
IMG2IMG | 0.01 - 0.99 | Select "Image to Image" workflow and load the "Initial Image" |
INPAINT | 1.0 | Select "Inpaint" workflow and draw the mask |
INPAINT PRO | 0.01 - 1.0 | Select "Inpaint Pro" workflow, load the "Initial Image" and draw the mask (this workflow allows the strength amount to be used for inpainting) |
OUTPAINT | 1.0 | Select "Inpaint" workflow and open "Outpaint/Crop" window to set paddings |
Inpaint example (top is the original):
Outpaint examples (padding 128 and 256):
Button | Action |
---|---|
Left Button | Drag, draw, select |
Middle Button | Reset pan and fit |
Right Button | Pan canvas |
Wheel | Zoom canvas in/out |
Key | Action |
---|---|
0 - 9 | Select workflows |
B | Bake canvas |
D | Drag tool |
B | Brush tool |
L | Line tool |
E | Eraser tool |
M | Mask tool |
I | Activate eyedropper |
R | Reset canvas zoom |
] | Increase tool size |
[ | Decrease tool size |
+ | Increase tool opacity |
- | Decrease tool opacity |
CTRL + Enter | Render/Generate |
CTRL + L | Load PNG metadata |
CTRL + Z | Undo painting |
CTRL + X | Redo painting |
Notice: Experimental; a more user-friendly solution is needed.
You can load any ComfyUI workflow API into Mental Diffusion.
Just copy the JSON file to the ".workflows" directory and replace the tags:
- Create "my_workflow_api.json" file in ".workflows" directory
- Replace supported tags (with quotation marks)
- Reload webui to refresh workflows
- Select workflow and hit Render button
"_seed_"
"_steps_"
"_cfg_"
"_sampler_name_"
"_scheduler_"
"_denoise_"
"_ckpt_name_"
"_vae_name_"
"_lora_name_"
"_lora_strength_"
"_width_"
"_height_"
"_widthx2_"
"_heightx2_"
"_positive_"
"_negative_"
"_image_init_"
"_image_mask_"
"_is_changed_" (to force comfyui to update the input image)
- Tags are optional; use only the ones you need
- See the examples in the "configs" directory and the sketch below
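To make the tag mechanism concrete, here is a minimal Python sketch of a tagged KSampler node as it might appear in "my_workflow_api.json", plus one possible way the quoted tags get substituted before the prompt is sent to ComfyUI. The node id "3", the substitution code, and the sample values are assumptions for illustration, not MD's actual implementation; see the files in the "configs" directory for real workflows.

```python
import json

# Hypothetical fragment of ".workflows/my_workflow_api.json": the KSampler
# inputs hold MD's quoted placeholder tags instead of literal values.
tagged = """
{
  "3": {
    "class_type": "KSampler",
    "inputs": {
      "seed": "_seed_",
      "steps": "_steps_",
      "cfg": "_cfg_",
      "sampler_name": "_sampler_name_",
      "scheduler": "_scheduler_",
      "denoise": "_denoise_"
    }
  }
}
"""

# Each tag is replaced together with its quotation marks, so numeric values
# end up unquoted in the final JSON.
values = {
    '"_seed_"': "123456",
    '"_steps_"': "20",
    '"_cfg_"': "7.0",
    '"_sampler_name_"': '"euler"',
    '"_scheduler_"': '"normal"',
    '"_denoise_"': "1.0",
}
for tag, value in values.items():
    tagged = tagged.replace(tag, value)

workflow = json.loads(tagged)
print(workflow["3"]["inputs"])  # {'seed': 123456, 'steps': 20, ...}
```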
How to generate a "workflow_api.json" file?
- Open ComfyUI at http://localhost:8188/
- Open settings (gear icon)
- Check "Enable Dev mode options"
- Click "Save (API Format)"
↑ Ported to VS Code
↑ Upgrade from Diffusers to ComfyUI
↑ Upgrade from sdkit to Diffusers
↑ Undiff renamed to Mental Diffusion
↑ Undiff started with "sdkit"
↑ Created for my personal use
Code released under the MIT license.
Mental Diffusion is not directly related to ComfyUI or the ComfyUI team, and it does not modify or hook into ComfyUI files.