ComfyUI Update: What's new from the last few weeks, SD Turbo, Stable Zero123, Group Nodes, FP8, and more.
Here’s what’s new recently in ComfyUI. Since this is most likely going to be the final post this year, I would like to give a big thanks to everyone who contributed code and reported bugs, and to Stability AI.
Here’s the link to the previous update in case you missed it.
SDXL Turbo
The workflow for live prompting with SDXL Turbo can be found on the SDXL Turbo examples page here. There is also an SD2.1 Turbo model, which can be found here and used in the same workflow if SDXL is too slow on your machine.
Frontend Improvements
Group nodes: You can now select multiple nodes and Right Click -> Convert to Group Node. This lets you combine multiple nodes into a single one.
Undo Redo: CTRL-Z and CTRL-Y can now be used to undo and redo.
Reroute nodes can now be used with Primitive Nodes.
Stable Zero123
Stable Zero123 is an SD1.x-based model that can generate different views of an object. Given a photograph of an object on a simple, solid-color background, it can generate images of what that object looks like from different angles. Normally this model is meant to be used to generate 3d models from a photograph by generating a bunch of images at different angles and then using photogrammetry to create a 3d model from those images. In ComfyUI I only implemented the diffusion model itself.
An example of how to use it can be found on the examples page here.
FP8 support
Pytorch supports storing weights in two different fp8 formats: e4m3fn and e5m2. I only recommend using fp8 if you are running out of memory.
--fp8_e4m3fn-unet
or --fp8_e5m2-unet
can be used to store the MODEL (unet) weights in memory in either of those fp8 formats.
--fp8_e4m3fn-text-enc
or --fp8_e5m2-text-enc
can be used to store the CLIP (text encoder) weights in memory in either of those fp8 formats.
fp8 has a slight performance penalty and makes generated images slightly lower quality, but it greatly reduces memory usage. I think e4m3fn looks best, but I have seen some people prefer e5m2, so if you want to use fp8, try both and pick the one you like best.
To have both CLIP and the MODEL store their weights in fp8 e4m3fn you would launch ComfyUI like this:
python main.py --fp8_e4m3fn-text-enc --fp8_e4m3fn-unet
If you are using the standalone Windows package, copy run_nvidia_gpu.bat to run_nvidia_fp8.bat, edit it with notepad, add those arguments to it, and then use it to run ComfyUI.
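For reference, here is a minimal sketch of what the edited run_nvidia_fp8.bat could look like, assuming your run_nvidia_gpu.bat matches the current standalone package (yours may differ slightly, in which case just append the fp8 arguments to the line that launches main.py):

:: same launch line as run_nvidia_gpu.bat, with the two fp8 arguments appended
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fp8_e4m3fn-text-enc --fp8_e4m3fn-unet
pause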
Python 3.12 and Pytorch nightly 2.3
For those who want the bleeding edge and maximum performance, the standalone package with nightly Pytorch for Windows has been updated to Python 3.12 and Pytorch nightly 2.3. It can be found on the releases page. The regular standalone uses Python 3.11 and Pytorch 2.1.2.
Self Attention Guidance
The _for_testing->SelfAttentionGuidance node implements Self Attention Guidance, which makes images sharper and more consistent at the cost of slower generation times.
PerpNeg
The _for_testing->PerpNeg node implements PerpNeg. It makes some images more consistent by giving the negative prompt a more precise effect, at the expense of performance. To use it properly, make sure “empty_conditioning” is connected to an empty CLIP Text Encode node.
Segmind Vega Model
To use it, just download the Segmind Vega checkpoint and put it in the ComfyUI/models/checkpoints/ folder. You can then use it like a regular SDXL checkpoint.
The Segmind Vega LCM lora is also supported. To use it, download it from the official repo, rename it, and put it in the ComfyUI/models/loras/ folder. You can then use the LCM Lora example with the Segmind Vega LCM lora and the Segmind Vega checkpoint. You might have to lower the cfg to 1.2 to get good images.
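As a rough sketch of where the files end up, assuming you have already downloaded the checkpoint and the LCM lora from the official repos (the source file names below are placeholders, and the renamed lora file name is also just an example; check the repos for the real names):

# checkpoint goes in the checkpoints folder
cp segmind_vega.safetensors ComfyUI/models/checkpoints/
# rename the LCM lora to something recognizable and put it in the loras folder
cp pytorch_lora_weights.safetensors ComfyUI/models/loras/segmind_vega_lcm_lora.safetensors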
Other
A SaveAnimatedPNG node was added that can save animated PNGs, a format that works well with ffmpeg, unlike animated webp (see the ffmpeg example below).
--deterministic
can now be used to make pytorch try to use slower deterministic algorithms, which might help with reproducing images.
--gpu-only
now puts intermediate values in GPU memory instead of CPU memory.
GLora files are now supported.
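As an example of the ffmpeg side mentioned above, here is a rough sketch of converting an animated PNG saved by the SaveAnimatedPNG node into an mp4. The input file name is just a placeholder for whatever your workflow saved, and -f apng simply forces ffmpeg's animated PNG demuxer in case it is not autodetected:

ffmpeg -f apng -i ComfyUI_00001_.png -pix_fmt yuv420p output.mp4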
Interesting ComfyUI projects
ComfyUI in Blender: https://github.com/AIGODLIKE/ComfyUI-BlenderAI-node
PixArt, PixArt LCM for ComfyUI and more: https://github.com/city96/ComfyUI_ExtraModels
ComfyUI Portrait Master Custom Node