As some of you already know, I have resigned from Stability AI and am starting a new chapter. I am partnering with mcmonkey4eva, Dr.Lt.Data, pythongossssss, robinken, and yoland68 to start Comfy Org. We will continue to develop and improve ComfyUI with a lot more resources.

As you might have noticed, I am a bit overwhelmed by everything so far. Going forward, the team will work on solving many of the issues around ComfyUI while continuing to keep it on the cutting edge.

Some of the main focuses:

  • The primary focus will be on developing ComfyUI into the best free and open-source software for running inference on AI models.

  • The focus will mainly be on image, video, and audio models, in that order, with the potential to add more modalities in the future.

  • The team will focus on making ComfyUI more comfortable to use. This includes iterating on the custom node registry and enforcing some basic standards to make custom nodes safer to install.

I believe that true open source is the best way forward and hope to make ComfyUI succeed so well that it will inspire companies to join the open source effort. I personally believe that closed source AI is a dead end and a waste of time.

Thank you, everyone, for supporting Comfy, for contributing, for writing custom nodes and for being part of the Comfy ecosystem. The future is truly Comfy.

Be sure to check the Comfy Org blog for more updates: https://blog.comfy.org/

What’s new in ComfyUI

For those who missed them, these are the major updates to ComfyUI from the last few weeks.

SD3 support

You can find basic examples here: https://comfyanonymous.github.io/ComfyUI_examples/sd3/
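
If you would rather drive an SD3 workflow from code than from the UI, you can queue a graph against a running ComfyUI server over its HTTP API (POST /prompt, default port 8188). Here is a minimal sketch; the checkpoint filename, prompt, and sampler settings are placeholders you should adapt, and it assumes an SD3 checkpoint that bundles its text encoders sitting in models/checkpoints:

```python
import json
import urllib.request

# Minimal SD3 text-to-image graph in ComfyUI's API format.
# Node IDs are arbitrary strings; ["node_id", slot] references an output slot.
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          # Placeholder: an SD3 checkpoint that includes its text encoders.
          "inputs": {"ckpt_name": "sd3_medium_incl_clips.safetensors"}},
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a photo of a fox in a forest", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",  # negative prompt
          "inputs": {"text": "", "clip": ["1", 1]}},
    "4": {"class_type": "EmptySD3LatentImage",
          "inputs": {"width": 1024, "height": 1024, "batch_size": 1}},
    "5": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "positive": ["2", 0],
                     "negative": ["3", 0], "latent_image": ["4", 0],
                     "seed": 42, "steps": 28, "cfg": 4.5,
                     "sampler_name": "euler", "scheduler": "sgm_uniform",
                     "denoise": 1.0}},
    "6": {"class_type": "VAEDecode",
          "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
    "7": {"class_type": "SaveImage",
          "inputs": {"images": ["6", 0], "filename_prefix": "sd3"}},
}

# Queue the prompt on a locally running ComfyUI server (default port 8188).
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The response includes a prompt_id, which you can use to poll the /history endpoint for the finished images.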

Stable Audio Support

I have not added an example for this one to the examples page yet because there are still a few things left to do, but it should work. If you want to give it a try, you can find a workflow here: https://gist.github.com/comfyanonymous/0e04181f7fd01301230adc106b691cc2
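
The same HTTP API works for scripting this workflow too. Export it in API format from the UI (enable the dev mode option in the settings to get the "Save (API Format)" button), then queue it against a local server; the filename below is a placeholder:

```python
import json
import urllib.request

# Load a workflow exported from the UI via "Save (API Format)".
with open("stable_audio_workflow.json") as f:  # placeholder filename
    workflow = json.load(f)

# Queue it on a locally running ComfyUI server.
req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode("utf-8"))
```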

TensorRT support

Thanks to Nvidia, there are now TensorRT nodes for ComfyUI. These can be used to compile models into TensorRT engine files, which gives a massive speed boost during inference.
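
Since the exact conversion and loader node names may change between versions of the TensorRT node pack, a safe first step when scripting them is to ask the server what is actually installed. This sketch uses ComfyUI's /object_info endpoint (assuming a local server on the default port) to list the node classes whose names mention TensorRT, along with their required inputs:

```python
import json
import urllib.request

# Ask a locally running ComfyUI server which node classes it has registered.
# /object_info describes every node, including its required inputs, so you
# can confirm the exact TensorRT conversion/loader node names and parameters
# before wiring them into a workflow.
with urllib.request.urlopen("http://127.0.0.1:8188/object_info") as resp:
    nodes = json.load(resp)

for name, info in nodes.items():
    if "tensorrt" in name.lower() or "trt" in name.lower():
        required = info.get("input", {}).get("required", {})
        print(name, "->", list(required.keys()))
```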