kijai/ComfyUI-CogVideoXWrapper
WORK IN PROGRESS

Update7

  • Refactored the Fun version's sampler to accept any resolution, which should make it a lot simpler to use with Tora. BREAKS OLD WORKFLOWS: old FunSampler nodes need to be remade.
  • The old bucket resizing now lives in its own node (CogVideoXFunResizeToClosestBucket) to keep the functionality available. I honestly don't know if it matters at all, but just in case.
  • The Fun version's vid2vid is now also in the same node; the old vid2vid node is deprecated.
  • Added support for FasterCache, which trades more VRAM use for speed at a slight quality cost, similar to PAB: https://github.com/Vchitect/FasterCache
  • Improved torch.compile support; it actually works now (see the sketch after this list)
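For reference, here's roughly what enabling torch.compile on the model looks like under the hood. This is a minimal sketch against the plain diffusers pipeline, not this wrapper's node code:

    import torch
    from diffusers import CogVideoXPipeline

    pipe = CogVideoXPipeline.from_pretrained(
        "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
    ).to("cuda")

    # Compile only the transformer; the VAE and text encoder rarely benefit.
    # The first sampling call triggers compilation, later calls reuse the graph.
    pipe.transformer = torch.compile(pipe.transformer, mode="max-autotune", fullgraph=False)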

Update6

Initial support for Tora (https://github.com/alibaba/Tora)

Converted model (included in the autodownload node):

https://huggingface.co/Kijai/CogVideoX-5b-Tora/tree/main

[demo video: chrome_HvGTlk5515.mp4]

Update5

This week there have been some bigger updates that will most likely affect some old workflows; the sampler node especially probably needs to be refreshed (re-created) if it errors out!

New features:

  • Initial context windowing with FreeNoise noise shuffling, mainly for the vid2vid and pose2vid pipelines for longer generations; haven't figured it out for img2vid yet
  • GGUF models and tiled encoding for the I2V and pose pipelines (thanks to MinusZoneAI)
  • sageattention support (Linux only) for a speed boost; I saw a ~20-30% increase with it. It stacks with fp8 fast mode and doesn't need compiling (see the sketch after this list)
  • Support for CogVideoX-Fun 1.1 and its pose models, with additional control strength and application step settings. This model's input does NOT have to be just dwpose skeletons; just about anything can work
  • Support for LoRAs
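For the curious, sageattention acts as a drop-in replacement for PyTorch's scaled_dot_product_attention. A minimal sketch of the kernel call (the shapes are illustrative and exact argument names can differ between sageattention versions):

    import torch
    from sageattention import sageattn

    # q, k, v in (batch, heads, seq_len, head_dim) layout, fp16/bf16 on CUDA
    q = torch.randn(2, 48, 4096, 64, dtype=torch.float16, device="cuda")
    k = torch.randn(2, 48, 4096, 64, dtype=torch.float16, device="cuda")
    v = torch.randn(2, 48, 4096, 64, dtype=torch.float16, device="cuda")

    # Same role as torch.nn.functional.scaled_dot_product_attention(q, k, v)
    out = sageattn(q, k, v, is_causal=False)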
[demo video: CogVideoX_Fun_Pose_00133.mp4]
[demo video: cogvideox_pose_test.mp4]
[demo video: cogvideox_pose_depth_walk_test.mp4]

Update4

Initial support for the official I2V version of CogVideoX: https://huggingface.co/THUDM/CogVideoX-5b-I2V

Also needs diffusers 0.30.3
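For orientation, this is roughly how the official I2V model is driven through plain diffusers 0.30.3 (a sketch of the upstream pipeline, not of this wrapper's nodes; the input path and prompt are made up):

    import torch
    from diffusers import CogVideoXImageToVideoPipeline
    from diffusers.utils import load_image, export_to_video

    pipe = CogVideoXImageToVideoPipeline.from_pretrained(
        "THUDM/CogVideoX-5b-I2V", torch_dtype=torch.bfloat16
    ).to("cuda")

    image = load_image("input.png")  # hypothetical input image
    frames = pipe(
        prompt="a ship sailing through stormy seas",
        image=image,
        num_frames=49,
        num_inference_steps=50,
        guidance_scale=6.0,
    ).frames[0]
    export_to_video(frames, "output.mp4", fps=8)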

[demo video: chrome_jvZuPWOzUV.mp4]

Update3

Added initial support for CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun

Note that while this one can do image2vid, it is NOT the official I2V model, though that should also be released very soon.

[demo video: chrome_klXjpmvAd4.mp4]

Update2

Added experimental support for onediff; this reduced sampling time by ~40% for me, reaching 4.23 s/it on a 4090 with 49 frames. It requires Linux, torch 2.4.0, and installing onediff and nexfort:

pip install --pre onediff onediffx

pip install nexfort

The first run will take around 5 minutes for the compilation.
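After that, compiling a diffusers pipeline with onediff typically looks like this; a minimal sketch assuming onediffx's compile_pipe helper (option names vary between onediff releases):

    import torch
    from diffusers import CogVideoXPipeline
    from onediffx import compile_pipe

    pipe = CogVideoXPipeline.from_pretrained(
        "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
    ).to("cuda")

    # The first call pays the ~5 min compilation cost, later calls reuse the graph.
    pipe = compile_pipe(pipe, backend="nexfort")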

Update

The 5b model is now also supported for basic text2vid: https://huggingface.co/THUDM/CogVideoX-5b

It is also autodownloaded to ComfyUI/models/CogVideo/CogVideoX-5b. The text encoder is not needed, as we use the ComfyUI T5.

[demo video: chrome_sxMlstknXt.mp4]

Requires diffusers 0.30.1 (this is specified in requirements.txt)

Uses the same T5 model as SD3 and Flux, and fp8 works fine too. Memory requirements depend mostly on the video length. VAE decoding seems to be the only stage that takes a lot of VRAM when everything is offloaded, peaking momentarily at around 13-14GB. Sampling itself takes only maybe 5-6GB.
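Since the VAE decode is the peak, the usual diffusers-side mitigations look like this (a sketch against the upstream pipeline; availability of VAE tiling for CogVideoX depends on the diffusers version):

    import torch
    from diffusers import CogVideoXPipeline

    pipe = CogVideoXPipeline.from_pretrained(
        "THUDM/CogVideoX-5b", torch_dtype=torch.bfloat16
    )

    # Keep only the active submodule on the GPU during each stage
    pipe.enable_model_cpu_offload()

    # Decode latents in tiles so the decode peak stays below the 13-14GB spike
    pipe.vae.enable_tiling()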

Hacked in img2img to attempt a vid2vid workflow; it works interestingly with some inputs. Highly experimental.

[demo video: chrome_hrEYWEaEpK.mp4]
[demo video: chrome_BPxEX1OxXP.mp4]

Also added temporal tiling as a means of generating endless videos:

https://github.com/kijai/ComfyUI-CogVideoXWrapper
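Conceptually, temporal tiling samples overlapping frame windows and blends them where they meet, so the video can keep extending. A minimal sketch of the window scheduling (names and sizes here are illustrative, not the wrapper's actual code):

    def temporal_windows(num_frames: int, window: int = 16, overlap: int = 4):
        """Yield (start, end) frame ranges that overlap by `overlap` frames."""
        step = window - overlap
        for start in range(0, max(num_frames - overlap, 1), step):
            yield start, min(start + window, num_frames)

    # Overlapping frames are typically blended with a linear cross-fade so
    # adjacent windows agree where they meet.
    print(list(temporal_windows(49)))  # [(0, 16), (12, 28), (24, 40), (36, 49)]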

[demo video: AnimateDiff_00003.54.mp4]

Original repo: https://github.com/THUDM/CogVideo

CogVideoX-Fun: https://github.com/aigc-apps/CogVideoX-Fun

Controlnet: https://github.com/TheDenk/cogvideox-controlnet