
Date Updated: 17 January 2025
– Minimum: 48GB VRAM (good for training with a few videos)
– Recommended: 80GB VRAM (for larger datasets)
1. Start with the Better Comfy template – this gives you:
– Easy testing of your model after training
– Access to VSCode for file management
– A stable environment for long training sessions
To access VSCode:
– Connect to port 7777
– Open the Command Palette (Ctrl+Shift+P, or Cmd+Shift+P on macOS)
– Run Terminal: Create New Terminal to launch a terminal
First, get the main repository:
git clone --recurse-submodules https://github.com/tdrussell/diffusion-pipe
cd diffusion-pipe
pip install -r requirements.txt
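Optional sanity check before moving on: confirm the submodules actually cloned and that PyTorch can see your GPU. Both are generic checks, not diffusion-pipe commands:
# Should list the populated submodules
git submodule status
# Should print True and your GPU name
python -c "import torch; print(torch.cuda.is_available(), torch.cuda.get_device_name(0))"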
Next, install Git LFS (Large File Storage):
apt-get update
apt-get install -y git git-lfs
git lfs install
mkdir models
cd models
# Get the video transformer and VAE files
wget https://huggingface.co/Kijai/HunyuanVideo_comfy/resolve/main/hunyuan_video_720_cfgdistill_bf16.safetensors
wget https://huggingface.co/Kijai/HunyuanVideo_comfy/resolve/main/hunyuan_video_vae_bf16.safetensors
# Get the LLM and CLIP models
git clone https://huggingface.co/Kijai/llava-llama-3-8b-text-encoder-tokenizer/
git clone https://huggingface.co/openai/clip-vit-large-patch14
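The transformer file in particular is large, so once the downloads and clones finish, list the folder to confirm everything landed where the config will expect it:
# Verify the model files and cloned repos
ls -lh /workspace/diffusion-pipe/models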
1. Navigate to the configuration folder:
– Open VSCode
– Go to /workspace/diffusion-pipe/examples
– Find and edit hunyuan_video.toml, or download my TOML files (in the resources file attached to this article) and upload them here
2. Update the model paths in hunyuan_video.toml:
transformer_path = '/workspace/diffusion-pipe/models/hunyuan_video_720_cfgdistill_bf16.safetensors'
vae_path = '/workspace/diffusion-pipe/models/hunyuan_video_vae_bf16.safetensors'
llm_path = '/workspace/diffusion-pipe/models/llava-llama-3-8b-text-encoder-tokenizer'
clip_path = '/workspace/diffusion-pipe/models/clip-vit-large-patch14'
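For orientation, in the diffusion-pipe example config these paths live in the [model] section. The sketch below shows roughly how it fits together; key names can shift between diffusion-pipe versions, so verify against the file you actually edited rather than treating this as a drop-in replacement:
# Sketch of the [model] section in hunyuan_video.toml
[model]
type = 'hunyuan-video'
transformer_path = '/workspace/diffusion-pipe/models/hunyuan_video_720_cfgdistill_bf16.safetensors'
vae_path = '/workspace/diffusion-pipe/models/hunyuan_video_vae_bf16.safetensors'
llm_path = '/workspace/diffusion-pipe/models/llava-llama-3-8b-text-encoder-tokenizer'
clip_path = '/workspace/diffusion-pipe/models/clip-vit-large-patch14'
dtype = 'bfloat16'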
mkdir /workspace/diffusion-pipe/data
mkdir /workspace/diffusion-pipe/data/input
mkdir /workspace/diffusion-pipe/data/output
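diffusion-pipe reads captions from plain-text files sitting next to each video with the same base name. A hypothetical input folder (file names here are just examples) would look like:
/workspace/diffusion-pipe/data/input/clip_001.mp4
/workspace/diffusion-pipe/data/input/clip_001.txt   # caption for clip_001.mp4
/workspace/diffusion-pipe/data/input/clip_002.mp4
/workspace/diffusion-pipe/data/input/clip_002.txt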
In dataset.toml, point the dataset at your input folder:
path = '/workspace/diffusion-pipe/data/input'
In hunyuan_video.toml, set the output folder:
output_dir = '/workspace/diffusion-pipe/data/output'
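For context, a minimal dataset.toml along the lines of the diffusion-pipe examples looks roughly like this; the resolution, frame-bucket, and repeat values are illustrative and should be tuned to your clips:
# Minimal dataset.toml sketch (values are illustrative)
resolutions = [512]          # training resolution(s)
enable_ar_bucket = true      # bucket clips by aspect ratio
frame_buckets = [1, 33]      # allowed frame counts (1 = still images)

[[directory]]
path = '/workspace/diffusion-pipe/data/input'
num_repeats = 5              # how often each clip is seen per epoch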
Launch the training with this command:
Note: make sure your terminal is inside the diffusion-pipe folder before launching, and double-check every path in the TOML files.
NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" deepspeed --num_gpus=1 train.py --deepspeed --config examples/hunyuan_video.toml
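If the pod restarts or you stop a run, recent versions of diffusion-pipe can resume from the last saved checkpoint via a --resume_from_checkpoint flag; confirm against the README of your checkout before relying on it:
NCCL_P2P_DISABLE="1" NCCL_IB_DISABLE="1" deepspeed --num_gpus=1 train.py --deepspeed --config examples/hunyuan_video.toml --resume_from_checkpoint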
To test your LoRAs while training is ongoing:
– Set up a separate pod for testing
– Use the HunyuanVideo Lora Select node in ComfyUI
– Upload your LoRA files to the /loras/ folder
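While training runs, checkpoints land in the output_dir you set earlier, with the LoRA weights saved per epoch. The exact folder layout below is an assumption, so list your own output first, then copy a checkpoint over to the testing pod's loras folder:
# See what the trainer has saved so far
ls /workspace/diffusion-pipe/data/output
# Copy one saved LoRA for testing (run/epoch names and the ComfyUI path are illustrative)
cp /workspace/diffusion-pipe/data/output/<run>/epoch10/adapter_model.safetensors /path/to/ComfyUI/models/loras/my_lora_epoch10.safetensors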
Tip: For early testing, you can increase the strength to preview training progress, but be aware that values above 2.0 may cause artifacts.
Embedded Guidance Scale
This parameter controls your model’s creativity level:
– Lower values → Results closer to training data
– Similar to CFG in Stable Diffusion
– Start with default values and adjust based on results
Flow Shift
This parameter controls frame-to-frame variation:
– Controls how much each frame can differ from the previous one
– Lower values (2-4): Better for photorealistic results
– Higher values: Allow more dramatic movements but may introduce artificial looks
– Effects vary by prompt and seed
Finding the Right Balance:
– Start with moderate values for both parameters
– Keep track of successful seeds
– Make small adjustments to fine-tune results
– Document your settings for reproducible results
For Photorealistic Results:
– Use lower flow shift values (2-4)
– Keep guidance scale moderate
– Focus on stable, smooth transitions
For Dynamic Movements:
– Use higher flow shift values
– Watch for artificial shininess at the high end
– Re-test across several seeds, since effects vary by prompt and seed
Blurry Output:
– Flow shift might be too low
– Try gradually increasing the value
Artificial Shininess:
– Flow shift might be too high
– Reduce the value for more natural looks
Inconsistent Results:
– Document successful seed numbers
– Keep detailed notes of parameter combinations
– Test multiple variations of the same settings
– Always test multiple parameter combinations
– Save successful settings and seeds
– Use a separate pod for testing while training continues
– Start with conservative values and adjust gradually
– Keep your training data well-organized and clearly labeled
Remember that achieving the perfect result often requires experimentation and patience. Each project may need different settings, so don’t be afraid to try various combinations.