Official repository for LTX-Video
LTX-Video is the first DiT-based video generation model that can generate high-quality videos in real time. It can generate 30 FPS videos at 1216×704 resolution faster than it takes to watch them. The model is trained on a large-scale dataset of diverse videos and can generate high-resolution videos with realistic and diverse content.
The model supports text-to-video, image-to-video, keyframe-based animation, video extension (both forward and backward), video-to-video transformations, and any combination of these features.
| Prompt | Description |
|---|---|
| A woman with long brown hair and light skin smiles at another woman... | A woman with long brown hair and light skin smiles at another woman with long blonde hair. The woman with brown hair wears a black jacket and has a small, barely noticeable mole on her right cheek. The camera angle is a close-up, focused on the woman with brown hair's face. The lighting is warm and natural, likely from the setting sun, casting a soft glow on the scene. The scene appears to be real-life footage. |
| A clear, turquoise river flows through a rocky canyon... | A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall and forming a pool of water at the bottom. The river is the main focus of the scene, with its clear water reflecting the surrounding trees and rocks. The canyon walls are steep and rocky, with some vegetation growing on them. The trees are mostly pine trees, with their green needles contrasting with the brown and gray rocks. The overall tone of the scene is one of peace and tranquility. |
| Two police officers in dark blue uniforms and matching hats... | Two police officers in dark blue uniforms and matching hats enter a dimly lit room through a doorway on the left side of the frame. The first officer, with short brown hair and a mustache, steps inside first, followed by his partner, who has a shaved head and a goatee. Both officers have serious expressions and maintain a steady pace as they move deeper into the room. The camera remains stationary, capturing them from a slightly low angle as they enter. The room has exposed brick walls and a corrugated metal ceiling, with a barred window visible in the background. The lighting is low-key, casting shadows on the officers' faces and emphasizing the grim atmosphere. The scene appears to be from a film or television show. |
| A woman with light skin, wearing a blue jacket and a black hat... | A woman with light skin, wearing a blue jacket and a black hat with a veil, looks down and to her right, then back up as she speaks; she has brown hair styled in an updo, light brown eyebrows, and is wearing a white collared shirt under her jacket; the camera remains stationary on her face as she speaks; the background is out of focus, but shows trees and people in period clothing; the scene is captured in real-life footage. |
| A man in a dimly lit room talks on a vintage telephone... | A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down with a sad expression. He holds the black rotary phone to his right ear with his right hand, his left hand holding a rocks glass with amber liquid. He wears a brown suit jacket over a white shirt, and a gold ring on his left ring finger. His short hair is neatly combed, and he has light skin with visible wrinkles around his eyes. The camera remains stationary, focused on his face and upper body. The room is dark, lit only by a warm light source off-screen to the left, casting shadows on the wall behind him. The scene appears to be from a movie. |
| Name | Notes | inference.py config | ComfyUI workflow (Recommended) |
|---|---|---|---|
| ltxv-13b-0.9.7-dev | Highest quality, requires more VRAM | ltxv-13b-0.9.7-dev.yaml | ltxv-13b-i2v-base.json |
| ltxv-13b-0.9.7-mix | Mix ltxv-13b-dev and ltxv-13b-distilled in the same multi-scale rendering workflow for balanced speed-quality | N/A | ltxv-13b-i2v-mixed-multiscale.json |
| ltxv-13b-0.9.7-distilled | Faster, less VRAM usage, slight quality reduction compared to 13b. Ideal for rapid iterations | ltxv-13b-0.9.7-distilled.yaml | ltxv-13b-dist-i2v-base.json |
| ltxv-13b-0.9.7-distilled-lora128 | LoRA to make ltxv-13b-dev behave like the distilled model | N/A | N/A |
| ltxv-13b-0.9.7-fp8 | Quantized version of ltxv-13b | Coming soon | ltxv-13b-i2v-base-fp8.json |
| ltxv-13b-0.9.7-distilled-fp8 | Quantized version of ltxv-13b-distilled | Coming soon | ltxv-13b-dist-i2v-base-fp8.json |
| ltxv-2b-0.9.6 | Good quality, lower VRAM requirement than ltxv-13b | ltxv-2b-0.9.6-dev.yaml | ltxvideo-i2v.json |
| ltxv-2b-0.9.6-distilled | 15× faster, real-time capable, fewer steps needed, no STG/CFG required | ltxv-2b-0.9.6-distilled.yaml | ltxvideo-i2v-distilled.json |
Online inference
The model is accessible right away via the following links:
The codebase was tested with Python 3.10.5, CUDA version 12.2, and supports PyTorch >= 2.1.2. On macOS, MPS was tested with PyTorch 2.3.0; PyTorch == 2.3 or >= 2.6 should be supported.
git clone https://github.com/Lightricks/LTX-Video.git
cd LTX-Video
# create env
python -m venv env
source env/bin/activate
python -m pip install -e .[inference-script]
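After installation, a quick sanity check can confirm that the PyTorch build and the available accelerator match the versions listed above. This is a minimal sketch, not part of the repository:

```python
# Environment sanity check (illustrative; not part of the LTX-Video codebase).
import platform

import torch

print("Python:", platform.python_version())                 # tested with 3.10.5
print("PyTorch:", torch.__version__)                        # >= 2.1.2 (CUDA), 2.3 / >= 2.6 (MPS)
print("CUDA available:", torch.cuda.is_available())         # expected True on NVIDIA GPUs
print("MPS available:", torch.backends.mps.is_available())  # expected True on Apple Silicon
```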
Note: For best results, we recommend using our ComfyUI workflow. We're working on updating the inference.py script to match the high quality and output fidelity of ComfyUI.
To use our model, please follow the inference code in inference.py:
For text-to-video generation:
python inference.py --prompt "PROMPT" --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
For image-to-video generation:
python inference.py --prompt "PROMPT" --conditioning_media_paths IMAGE_PATH --conditioning_start_frames 0 --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
Extending a video:
Note: Input video segments must contain a multiple of 8 frames plus 1 (e.g., 9, 17, 25, etc.), and the target frame number should be a multiple of 8.
python inference.py --prompt "PROMPT" --conditioning_media_paths VIDEO_PATH --conditioning_start_frames START_FRAME --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
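If you compute segment lengths or start frames programmatically, the hypothetical helpers below (not part of inference.py) snap values to the constraints from the note above, segment lengths of a multiple of 8 plus 1 and start frames on multiples of 8:

```python
# Hypothetical helpers for the frame-count constraints; not part of inference.py.

def snap_segment_length(num_frames: int) -> int:
    """Round a conditioning-segment length to the nearest valid value 8*k + 1 (9, 17, 25, ...)."""
    k = max(1, round((num_frames - 1) / 8))
    return 8 * k + 1

def check_start_frame(start_frame: int) -> int:
    """Conditioning start frames should land on a multiple of 8."""
    if start_frame % 8 != 0:
        raise ValueError(f"start_frame={start_frame} is not a multiple of 8")
    return start_frame

print(snap_segment_length(20))  # -> 17
print(check_start_frame(24))    # -> 24
```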
For video generation with multiple conditions: You can now generate a video conditioned on a set of images and/or short video segments. Simply provide a list of paths to the images or video segments you want to condition on, along with their target frame numbers in the generated video. You can also specify the conditioning strength for each item (default: 1.0).
python inference.py --prompt "PROMPT" --conditioning_media_paths IMAGE_OR_VIDEO_PATH_1 IMAGE_OR_VIDEO_PATH_2 --conditioning_start_frames TARGET_FRAME_1 TARGET_FRAME_2 --height HEIGHT --width WIDTH --num_frames NUM_FRAMES --seed SEED --pipeline_config configs/ltxv-13b-0.9.7-distilled.yaml
To use our model with ComfyUI, please follow the instructions at https://github.com/Lightricks/ComfyUI-LTXVideo/.
To use our model with the Diffusers Python library, check out the official documentation.
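As a starting point, a minimal text-to-video sketch with Diffusers might look like the following; the model ID, resolution, frame count, and step count mirror the example in the Diffusers documentation and should be adjusted to the variant you use:

```python
import torch
from diffusers import LTXPipeline
from diffusers.utils import export_to_video

# Settings below follow the Diffusers documentation example and are illustrative.
pipe = LTXPipeline.from_pretrained("Lightricks/LTX-Video", torch_dtype=torch.bfloat16)
pipe.to("cuda")

video = pipe(
    prompt="A clear, turquoise river flows through a rocky canyon, cascading over a small waterfall.",
    negative_prompt="worst quality, inconsistent motion, blurry, jittery, distorted",
    width=704,
    height=480,
    num_frames=161,          # a multiple of 8 plus 1
    num_inference_steps=50,
).frames[0]

export_to_video(video, "output.mp4", fps=24)
```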
Diffusers also supports an 8-bit version of LTX-Video; see the details below.
When writing prompts, focus on detailed, chronological descriptions of actions and scenes. Include specific movements, appearances, camera angles, and environmental details - all in a single flowing paragraph. Start directly with the action, and keep descriptions literal and precise. Think like a cinematographer describing a shot list. Keep within 200 words. For best results, build your prompts using this structure:
1. Start with the main action in a single sentence
2. Add specific details about movements and gestures
3. Describe character/object appearances precisely
4. Include background and environment details
5. Specify camera angles and movements
6. Describe lighting and colors
7. Note any changes or sudden events

See the examples above for more inspiration, and the sketch after this list for one way to assemble such a prompt programmatically.
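If you assemble prompts programmatically, one approach is to fill in the pieces in order and join them into a single flowing paragraph. The field names and text below are purely illustrative:

```python
# Illustrative prompt builder following the structure above; field names are arbitrary.
parts = {
    "main_action": "A man in a dimly lit room talks on a vintage telephone, hangs up, and looks down sadly.",
    "movements": "He holds the black rotary phone to his right ear, then slowly sets it back on its cradle.",
    "appearance": "He wears a brown suit jacket over a white shirt and has short, neatly combed hair.",
    "environment": "The room has exposed brick walls and is otherwise bare.",
    "camera": "The camera remains stationary, framing his face and upper body.",
    "lighting": "A warm light source off-screen to the left casts long shadows on the wall behind him.",
}

prompt = " ".join(parts.values())  # a single paragraph, ideally under 200 words
print(prompt)
```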
When using inference.py, short prompts (below prompt_enhancement_words_threshold words) are automatically enhanced by a language model. This is supported for text-to-video and image-to-video (first-frame conditioning). When using LTXVideoPipeline directly, you can enable prompt enhancement by setting enhance_prompt=True.
- Resolution Preset: Use higher resolutions for detailed scenes and lower resolutions for faster generation and simpler scenes. The model works on resolutions divisible by 32 and frame counts that are a multiple of 8 plus 1 (e.g., 257). If the resolution or frame count does not meet these constraints, the input is padded with -1 and then cropped to the desired resolution and frame count. The model works best at resolutions under 720 × 1280 and with fewer than 257 frames; a helper sketch follows below.
- Seed: Save seed values to recreate specific styles or compositions you like.
- Guidance Scale: Values of 3-3.5 are recommended.
- Inference Steps: Use more steps (40+) for quality and fewer steps (20-30) for speed.
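The divisibility rules can also be handled up front. The hypothetical helper below (not part of the repository) snaps spatial dimensions to multiples of 32 and frame counts to a multiple of 8 plus 1:

```python
# Hypothetical helper for the resolution and frame-count rules; not part of the repository.

def snap_dimensions(height: int, width: int, num_frames: int) -> tuple[int, int, int]:
    """Round height/width to multiples of 32 and num_frames to the nearest 8*k + 1."""
    snapped_h = max(32, round(height / 32) * 32)
    snapped_w = max(32, round(width / 32) * 32)
    snapped_f = max(9, 8 * round((num_frames - 1) / 8) + 1)
    return snapped_h, snapped_w, snapped_f

print(snap_dimensions(700, 1200, 120))  # -> (704, 1216, 121)
```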
For advanced parameter usage, please see python inference.py --help
A community project providing additional nodes for enhanced control over the LTX Video model. It includes implementations of advanced techniques like RF-Inversion, RF-Edit, FlowEdit, and more. These nodes enable workflows such as Image and Video to Video (I+V2V), enhanced sampling via Spatiotemporal Skip Guidance (STG), and interpolation with precise frame settings.
Repository: ComfyUI-LTXTricks
Features:
- RF-Inversion: Implements RF-Inversion with an example workflow here.
- RF-Edit: Implements RF-Solver-Edit with an example workflow here.
- FlowEdit: Implements FlowEdit with an example workflow here.
- I+V2V: Enables Video to Video with a reference image. Example workflow.
- Enhance: Partial implementation of STGuidance. Example workflow.
- Interpolation and Frame Setting: Nodes for precise control of latents per frame. Example workflow.
LTX-VideoQ8 is an 8-bit optimized version of LTX-Video, designed for faster performance on NVIDIA ADA GPUs.
Repository: LTX-VideoQ8
Features:
- Up to 3× speed-up with no accuracy loss
- Generate 720×480×121 videos in under a minute on an RTX 4060 (8 GB VRAM)
- Fine-tune 2B transformer models with precalculated latents
Community Discussion: Reddit Thread
Diffusers integration: A diffusers integration for the 8-bit model is already out! Details here
TeaCache is a training-free caching approach that leverages timestep differences across model outputs to accelerate LTX-Video inference by up to 2x without significant visual quality degradation.
Repository: TeaCache4LTX-Video
Features:
- Speeds up LTX-Video inference.
- Adjustable trade-offs between speed (up to 2×) and visual quality using configurable parameters.
- No retraining required: works directly with existing models.
Your contribution is welcome! If you have a project or tool that integrates with LTX-Video, please let us know by opening an issue or pull request.
We provide an open-source repository for fine-tuning the LTX-Video model: LTX-Video-Trainer. This repository supports both the 2B and 13B model variants, enabling full fine-tuning as well as LoRA (Low-Rank Adaptation) fine-tuning for more efficient training. Explore the repository to customize the model for your specific use cases! More information and training instructions can be found in the README.
Want to work on cutting-edge AI research and make a real impact on millions of users worldwide? At Lightricks, an AI-first company, we're revolutionizing how visual content is created. If you are passionate about AI, computer vision, and video generation, we would love to hear from you! Please visit our careers page for more information.
We are grateful for the following awesome projects when implementing LTX-Video:
DiT and PixArt-alpha: vision transformers for image generation.
Our tech report is out! If you find our work helpful, please star the repository and cite our paper.
@article{HaCohen2024LTXVideo,
  title={LTX-Video: Realtime Video Latent Diffusion},
  author={HaCohen, Yoav and Chiprut, Nisan and Brazowski, Benny and Shalem, Daniel and Moshe, Dudu and Richardson, Eitan and Levin, Eran and Shiran, Guy and Zabari, Nir and Gordon, Ori and Panet, Poriya and Weissbuch, Sapir and Kulikov, Victor and Bitterman, Yaki and Melumian, Zeev and Bibi, Ofir},
  journal={arXiv preprint arXiv:2501.00103},
  year={2024}
}