Lightricks has unveiled a new AI video model, built in partnership with Nvidia, that stands out by running directly on user devices instead of relying on cloud processing. That shift matters for AI video generation because it brings both stronger privacy and faster turnaround. The model, demonstrated at CES 2026, is aimed at professional creators and promises more control and higher output quality than many competing tools.
Key Capabilities and Features
The Lightricks model can generate AI video clips up to 20 seconds long at 50 frames per second, with native audio. Crucially, it supports 4K resolution, making it viable for professional projects. Where Google’s Veo 3 and OpenAI’s Sora run in the cloud, this model’s defining trait is its on-device optimization. The focus is on giving creators a secure and efficient workflow.
What “Open-Weight” Means
The term “open” in AI usually means “open-weight”: developers can download the model’s trained parameters, but not necessarily the training data or the full pipeline that produced them. Think of it like being handed a finished cake without the recipe: you can serve it, slice it, even add your own frosting, but you can’t see exactly how it was baked. Lightricks’ model is available on platforms like Hugging Face and ComfyUI, allowing for wider experimentation and customization.
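To make that concrete, here is a minimal sketch of what pulling an open-weight video model from Hugging Face and running it locally can look like, using the diffusers library. The repository id, prompt, and generation parameters below are illustrative assumptions, not the actual identifiers for the model shown at CES; check the official model card for the real values.

```python
# Minimal sketch: load an open-weight video model from Hugging Face and
# generate a clip on the local GPU. The repo id and parameters are
# placeholders -- consult the model card for the real identifiers.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Hypothetical repository id; the weights download once and are cached locally.
pipe = DiffusionPipeline.from_pretrained(
    "Lightricks/example-video-model",  # placeholder, not the real repo id
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")  # run on the local RTX GPU rather than a cloud endpoint

result = pipe(
    prompt="A drone shot over a coastline at golden hour",
    num_frames=241,       # illustrative; on-device limits depend on VRAM
    guidance_scale=3.0,   # illustrative default
)

# Video pipelines in diffusers return generated frames that can be written
# out as an mp4 for editing or review.
export_to_video(result.frames[0], "clip.mp4", fps=24)
```

Because the weights live on the creator's machine after the first download, the same pipeline can be re-run, fine-tuned, or wired into ComfyUI workflows without sending prompts or footage to a third-party service.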
Why Local Processing Matters
AI video generation is computationally intensive. Most models require powerful data center computers to produce high-quality results, forcing users to rely on cloud services. Lightricks’ model, leveraging Nvidia’s RTX chips, breaks this dependency. Running AI locally has several advantages:
- Data Control: Creators retain full control over their data, avoiding potential misuse by large tech companies. This is especially critical for studios protecting intellectual property.
- Speed: Local processing can significantly reduce generation times. Typical AI video prompts take 1-2 minutes; faster turnaround means time and cost savings.
- Security: On-device execution minimizes the risk of data breaches or unauthorized access to sensitive content.
With the proper equipment, those speed and cost savings are two of the strongest arguments for creators integrating AI into their work.
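The phrase “with the proper equipment” is doing real work here: on-device generation only pays off if the local GPU has enough memory. A quick check like the following (a PyTorch sketch, with purely illustrative thresholds rather than official requirements) can tell a creator whether their machine is even a candidate before they download multi-gigabyte weights.

```python
# Check whether the local machine is a plausible candidate for on-device
# video generation. The VRAM cutoff is an illustrative assumption, not an
# official requirement for any specific model.
import torch

if not torch.cuda.is_available():
    print("No CUDA-capable GPU detected; generation would need cloud or CPU fallback.")
else:
    props = torch.cuda.get_device_properties(0)
    vram_gb = props.total_memory / (1024 ** 3)
    print(f"GPU: {props.name}, VRAM: {vram_gb:.1f} GB")
    # Hypothetical cutoff: high-resolution video models tend to want well
    # over 16 GB of VRAM; adjust to the model card's stated needs.
    if vram_gb >= 16:
        print("Likely enough memory to try local generation.")
    else:
        print("Consider lower resolution, quantized weights, or cloud offload.")
```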
The rise of local AI video processing is a natural progression, as computing power becomes more accessible. The trend suggests a shift toward greater creator independence and a more secure AI ecosystem.

































