ByteDance’s Seedance 2.0, a new multimodal AI video generator, is drawing wide attention after short, highly realistic clips, including viral celebrity deepfakes and remixes of well-known Hollywood content, spread quickly online. The surge of attention is also fueling fresh questions about copyright, performer likeness, and how easily high-quality AI video can now be made and shared.
Seedance 2.0 was officially launched on Feb. 12, 2026, as a “next-generation video creation model” designed around a unified multimodal audio-video joint generation architecture. ByteDance says the model supports four input types—text, image, audio, and video—and is built to offer a broad set of reference and editing capabilities.
Viral clips raise IP concerns
Soon after its debut, Seedance 2.0 went viral with clips that, according to one report, reproduced Hollywood intellectual property “in startling detail,” including a “Tom Cruise versus Brad Pitt” deepfake fight video and remixes tied to major franchises. The same report described additional examples such as “Avengers: Endgame” remixes and a “Kim Kardashian-Ye” palace drama in Mandarin that it said reached around a million views on Weibo.
The quick spread of celebrity lookalike content has heightened concerns because Seedance 2.0 is described as producing short videos that look more coherent and consistent than those from earlier AI video tools. In that report, a Roblox product manager quoted on X called what they had seen from Seedance 2.0 “a copyright violation,” while a screenwriter quoted in the same piece said the Cruise-versus-Pitt video left him “shook.”
What ByteDance says Seedance 2.0 can do
ByteDance says Seedance 2.0 is a major upgrade over version 1.5, with improvements in physical accuracy, visual realism, and controllability that make it suitable for “industrial-grade creation scenarios.” The company says the model can generate 15-second, high-quality, multi-shot videos with audio, including dual-channel sound for greater realism.
ByteDance also says Seedance 2.0 can handle complex interaction and motion scenes with strong motion stability and “physical restoration” capabilities, including multi-subject action. In its launch post, the company highlighted examples such as pair figure skating sequences that follow “real-world physical laws,” including synchronized takeoffs, mid-air spins, and precise landings.
Seedance 2.0 supports mixed-modality input, and ByteDance says users can provide up to nine images, three video clips, and three audio clips along with natural language instructions. Another report similarly described the model as letting users combine text, image, video, and audio inputs to generate 15-second clips, and emphasized its synchronized audio.
ByteDance also says the model’s instruction-following and consistency have been upgraded, and it introduces new editing capabilities such as targeted modifications and video extension. The company says these features are meant to lower production costs across areas like film, advertising, e-commerce, and gaming.
Watermarks, voice cloning, and access
One report said Seedance 2.0 outputs are “completely watermark-free,” and contrasted that with claims that OpenAI’s Sora 2 uses visible watermarks and Google’s Veo 3.1 embeds metadata tags. That same report argued the lack of watermarks could complicate efforts by rights holders to identify and flag unauthorized uses of copyrighted material.
The same report also said ByteDance suspended a feature that could generate a person’s voice from a face photograph alone, after a tester said it produced audio “nearly identical” to a real voice without using any voice data. ByteDance’s own launch post includes a separate note saying that if users want to use real human portraits as subject references, identity verification or prior legal authorization is required.
On availability, ByteDance says Seedance 2.0 is available on platforms including Dreamina AI and Doubao, and it lists options to try it via Dreamina Web video generation, the Doubao app chatbox, and Volcano Engine’s Model Ark experience center. A separate report described access as limited to China through ByteDance’s Jimeng AI platform, the Doubao AI assistant, or the CapCut video editor, and said there was no word on a global rollout.
