TikTok owner teases scarily realistic AI video generator


ByteDance has entered the multimodal model game as it welcomes OmniHuman-1 to the world. But could this new tech result in a deepfake nightmare?

The world just got a little more dystopian as ByteDance, the creator of TikTok, unveiled its new artificial intelligence model, which can transform static images and audio into videos.

ByteDance introduced a “multimodality motion conditioning mixed training strategy,” which trains the model on various types of movement data.


By training the model on different types of data, such as text, audio, and movement, it becomes better at generating realistic human movement.

ByteDance researchers claim in a recent paper that their model “significantly outperforms existing methods,” generating “extremely realistic human videos” even from weaker input signals like audio alone.

OmniHuman-1 accepts images of any aspect ratio, such as portraits, half-body shots, or full-body images, with the aim of producing “lifelike and high-quality” videos.

While the AI model shows promising results, from re-animating historical figures like Albert Einstein to making cartoons and animation easier to produce, it also raises ethical concerns.


For one, ByteDance’s Omnihuman-1 could be used to spread misinformation by creating realistic deepfakes of political or prominent figures.

Deepfakes are becoming commonplace as bad actors use technology, like ByteDance's AI model, to commit fraud or poison public opinion.


During the US presidential campaign, X owner Elon Musk shared a convincing deepfake video of Kamala Harris saying compromising things with his roughly 216 million followers.

OmniHuman-1 isn’t available to the public, as ByteDance currently doesn’t “offer services or downloads anywhere,” the technical paper notes.

ByteDance researchers also warn users to be “cautious of fraudulent information” as bad actors may capitalize on the hype surrounding the new AI model and encourage users to download malicious software.