© 2022 CyberNews - Latest tech news, product reviews, and analyses.


Meta removes a deepfake video of the Ukrainian president


The recent deepfake video of President Volodymyr Zelensky appears to be the first use of AI-generated multimedia during the war against Ukraine.

Facebook’s owner, Meta, removed a fake video showing the Ukrainian president encouraging citizens to ‘lay down arms.’ After the video started circulating on social media, President Zelensky said it was false.

“Earlier today, our teams identified and removed a deepfake video claiming to show President Zelensky issuing a statement he never did,” Meta’s head of security policy Nathaniel Gleicher tweeted.

According to Gleicher, the video first appeared on a compromised website and was later shared on Facebook, violating the company’s policy on misleading and manipulated media.

An excerpt from the deepfake video Meta removed.

YouTube spokesperson Ivy Choi issued a similar statement, saying the platform had also removed the video and would block reuploads, except when the footage is shared for educational or research purposes.

According to Sam Gregory, program director at WITNESS, a New York-based human rights and video evidence group, the fake video of President Zelensky appears to be the first explicit deceptive deepfake in the context of Russia’s war against Ukraine.

Weapon and a tool

Deepfakes are a genre of content generated using artificial intelligence or, more specifically, a Generative Adversarial Network (GAN).

Programs that generate deepfakes use two AIs working against each other. The first AI studies images (or video, or audio) of the subject to be faked and then produces doctored media of its own.

The second AI examines these fakes and compares them with authentic images. If it can still tell them apart, it flags the output as an obvious fake and feeds that verdict back to the first AI.

The first AI takes this feedback and keeps adjusting its fakes until the second AI can no longer distinguish a fake from the real thing. This adversarial pairing is what gives the Generative Adversarial Network its name.
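The adversarial loop described above can be sketched in a few lines of code. The toy below is an illustrative assumption, not any real deepfake tool: a one-dimensional GAN in which a linear "generator" tries to mimic samples drawn from a Gaussian, while a logistic "discriminator" tries to tell real samples from fakes. All function and variable names are made up for the example.

```python
import numpy as np

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

def train_toy_gan(steps=3000, batch=64, lr=0.05, seed=0):
    """Toy 1-D GAN: the generator g(z) = wg*z + bg tries to mimic
    'real' data from N(4, 0.5); the discriminator d(x) = sigmoid(wd*x + bd)
    tries to tell real samples from generated ones."""
    rng = np.random.default_rng(seed)
    wg, bg = 1.0, 0.0   # generator parameters (the "first AI")
    wd, bd = 0.1, 0.0   # discriminator parameters (the "second AI")
    for _ in range(steps):
        z = rng.standard_normal(batch)
        fake = wg * z + bg
        real = 4.0 + 0.5 * rng.standard_normal(batch)

        # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
        dr = sigmoid(wd * real + bd)
        df = sigmoid(wd * fake + bd)
        wd -= lr * np.mean(-(1 - dr) * real + df * fake)
        bd -= lr * np.mean(-(1 - dr) + df)

        # Generator step: use the discriminator's verdict as feedback
        # and adjust the fakes so d(fake) moves toward 1.
        df = sigmoid(wd * fake + bd)
        grad_fake = -(1 - df) * wd      # gradient of -log d(fake) w.r.t. fake
        wg -= lr * np.mean(grad_fake * z)
        bg -= lr * np.mean(grad_fake)
    return wg, bg, wd, bd

wg, bg, wd, bd = train_toy_gan()
z = np.random.default_rng(1).standard_normal(1000)
print(f"generated sample mean: {np.mean(wg * z + bg):.2f} (real data mean: 4.0)")
```

After training, the generated samples cluster near the real data's mean, because each round of discriminator feedback nudged the generator closer to the real distribution. Real deepfake systems apply the same idea with deep convolutional networks over pixels rather than a single number.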

While deepfakes can be used to generate legitimate video content as we do at Cybernews, threat actors can abuse the technology with malicious intent.

As of now, the vast majority of deepfakes online are pornographic. Experts feared deepfakes might be instrumental in shaping the 2020 US presidential election; however, those fears did not materialize.
