
Can we still believe what we see online?


That deepfakes are becoming increasingly prevalent is hard to dispute, with recent research from the Queensland University of Technology casting doubt on just how reliable our eyes can be when viewing content online.

The researchers highlight how the bulk of image manipulation has been designed to drive fake news agendas, with social media platforms and news organizations alike having a tough time understanding what is real and what isn't.

For instance, they highlight an example from 2019 when Donald Trump's team posted an image to his Facebook page. The image was quickly identified as having gone under the Photoshop scalpel, with the president's skin and physique visibly altered from the original that had been located on the official White House Flickr feed.

While that kind of ruse was spotted fairly easily, detection becomes much harder when the unedited original isn't publicly available, rendering a standard reverse-image search useless for spotting manipulation.
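For illustration, here is a minimal sketch of the idea a reverse-image search relies on: comparing a suspect picture against a known original. It uses the Python Pillow and imagehash packages; the file names and the distance threshold are assumptions for the example rather than anything drawn from the research.

```python
# Minimal sketch: comparing two images via perceptual hashing.
# Assumes Pillow and imagehash are installed; "original.jpg" and
# "suspect.jpg" are stand-in file names for this example.
from PIL import Image
import imagehash

original = imagehash.phash(Image.open("original.jpg"))
suspect = imagehash.phash(Image.open("suspect.jpg"))

# Subtracting two perceptual hashes gives a Hamming distance:
# small values suggest near-duplicates (possibly recropped or
# recompressed), larger values suggest substantially different images.
distance = original - suspect
print(f"Hamming distance: {distance}")

# A threshold of around 10 is a common rule of thumb, not a standard.
if distance <= 10:
    print("Images are likely derived from the same source.")
else:
    print("No close match - though absence of a match proves nothing.")
```

The obvious limitation is the one the researchers point to: without access to a candidate original, there is nothing to compare against.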

Tricking the mind

The paper highlights the wide range of ways images can be cloned, spliced, cropped, and retouched to manipulate reality. The authors cite images shared by media outlets last year that appeared to show crocodiles on American streets after a flood; they turned out to be pictures of Floridian alligators taken several years earlier.

Similarly, white supremacist groups manipulated an image of Martin Luther King to make it appear as though he was giving the finger as the US Senate passed the civil rights bill in 1964.

The authors highlight that the sheer volume of visual material published every day makes detection that much harder: an estimated 3.2 billion images and 720,000 hours of video are produced daily. There is also a growing appetite among mainstream media for user-generated material, which makes it all the more important that journalists themselves can detect fake material.

The paper reveals that just 11% of journalists use any form of verification tools.

The authors believe a lack of user-friendly software is a major barrier for society to overcome if we're to have confidence that what we're shown online is authentic and undoctored.

Losing the arms race

Sadly, the evidence suggests that those wishing to make hay from such manipulation are moving far faster than those trying to stop doctored material from making its way around the web. While the production of faked media intended to spread misinformation is bad enough, the rise in deepfakes generated for altogether more sordid ends is a much larger problem.

The problem posed by deepfakes came to a head after news broke of a bot being used to create nude images from photos of clothed individuals. The bot has been operating on Telegram since July, and more than 100,000 women have already been targeted by people generating nude images of them, with suggestions that some of those targeted were under the age of 18.

The Telegram channel is believed to have over 25,000 subscribers, with each set of images garnering thousands of views. A second Telegram channel, which actively promotes the first, has over 50,000 subscribers. While not all of the images are perfect, this is believed to be the first time such productions have been performed on such an enormous scale.

The channel was discovered by deepfake detection company Sensity, which announced the find in a recently published report. The company hopes that by exposing the availability of such services, platforms such as Telegram will be forced to remove the offending content. Because it could only measure images shared publicly, Sensity believes the true number of women affected is much higher than the recorded figure. Indeed, it's quite probable that few of the women whose privacy has been exploited even know the offense has taken place.

What is even more alarming is that, unlike the deepfake videos that have appeared on various porn websites in recent months, these images require no real technical skill to create.

The process is entirely automated and simply requires uploading an ordinary photo of someone to the messaging service. The criminals make their money by charging users both for extensive use and for the removal of the watermarks that adorn each image.

The software is likely based on a version of the DeepNude application that burst into the public consciousness last summer, which its creator pulled over fears of gross misuse. Sadly, it had already been downloaded nearly 100,000 times before it was taken down, and the code was quickly copied. The program uses deep learning and generative adversarial networks to produce images based on what it thinks the victim looks like, having been trained on a range of clothed and unclothed images of women.
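For readers unfamiliar with the term, the sketch below illustrates what a generative adversarial network is in the most general sense: a generator learns to produce convincing samples while a discriminator learns to tell them apart from real data. It is a toy PyTorch example on synthetic two-dimensional data, included purely to clarify the concept; it bears no relation to the tool described above, and all layer sizes and hyperparameters are arbitrary.

```python
# Conceptual sketch of adversarial training (a GAN) on toy data.
# Not a reconstruction of any specific tool; sizes are arbitrary.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

# Generator: maps random noise to fake samples.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
# Discriminator: scores how "real" a sample looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0  # stand-in "real" data
    noise = torch.randn(64, latent_dim)
    fake = G(noise)

    # Train the discriminator to separate real from fake samples.
    opt_d.zero_grad()
    d_loss = loss_fn(D(real), torch.ones(64, 1)) + \
             loss_fn(D(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(D(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The two networks improve in tandem, which is why GAN output can become convincing without any human guiding the details.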

Since deepfakes first emerged in 2017, they have predominantly been used to abuse women, which gives the sheer volume of fake images online an altogether darker hue. That the deployment of fake media online is growing so rapidly should alarm us all, especially as the methods used to detect and remove such images are developing at a considerably slower pace than those used to create, disseminate, and profit from them.
