CyberNews


Can we still believe what we see online?

by Adi Gaskell
6 January 2021
in Editorial
That deepfakes are becoming increasingly prevalent is hard to dispute, with recent research from the Queensland University of Technology casting doubt on just how reliable our eyes can be when viewing content online.

The researchers highlight how the bulk of image manipulation has been designed to drive fake news agendas, with social media platforms and news organizations alike having a tough time understanding what is real and what isn’t.

For instance, they highlight an example from 2019 when Donald Trump’s team posted an image to his Facebook page. The image was quickly identified as having gone under the Photoshop scalpel, with the president’s skin and physique visibly altered from the original that had been located on the official White House Flickr feed.

While that kind of ruse was fairly easily spotted, it becomes much harder when the unedited versions aren’t so publicly available. This makes the standard reverse-image search useless in detecting manipulation.
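The fingerprinting idea behind a reverse-image search can be sketched in a few lines. Below is a minimal, hypothetical average-hash ("aHash") implementation in pure Python — an illustration of the principle, not the tooling any particular platform actually uses: an image is reduced to a coarse bit pattern that survives re-encoding and resizing but shifts when a region is visibly doctored, so a large bit distance between two copies of "the same" picture is a manipulation signal.

```python
# Minimal average-hash (aHash) sketch: a perceptual fingerprint that is
# stable under re-encoding but changes when an image is visibly edited.
# Operates on a grayscale image given as a 2D list of 0-255 pixel values.

def average_hash(pixels, size=8):
    """Downsample to size x size by block averaging, then threshold each
    cell against the global mean, yielding a size*size-bit fingerprint."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for r in range(size):
        for c in range(size):
            # Average the block of source pixels mapped to this cell.
            r0, r1 = r * h // size, (r + 1) * h // size
            c0, c1 = c * w // size, (c + 1) * w // size
            block = [pixels[i][j] for i in range(r0, r1) for j in range(c0, c1)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return [1 if v > mean else 0 for v in cells]

def hamming(a, b):
    """Number of differing bits; small means 'probably the same image'."""
    return sum(x != y for x, y in zip(a, b))

# Toy "images": a gradient, and the same gradient with a spliced-in patch.
original = [[(i + j) * 8 % 256 for j in range(16)] for i in range(16)]
doctored = [row[:] for row in original]
for i in range(4, 12):
    for j in range(4, 12):
        doctored[i][j] = 255  # simulate a bright pasted-in region

print(hamming(average_hash(original), average_hash(original)))  # 0
print(hamming(average_hash(original), average_hash(doctored)))  # positive
```

The catch the researchers point to is exactly this scheme's blind spot: it only works when an untouched original exists somewhere to compare against.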

Tricking the mind

The paper highlights the wide range of ways images can be cloned, spliced, cropped, and retouched to manipulate reality. The authors cite images shared by media outlets that appeared to show crocodiles on Australian streets after a flood. They turned out to be pictures of Floridian alligators taken several years earlier.

Similarly, white supremacist groups manipulated an image of Martin Luther King to make it appear as though he was giving the finger as the US Senate passed the civil rights bill in 1964.

The authors highlight that the huge number of visual images produced and published every day makes detection that much harder. Indeed, a whopping 3.2 billion images and 720,000 hours of video are produced every day. There is also a growing desire among mainstream media to include user-generated material, which increases the importance of journalists themselves being able to detect fake material.

The paper reveals that just 11% of journalists use any form of verification tools.

The authors believe a lack of user-friendly software is a major barrier for society to overcome if we’re to have confidence that what we’re shown online is authentic and un-doctored.

Losing the arms race

Sadly, the evidence suggests that those wishing to make hay from such manipulation are acting far faster than those trying to stop doctored material spreading around the web. While faked media produced to spread misinformation is bad enough, the rise in deepfakes generated for altogether more sordid ends is a much larger problem.

The problem posed by deepfakes came to a head after news broke of a bot being used to create nude images from photos of clothed individuals. The bot has been operating on Telegram since July, and more than 100,000 women have already been targeted, with suggestions that some of the victims are under the age of 18.

The Telegram channel is believed to have over 25,000 subscribers, with each set of images garnering thousands of views. A second Telegram channel, which actively promotes the first, has over 50,000 subscribers. While not all of the images are convincing, this is believed to be the first time such fakes have been produced on so enormous a scale.

The channel was discovered by deepfake detection company Sensity, which announced the find in a recently published report. The company hopes that exposing the availability of such services will force platforms such as Telegram to remove the offending content. Because Sensity could only measure the images shared publicly, it believes the number of women affected is likely much higher than the recorded figure. Indeed, it's quite probable that few of the women whose privacy has been violated even know the offense has taken place.

What is even more alarming is that, unlike the deepfake videos that have appeared on various porn websites in recent months, these images require no real technical skill to create.

The process is entirely automated and simply requires uploading a regular photo of the target to the messaging service. The criminals make their money by charging users both for heavier usage and for the removal of the watermarks that adorn each image.

The software is likely based on a version of the DeepNude program that burst into public consciousness in the summer of 2019, only for its creator to pull it over fears of gross misuse. Sadly, it had already been downloaded nearly 100,000 times before it was taken down, and the code was quickly copied. The program uses deep learning and generative adversarial networks to produce an image of what it predicts the victim looks like, having been trained on a range of clothed and unclothed images of women.
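The adversarial setup underpinning such generators can be reduced to a toy objective. The sketch below is purely illustrative — one-parameter logistic functions stand in for the networks, and it bears no relation to DeepNude's actual architecture — but it shows the core mechanic: a discriminator scores how "real" a sample looks, while the generator is rewarded precisely for fooling it.

```python
import math

# Toy sketch of a GAN's adversarial objective (illustration only).
# The discriminator D outputs the probability that a sample is real;
# the generator is trained to push that probability up on its fakes.

def discriminator(x, w=2.0):
    """Logistic score in (0, 1): D's estimated probability of 'real'."""
    return 1.0 / (1.0 + math.exp(-w * x))

def discriminator_loss(real, fake):
    """D is trained so D(real) -> 1 and D(fake) -> 0."""
    return -(math.log(discriminator(real)) + math.log(1 - discriminator(fake)))

def generator_loss(fake):
    """G is trained so D(fake) -> 1 (the non-saturating GAN loss)."""
    return -math.log(discriminator(fake))

# A fake that fools D better (scores higher) gives G a lower loss,
# which is what drives the generator toward ever more realistic output:
crude_fake, convincing_fake = -1.0, 1.0
print(generator_loss(crude_fake) > generator_loss(convincing_fake))  # True
```

Training alternates between minimizing these two losses, so any improvement in fake quality directly pressures the discriminator — the same arms-race dynamic the article describes between fakers and detectors.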

Since deepfakes first emerged in 2017, they have predominantly been used to abuse women, which casts the prevalence of fake images online in an altogether darker light. That fake media is spreading online so rapidly should alarm us all, especially as the methods used to detect and remove such images are developing considerably more slowly than those used to create, disseminate, and profit from them.
