Deepfakes are becoming increasingly pervasive, with new tools putting the creation of fake media within the grasp of many of us. Indeed, so prevalent are deepfakes that computer giant Microsoft released a new software tool to help users identify media that has been manipulated. And still, we happily share them even though we know they're fake.
“Video Authenticator can analyze a still photo or video to provide a percentage chance, or confidence score, that the media is artificially manipulated,” Microsoft said in a blog post. “In the case of a video, it can provide this percentage in real-time on each frame as the video plays. It works by detecting the blending boundary of the deepfake and subtle fading or grayscale elements that might not be detectable by the human eye.”
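Microsoft hasn't published Video Authenticator's internals, but the idea of emitting a per-frame "confidence score" can be sketched with a toy heuristic. The sketch below is purely illustrative and is not Microsoft's method: where the real tool uses a trained model to detect blending boundaries, this stand-in simply measures the size of local intensity jumps in each grayscale frame, on the assumption that crude blending seams leave sharp transitions. The function names (`frame_confidence`, `score_video`) and the 0-100 scoring scale are invented for this example.

```python
# Toy sketch of per-frame manipulation scoring.
# NOT Video Authenticator: the real tool uses a trained detector;
# this stand-in uses average horizontal intensity jumps as a crude
# proxy for the "blending boundary" artifacts described above.

def frame_confidence(frame):
    """Return a 0-100 'manipulation confidence' score for one grayscale
    frame (a list of rows of 0-255 pixel values), based on the mean
    magnitude of horizontal pixel-to-pixel jumps. Illustrative only."""
    jumps = [
        abs(row[x + 1] - row[x])
        for row in frame
        for x in range(len(row) - 1)
    ]
    if not jumps:
        return 0.0
    # Map the mean jump size (0-255) onto a 0-100 score.
    return min(100.0, 100.0 * (sum(jumps) / len(jumps)) / 255.0)

def score_video(frames):
    """Score every frame in turn, mirroring the real-time per-frame
    output the article describes for video input."""
    return [frame_confidence(f) for f in frames]

if __name__ == "__main__":
    smooth = [[10, 11, 12, 13]] * 4   # gentle gradients, low score
    abrupt = [[0, 255, 0, 255]] * 4   # hard seams, high score
    print(score_video([smooth, abrupt]))
```

A production detector would, of course, learn these boundary cues from labeled data rather than hard-coding a gradient heuristic; the sketch only shows the shape of the output (one score per frame as the video plays).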
Such tools may help us better identify deepfakes online, but there are still concerns that we happily share videos and images even when we know they're fake. That was the dire warning from new research at Singapore's Nanyang Technological University (NTU), which found that despite being increasingly aware of the existence of deepfakes, people still shared such content with their social networks.
People are sharing fakes
The researchers quizzed 1,231 Singaporeans and found that while 54% of respondents were aware of the existence of deepfakes, they were far less adept at actually spotting one: a third of respondents said they had shared content that they later discovered was in fact fake. This was despite roughly 20% of respondents saying that they regularly come across deepfakes online.
This is hugely problematic, as data from the deepfake detection firm Sensity revealed that in the first six months of 2020 there were an estimated 49,081 deepfake videos identified online. This represents growth of over 300% from 2019.
While many of these deepfakes are designed for amusement, there's growing concern that they're being used maliciously, for everything from sextortion to misinformation.
The NTU researchers believe that the potential use of deepfakes to spread misinformation is particularly harmful, as the videos look and feel authentic and therefore encourage us to spread the misinformation among our networks. They also highlight the potential for deepfake technology to be used to create non-consensual pornography and even to incite fear and violence. They worry that as the AI technology that underpins deepfake media evolves, it will become increasingly difficult to distinguish fakes from real footage.
An arms race
Microsoft is not alone in releasing tools to help us identify deepfakes, with Twitter, Google, and Facebook all beginning to label content that they have identified as manipulated. While these efforts are welcome, the NTU team believes far more needs to be done to educate the public, both on the prevalence of deepfakes and on how they can be spotted.
The data revealed that Americans were generally more aware of deepfake technology than Singaporeans, but this awareness seemed to provide no protection against the technology, as more Americans also said that they had shared deepfake content. This is perhaps because they are more exposed to deepfakes than their Singaporean peers.
The researchers suggest that the various high-profile instances of deepfake videos, including those featuring Donald Trump and Barack Obama, have both raised awareness but also anxiety regarding their potential threat to society in the US.
By contrast, Singapore has not had such direct exposure to the impact of deepfakes, and the government has also introduced legislation, called the Protection from Online Falsehoods and Manipulation Act (POFMA), to try to limit any threat posed by all forms of disinformation.
The researchers don’t believe anti-disinformation legislation will be sufficient on its own, and argue that our ability to spot fakes online is actually far lower than we like to believe.
I've written before about attempts to improve digital literacy, and particularly digital hygiene in schools. But such is the pace of technological change that it will always be difficult for educators to keep up and provide people with the means and the awareness to spot, and protect themselves from, the array of misinformation and fake media they encounter online.
As the technologies used to produce deepfake media become both more realistic and more accessible, however, it’s a defense that society needs to actively engage in sooner rather than later.