An unintended consequence: can deepfakes kill video evidence?


Many prophesied that deepfakes, media doctored with the help of artificial intelligence, would wreak havoc around the world. They have not. However, the idea that anyone can fake a video creates a different kind of problem: a threat to real video evidence of wrongdoing.

A day after rioters stormed the U.S. Capitol building, President Trump addressed the nation, claiming that “demonstrators do not represent the country” and that he would “focus on the seamless transition of power.”

The somewhat unexpected change in the President’s rhetoric prompted various conspiracy theories claiming that Mr. Trump never said any of this. Thousands on social media believed the address was an AI-generated deepfake, even though fact-checkers quickly debunked such claims.


“We’ve created all this rhetoric around deepfakes. It now enables those in power to say it’s a deepfake about, for example, citizen footage of the suppression of a protest or an incriminating video that might imply sexual misconduct,” says Sam Gregory, program director at WITNESS, a New York-based NGO that helps people use video evidence to protect and defend human rights all over the world.

One of the most significant risks we heard about globally was that people would start saying that every bit of proof has been faked and demanding that you prove it’s real.

Sam Gregory

Since the early days of the technology, Mr. Gregory has been following the development of deepfakes, media created with AI-powered generative adversarial networks (GANs). I sat down with Sam to discuss the future of video as a form of evidence and the indirect ways deepfakes are harming activists around the globe.

Your organization, WITNESS, has been on the pulse of deepfakes since the very beginning. What critical developments in the weaponization of deepfakes have you noticed over the past two or three years?

The trend that everyone almost wished upon us two or three years ago is this idea that deepfakes would impact elections. Many people talked about the 2018 midterms in the US, the EU elections around the same time, and, of course, the recent US presidential election. And we’ve not seen the weaponization of deepfakes for deception in the US election campaign.

But a trend that is very visible and well-documented is the weaponization of deepfakes against women: the use of very basic forms of manipulation, building on existing problems of gender-based violence directed towards women.

A growing body of documentation from groups like Sensity AI has shown the continuing scale of that and how it has evolved. They recently released work on a Telegram bot that used a simple piece of previously readily available software to let people request “stripped” images. So that’s the most evident trend, and it’s been getting larger.

We have seen an increasing number of GAN-generated images in organized disinformation campaigns. This is an area where the technology has obviously improved: the ability to create GAN-generated images has become more robust and more flexible over the past couple of years.


I still think it’s a little over-hyped. If we look at the recent US election campaign, we had one notable GAN-generated image within one of the campaigns targeting Hunter Biden, President-elect Biden’s son. Obviously, it laid a false trail for anyone trying to work out from the photo whether the person was real, but journalists doing any research could quickly establish that the person was not. I think in the political sphere, we’ve generally seen overhype around that kind of thing.

Have you seen the same trends all around the globe? I mean, the hype over the weaponization of deepfakes for political goals often touches on events in the USA or Europe.

One of the things we’ve focused on a lot over the past two years is leading a series of meetings worldwide: in Brazil, South Africa, and Malaysia. When you talk to people in those contexts, they tell us how this ties into things that already happen. In Brazil, somebody already Photoshops an image of a civic activist to claim that they’re linked to drug cartels or drug gangs.

Pretty much everywhere we’ve been, people give examples of what’s being called the liar’s dividend: the ability to say that something is a deepfake, even if it isn’t. In all the meetings, people would describe at least one incident in their context where someone said that video evidence doesn’t count because it’s all video manipulation now. And this is interesting because it’s really the power of rhetoric.

We’ve created all this rhetoric around deepfakes. It now enables those in power to say it’s a deepfake about, for example, citizen footage of the suppression of a protest or an incriminating video that might imply sexual misconduct. And we heard that happening in some deepfake cases, for example in Malaysia, where several videos of an aging politician circulated with claims that they were manipulated, and they weren’t.

And I think one of the most significant risks we heard about globally was that people would start saying that every bit of proof has been faked and demanding that you prove it’s real. That puts tremendous pressure on journalists, on people who shoot witness footage, on citizen journalists, because proving something true is tremendously hard. It’s far cheaper to claim that something is fake than to actually fake something.

[Video: Why Deepfakes Are Difficult to Detect]

You’ve already touched a bit on how deepfakes create a liar’s dividend for bad actors. Do you think there’s a real danger to video as a medium that provides powerful proof of wrongdoing?

I think it is a challenge. We’ve seen video performing very successfully as proof of wrongdoing in the last several months, most notably the video of the killing of George Floyd in the US, and we’ve had the first war crimes trials driven by video from social media. So we know that it has tremendous power. I think the risk is that some people are trying to challenge that, and there’s not much capacity to push back. Very few people know how to do media forensics.


For us at WITNESS, a group that works almost exclusively on the use of video technology to prove the truth, the thinking has been that you don’t necessarily want people to believe everything, because bad actors can also manipulate media. We need to help people be more skeptical without being suspicious of everything. That’s the key.

That’s partly why we’ve been very involved in looking at solutions around manipulated media, trying to tread the fine line between enabling people to assess what they see and pushing them to disbelieve everything. We want better signals of what to trust, and we are looking at how we build an authenticity infrastructure. We need to help people understand how images are manipulated, give them better signals, and make sure that information is easily available.

Some experts are pessimistic about the whole issue and claim that there’s no point in educating people about how to spot fake videos because the algorithms are evolving too quickly. We need to combat deepfakes, but what can we do if the technology is advancing so fast?

I’ve been doing training with journalists on deepfakes and how to spot them. It’s sort of a funny process, because people ask me what to look for, and I answer that I could teach them something, but it would be a bad idea, because I’d be teaching them the algorithmic Achilles’ heel of the moment.

The classic example is the blinking of the eyes. Many news articles identified this as a signal of a deepfake. And then, a few weeks later, the researcher who developed that technique received a video of a blinking deepfake, because someone had taken up the challenge. That said, some of these clues won’t go away so quickly, since other indications are more specific. There’s a lot of work going on to develop a whole range of detection techniques, including ones based on those types of artifacts in GAN-generated images.
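
To make the blinking cue concrete, here is a minimal sketch of how such a heuristic could work. It assumes per-frame eye-aspect-ratio (EAR) values have already been computed elsewhere, for instance from facial landmarks; the function names, thresholds, and baseline blink rates are illustrative assumptions, not any specific detector’s implementation.

def count_blinks(ear_series, closed_thresh=0.2):
    """Count open-to-closed transitions in a per-frame eye-aspect-ratio series."""
    blinks, closed = 0, False
    for ear in ear_series:
        if ear < closed_thresh and not closed:
            blinks += 1          # eye just closed: count one blink
            closed = True
        elif ear >= closed_thresh:
            closed = False       # eye reopened: ready for the next blink
    return blinks

def blink_rate_is_suspicious(ear_series, fps=30, normal_bpm=(8, 30)):
    """Flag clips whose blinks-per-minute fall outside a rough human baseline."""
    minutes = len(ear_series) / (fps * 60)
    if minutes == 0:
        return False
    bpm = count_blinks(ear_series) / minutes
    return not (normal_bpm[0] <= bpm <= normal_bpm[1])

As the interview notes, this exact tell was patched quickly once it was publicized, which is why a cue like this can only ever be one signal among many.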

I think deepfakes were massively over-hyped three years ago, because they lend themselves so well to the rhetorical discussion about “the end of trust” and “the end of truth.”

Sam Gregory

There are more sophisticated signals you could look for than just the blinking. If you’re looking at it from the detection side, you need to build a whole array of them so that you have multiple signals and some redundancy between them. And it’s a big discussion within the detection community.
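
As a rough illustration of that “array of signals” idea, here is a minimal sketch of combining several detector scores into one hedged confidence value. The signal names and weights are invented for illustration; a real system would learn or calibrate them.

# Illustrative weights for hypothetical per-signal detectors,
# each assumed to return P(fake) in [0, 1] for a given clip.
DETECTOR_WEIGHTS = {
    "blink_rate": 0.2,        # behavioral cue; easy for forgers to patch
    "gan_artifacts": 0.5,     # low-level artifacts in generated frames
    "pose_consistency": 0.3,  # geometric consistency across frames
}

def ensemble_score(scores):
    """Weighted average of the available per-detector scores.

    Missing detectors are simply skipped, which is the redundancy point:
    defeating one signal should not silence the whole system.
    """
    available = {k: v for k, v in scores.items() if k in DETECTOR_WEIGHTS}
    if not available:
        raise ValueError("no usable detection signals")
    total = sum(DETECTOR_WEIGHTS[k] for k in available)
    return sum(DETECTOR_WEIGHTS[k] * v for k, v in available.items()) / total

# Only two of the three detectors produced a score for this clip:
print(ensemble_score({"gan_artifacts": 0.9, "pose_consistency": 0.7}))  # ~0.83

The output is a graded confidence rather than a verdict, which matches the point below about reporting, say, 80% confidence instead of a conclusive claim.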

I think we shouldn’t give up on the idea of telling people there’s some sign of manipulation. Giving some signal still matters, even if you can only do it with 80% confidence, which is different from saying that we know conclusively that this is fake. So I tend to advocate for a very basic media literacy framework, like suggesting people take a look at the source and see if there’s parallel information that confirms it.

WITNESS suggests that one of the first things to do to prepare for deepfakes is to de-escalate the rhetoric about manipulated media. You’ve mentioned that before as well. Do you suggest that deepfake technology is over-hyped in the sense that people give it more attention than it deserves?

I think deepfakes were massively over-hyped three years ago, because they lend themselves so well to the rhetorical discussion about “the end of trust” and “the end of truth.” And I think that had unfortunate consequences. 


On the one hand, it enabled this kind of “it’s a deepfake” plausible deniability to start taking hold. On the other hand, I think it’s also created a bit of a false sense of security about deepfakes. I certainly meet people who believe that deepfakes are a total bust, and I usually say to them that I would love it if that were true.

But if you look at the technology trends, this is something we should be preparing for now, even though it’s still not wide-scale, and we should be addressing the current problems. We need to de-escalate the rhetoric, because it causes direct harm when people use it to attack real video, and because it prevents real collaboration on solutions. We’re still in the window before deepfakes are used more broadly, when we can have good discussions on the right solutions and the proper ways to approach this.

Do you feel that Big Tech companies are paying enough attention to deepfakes? After all, they are probably the largest distributors of media that people tend to use.

There was a lot of attention from the platforms toward deepfakes last year and in the first half of this year. And I think worries about the US election drove much of that, because it is such a motivating force for the platforms. They’ve invested in detection technology; they’ve invested in making policy decisions around deepfakes.

I think they’ve been thinking quite thoughtfully, but there are places where they need to keep investing, on the detection side and in thinking about responses to things like non-consensual sexual images. In general, there’s been a fair amount of movement by the platforms, but we need much more attention to the existing manipulated-media problems.

We want attention to the future problem without being distracted from the existing issues. As we look at platforms, yes, they were proactive on deepfakes, and I think that’s a good thing, but that proactivity needs to be matched by technical innovation on existing shallow fakes. And we need to avoid taking our foot off the pedal on deepfakes.