20 big tech firms sign 'accord' to battle AI election deepfakes


At least 20 big tech companies and counting – including Google, Meta Platforms, Microsoft, and OpenAI – have signed on to a new ‘tech accord’ aimed at preventing the distribution of deceptive AI content during the 2024 global election cycle.

The new “Tech Accord to Combat Deceptive Use of AI in 2024 Elections” was announced Friday at the 2024 Munich Security Conference, now in its 60th year.

Signatories of the pledge as of February 16th include Adobe, Amazon, Arm, ElevenLabs, Google, IBM, Inflection AI, LinkedIn, McAfee, Meta, Microsoft, Nota, OpenAI, Snap Inc., Stability AI, TikTok, Trend Micro, Truepic, and X.


Its goal: to ensure that voters retain the right to choose who governs them, free of this new type of AI-based manipulation, according to Microsoft Vice Chair & President Brad Smith.

“We are committed to safeguarding our services from deceptive content like deepfakes that alter the actions or statements of political candidates to deceive the public,” Smith said in a blog post released Friday.

According to Smith, the Accord will explicitly focus on combating “deceptive AI election content” using a concretely defined set of parameters.

Dubious content will be defined as any “convincing AI-generated audio, video, and images that deceptively fake or alter the appearance, voice, or actions of political candidates, election officials, and other key stakeholders in a democratic election.”

This also includes any content “that provides false information to voters about when, where, and how they can lawfully vote,” as stated in the document.

Tech Accord commitments

Smith said there are eight specific commitments laid out in the Accord that “fall into three critical buckets worth thinking more about.”


The three critical areas of focus, according to Smith, include the commitment by the tech sector to:

  • Make it more difficult for bad actors to use legitimate tools to create deepfakes.
  • Establish universal ways to detect and respond to deepfakes in elections.
  • Help advance transparency and build societal resilience to deepfakes in elections.

As part of Microsoft’s commitment to the accord, the company has created its own “Microsoft-2024 Elections” page, where candidates can report any AI deepfake materials created of themselves or about their campaign.

Tech Accord Commitments
Image by Microsoft.

AI voter manipulation

More than 4 billion people across more than 40 countries are set to vote in elections this year, and generative AI is already being used to influence politics and even to convince people not to vote.

An EU investigation from last month found at least 750 incidents of misleading information being deliberately spread by foreign actors, many of them voicing support for Russia's invasion of Ukraine.

And on January 23rd, a fake robocall impersonating US President Joe Biden was circulated in New Hampshire, urging Democratic voters to stay home during the state's presidential primary election.


The fake audio stunt spurred the US Federal Communications Commission (FCC) to declare AI-generated robocalls illegal nationwide. The White House has since vowed to use cryptographic verification to help combat the spread of deepfakes.

Tech tools to identify deepfakes

Additionally, the big tech signatories have discussed incorporating tools such as watermarking or embedded metadata to help the public identify AI-generated content or certify its origin.
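The basic attach-and-verify flow behind certifying a file's origin with embedded metadata can be sketched in miniature. The Python example below is a toy illustration with hypothetical names, using an HMAC over the content and its metadata where real provenance standards (such as C2PA Content Credentials) use certificate-based digital signatures; it is not any signatory's actual implementation.

```python
import hashlib
import hmac
import json

def attach_provenance(content: bytes, generator: str, key: bytes) -> dict:
    """Build a toy provenance manifest: metadata plus an HMAC tag
    computed over the content hash and generator name."""
    metadata = {
        "generator": generator,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    payload = json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "tag": tag}

def verify_provenance(content: bytes, manifest: dict, key: bytes) -> bool:
    """Check that the content matches the manifest and the tag is authentic."""
    meta = manifest["metadata"]
    if hashlib.sha256(content).hexdigest() != meta["content_sha256"]:
        return False  # content was altered after the manifest was created
    payload = json.dumps(meta, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, manifest["tag"])

key = b"shared-demo-key"
image_bytes = b"...raw image data..."
manifest = attach_provenance(image_bytes, "example-image-model-v1", key)
print(verify_provenance(image_bytes, manifest, key))        # True
print(verify_provenance(b"tampered bytes", manifest, key))  # False
```

The point of the sketch is the interoperability problem Clegg raises below: verification only works if every platform agrees on the manifest format and the signing scheme.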

“It’s all good and well if individual platforms develop new policies of detection, provenance, labeling, watermarking, and so on, but unless there is a wider commitment to do so in a shared interoperable way, we’re going to be stuck with a hodgepodge of different commitments,” said Nick Clegg, president of global affairs at Meta Platforms.

“I think the utility of this (accord) is the breadth of the companies signing up to it,” Clegg said.

The tech sector said there has been less of a focus on deceptive text messages, partly because people tend to be more skeptical of text.

"There's an emotional connection to audio, video, and images," said Adobe's chief trust officer Dana Rao during a recent interview. "Your brain is wired to believe that kind of media," he said.

The Tech Accord did not give a timeline or specific details on when individual companies would begin to implement the commitments.

Earlier this week, however, Google announced plans to launch an anti-misinformation advertising campaign across five EU nations ahead of the bloc’s parliamentary elections happening in June.


Google said it will run a series of ads on platforms like TikTok and YouTube designed to teach voters how to identify manipulative content before encountering it.