AI-generated election ads causing serious dust-up in Washington

US lawmakers are debating whether to require broadcast TV and radio political ads – made with artificial intelligence – to be stamped with an AI disclosure tag in the lead-up to the US presidential elections. Cybernews breaks down both sides.

It’s just six months before candidates Joe Biden and Donald Trump are expected to face off in the November 5th US presidential election.

While concerns about AI-generated disinformation continue to plague the European Parliament elections, which began on Thursday, top federal election officials in Washington say they’re split over the matter.

Tasked with the decision, the Federal Election Commission (FEC) was presented with a proposal from the Federal Communications Commission (FCC), which is concerned that misleading AI content could sway unsuspecting voters and potentially influence election outcomes.

Besides the US president for the next four years (which is technically decided by the US Electoral College anyway), hundreds of seats in both the Senate and the House will be up for grabs, in addition to the tens of thousands of candidates voters will cast ballots for in local municipalities across the nation.

FCC Chairwoman Jessica Rosenworcel expects AI to play a major role in 2024 political ads. She specifically asked the Commission to create a disclosure rule for any ads containing AI content, covering both candidate and issue advertisements.

The rule would require on-air and written disclosures and cover cable operators, satellite TV and radio providers.

Ads found on the internet, social media sites, or streaming services would be exempt from the mandate because the FCC lacks the authority to regulate them.

Weighing the arguments

Chairpersons from both agencies openly shared their views on the May proposal.

FEC Commissioner Ellen Weintraub spoke out on Thursday in support of regulatory action, though she noted it was a “large and complicated issue.”

“The public would benefit from greater transparency as to when AI-generated content is being used in political advertisements,” Weintraub wrote in a June 6th letter addressed to Rosenworcel.

"Transparency enables the electorate to make informed decisions and give proper weight to different speakers and messages,” Weintraub stated, citing the Supreme Court’s views on the electoral process.

Weintraub reiterated her thoughts in a post on X. “AI has the potential to influence our elections in wide-ranging ways, and no one agency can address every aspect,” she posted.

Critical of the plan, FEC Chairman Sean Cooksey thinks mandating disclosures would "directly conflict with existing law and regulations and sow chaos among political campaigns for the upcoming election."

Meanwhile, Rosenworcel on Thursday pointed out that it was all “about disclosure” and that the FCC, which has regulated broadcast disclosures since the 1930s, has “decades of experience with doing this.”

Rosenworcel went on to say that AI-generated content opens the door to misleading “deep fakes” – “altered images, videos, or audio recordings that depict people doing or saying things that they did not actually do or say.”

FCC Commissioner and Republican Brendan Carr also criticized the proposal, stating that the “FCC can only muddy the waters.”

Carr questioned why “AI-generated political ads that run on broadcast TV will come with a government-mandated disclaimer, but the exact same or similar ad that runs on a streaming service or social media site will not?"

Backing Carr, fellow Republican leaders saw Rosenworcel’s proposal as a partisan move by the FCC and released their own letter against the possible mandate, also on Thursday.

The letter, signed by Senators John Thune (SD), Eric Schmitt (MO), Mitch McConnell (KY), and Ted Cruz (TX), called out the FCC for overstepping its boundaries, stating the FCC “has no authority to police the content of political advertising… raising serious statutory and constitutional concerns.”

The letter continued, questioning the burden that would be placed on broadcasters and journalists who lack the technical expertise to label the information.

Finally, the senators accused the FCC of being an arm of the Democratic Party, “favoring certain political speech and interfering in the election.”

AI deepfakes already being used

As for voice advertisements, the FCC already took measures to address the issue in February, ruling that the use of AI-generated voices in robocalls is illegal.

In January, an AI-generated political robocall impersonating current President Joe Biden was sent to thousands of New Hampshire voters, telling them to skip the Democratic primary.

The political consultant who cooked up the ad, as well as the telecommunications company that allowed the robocalls to go through, are now facing FCC fines totaling $8 million for transmitting the deepfake.

The proposal is still in the preliminary stages, and neither the FEC nor the FCC has mentioned the fines or other repercussions advertisers would face for breaking the rule if enacted.

Google, OpenAI, and Meta all announced last fall that any political ad containing AI content would need a disclosure notice “prominently” displayed for the viewer, with Meta taking it a step further and banning political advertisers from using the platform’s AI ad tools.

All three have agreed to eventually require watermarks on all AI-made content.