Google disputes claims of lax election ad checks on YouTube


Google disagrees with recent findings by online advocacy groups that YouTube approved disinformation ads in India ahead of the general election, calling the methodology into question. However, the tech giant also welcomes research and remains committed to better safeguarding its platforms.

Cybernews recently reported on the investigation by Access Now and Global Witness. According to the report, YouTube approved all 48 ads placed by investigators, each of which contained baseless allegations of electoral fraud, lies about voting procedures, or other misinformation about elections in India.

Google disagrees with the findings, saying they do not reflect a lack of protections against election misinformation in India.

“Not one of these ads ever ran on our systems and this report does not show a lack of protections against election misinformation in India. Our policies explicitly prohibit ads making demonstrably false claims that could undermine participation or trust in an election, which we enforce in several Indian languages,” a Google spokesperson told Cybernews.

“Our enforcement process has multiple layers to ensure ads comply with our policies, and just because an ad passes an initial technical check does not mean it won’t be blocked or removed by our enforcement systems if it violates our policies. But the advertiser here deleted the ads in question before any of our routine enforcement reviews could take place.”

Google assured us that it is aligned with the goal of preventing the spread of misinformation.

“We will use this test to see if there are ways we can further bolster our protections,” the spokesperson added.

The ads would have been caught in subsequent stages

In a detailed explanation, Google says that once an ad is submitted, it may initially be marked as eligible by automated systems. However, this is only the first stage of its review and enforcement process, and it is where the experiment ended.

“It does not mean the ad won’t be subject to further enforcement actions. We understand how this may have led to some confusion,” the comment reads. “In fact, after this step, ads are still subject to several layers of reviews, including human evaluations as needed, to ensure the content complies with our policies. These protections can kick into place both before an ad runs or quickly after it starts to gain impressions.”

According to Google, the investigators removed the ads in question after the initial automated label was applied but before “any of our additional enforcement checks could take place.” As a result, the “enforcement systems were not able to work as intended.”

The tech giant maintains that ads violating its policies will be blocked or removed.

“We welcome research examining our products and policies. The information [in] these reports can provide important insight into how our systems are working. We’re aligned with the goal of preventing the spread of misinformation that reduces trust during elections, and we will use this test to see if there are ways we can further bolster our protections,” Google promises.

Google’s Ads policy prohibits false claims about elections and voting procedures that could undermine trust or participation in democratic processes. It also prohibits advertisers from directing content about politics, social issues, or matters of public concern to users in a country other than their own if they misrepresent or conceal their country of origin or other material details about themselves.

“We use a combination of automated systems and human reviewers to enforce our policies. And when we find violating ads, we take them down,” the spokesperson said. “To give you a sense of scale, in 2023, we blocked or removed 5.5 billion ads for violating our policies, including 206.5 million for violating our misrepresentation policies.”

Another 7.3 million election ads were removed because they came from advertisers who had not completed the required verification process. Other strict policies guard against ads promoting hate speech or inciting violence.

Google also says it enforces its policies globally and in several Indian languages, including Hindi and Telugu.

Google understands the importance

Election advertising is an important component of democratic elections. Candidates use ads to raise awareness, share information, and engage potential voters.

In its response, Google also detailed numerous steps and investments toward safeguarding its platforms. Advertisers were already required to clearly disclose who paid for election ads, allowing Google to show users how much was spent on an ad and how many impressions it received. Targeting restrictions limit election ads to only a few general categories: age, gender, and general location.

Since 2023, advertisers must also disclose when their ads contain synthetic or digitally altered or generated content, including content created with AI tools.

Together with advanced machine learning tools, thousands of people across the globe work around the clock to safeguard Google’s digital advertising ecosystem.

“Google understands the importance of the election in India, where millions of eligible voters will head to the polls this year, and we’ve invested significant resources to provide people with access to high-quality information across our products, including YouTube, while safeguarding our platforms from election-related abuse.”