Can the US safeguard user data from Meta’s AI training?


Unlike Europe, the US has no regulatory mechanisms to stop Meta from training its AI on its users' public data. However, new European regulations have their drawbacks, too.

Last week, a Meta representative reluctantly acknowledged that the company used Facebook and Instagram users' public posts and pictures to train its AI models.

Australia’s ABC News reported that Meta's global privacy director, Melinda Claybaugh, initially denied having done so. However, when confronted by an Australian politician, she admitted it.

For companies like Meta, OpenAI, Google, and others that develop their own generative AI, feeding the models new data is crucial, as it determines how fast they can progress.

Unlike its competitors, Meta has one advantage – the data of billions of users, which could significantly improve its AI.

As it admitted last week, the company used public posts of its users dating back to 2007 for this purpose. From a privacy standpoint, such a decision seems concerning.

No way to opt out

Those who shared their posts publicly on Facebook in 2007 and subsequent years could hardly imagine that all of their information would one day be used to train the generative AI models of the world's largest corporations.

Legally, though, Meta did nothing wrong. The company recently included the right to train its models on public user data in its privacy policies and terms of service.

The main point of concern is that people in the US, Australia, and many other countries have no way to opt out of Meta’s AI training if they want to keep their posts public.

While users can always change settings to make their posts private – Meta doesn’t train its AI on private information – this is not an option for influencers, creators, and many others whose income depends on reaching a public audience.

Brazilians and Europeans are the only users who can opt out of Meta publicly training its AI with their data.

Brazilians were given this option by the country's data protection authority, while Europeans can opt out thanks to the General Data Protection Regulation (GDPR), even though Meta made the opt-out process unnecessarily difficult.

GDPR, introduced in 2018, gives European consumers control over how companies handle their personal information.

Interestingly, when GDPR was adopted, Meta’s CEO, Mark Zuckerberg, praised the initiative, saying that the company would extend EU privacy rights globally.

However, this latest data training case outside Europe, like many other examples, shows that such claims were baseless.

Other regulations

While GDPR protects European users' personal data, other regulatory acts, such as the Digital Markets Act (DMA), which aims to give smaller companies a better chance to compete with big tech, face criticism.

Some European entrepreneurs have expressed fears that regulation will hinder innovation. Spotify’s founder, Daniel Ek, along with Zuckerberg, has highlighted concerns surrounding open-source artificial intelligence.

According to them, complex rules may result in Europe falling behind.

Regulatory uncertainty can have downsides for entrepreneurs and European users alike. For example, new Apple Intelligence features, such as Visual Intelligence and iPhone Mirroring, will not be available in Europe at launch.

Previously, Apple said it wouldn’t include Apple Intelligence in Europe, citing privacy concerns stemming from the DMA.

Meanwhile, in June, Meta said that its Llama model would not be released in Europe due to the “unpredictable nature of the European regulatory environment.” The company has also excluded AI features from its Ray-Ban Meta smart glasses for European users.

National legislation is years away

While the EU tries to stay at the forefront of regulation, the US takes a far more cautious approach, one that doesn't hinder innovation but doesn't always go hand in hand with protecting consumers.

There are calls for regulation, as well as initiatives such as the American Data Privacy and Protection Act, though experts say that, given the current political landscape, it would take years to come into effect.

For example, if the US government now decided to force Meta to stop training its AI on users’ data without their consent, it would need to implement specific legislation that directly limits or restricts the use of personal data for AI training.

“Realistically, enacting such legislation could take several years. First, there would need to be bipartisan agreement on the scope and enforcement mechanisms, which could be a lengthy process given the political landscape,” says Kalim Khan, co-founder and senior partner at Affinity Lawyers.

And once passed, the law would need to be followed by rulemaking from agencies like the Federal Trade Commission to create guidelines and enforcement protocols.

Khan thinks that rather than implementing a broad initiative like the GDPR, the US is more likely to adopt sector-specific regulations that could still be several years away.

In the meantime, individual states, such as California, may take the lead with their own regulations, like the California Consumer Privacy Act.

“The US should adopt a balanced, risk-based approach to AI regulation. This means focusing on high-risk applications of AI – such as in healthcare, finance, or law enforcement – where misuse or privacy breaches could have significant consequences,” Khan says.

Corporations resist regulation

One reason some regulatory measures are adopted slowly is resistance from big companies, says Hamid Ekbia, director of the Autonomous Systems Policy Institute at Syracuse University, New York.

One example he cites is California's AB 2013, a bill that aims to promote greater transparency in AI development. Reports suggest the governor may not sign the legislation due to pressure from industry lobbyists and California’s Congressional delegation.

Ekbia highlights that people often perceive innovation and consumer protection as mutually exclusive, though he says such a view is a false binary propagated by major corporations.

“Innovation doesn’t have to happen only in areas that benefit these companies through ad revenue. Some of the most interesting innovations in AI have taken place in areas such as drug discovery, vaccine development, and other publicly beneficial developments,” he says.

According to Ekbia, AI regulation should steer efforts and resources in this direction while disincentivizing work in areas, such as social media, that have proven to be more harmful than helpful.