AI Act gets final nod in European Parliament: does it go far enough?


The European Parliament passed the AI Act on Wednesday, with the landmark legislation setting the standards for AI governance in the European Union. But does it go too far, or not far enough?

The AI Act is the world's first attempt to regulate AI systems according to a risk-based approach. Now, the road for these rules to finally enter into force has been paved in the European Parliament.

It will still take some time. In April, European institutions will need to approve linguistic changes in the text made by the lawyers. But then – it's a green light. The AI Act will be published in the EU’s official journal in May and become law.

Stringent rules and prohibited practices

MEPs and EU officials are naturally glad. Thierry Breton, the European Commissioner for the single market, said on X that democracy has prevailed over the lobbyists: “Europe is NOW a global standard-setter in AI. We are regulating as little as possible – but as much as needed!”

“We’re laying out a common European vision for the future of this technology: one where AI is more democratic and safe but also, I would hope, more competitive – that is if it’s done right,” lawmaker Eva Maydell said in the Parliament on Tuesday.

As always, it remains to be seen how effective the AI Act will be. For example, EU member states are still to determine which regulator will oversee compliance.

They have 12 months to nominate national competent authorities, although some are ahead of schedule: Spain, for example, already set up an Agency for the Supervision of Artificial Intelligence back in 2023. All these bodies will be supported by the AI Office inside the European Commission.

Under the document, machine learning systems are to be divided into four main categories, according to the potential risk they pose to society. In other words, it’s a “risk-based” approach.

For instance, AI systems aimed at influencing behavior or exploiting a person or group’s vulnerabilities will be banned. Using biometric data to ascertain a person’s race, sexual orientation, or beliefs won’t be allowed either.

Real-time facial recognition in public places? Banned, except when law enforcement is dealing with serious crimes or searching for missing people. Predictive policing, already prevalent in the United States, won’t be allowed, either.

The AI Act’s bans on prohibited practices will be enforced in November. The general-purpose AI rules will apply one year after entry into force, in May 2025, and the obligations for high-risk systems will be activated in three years.

High-risk systems will also be subject to stringent rules that apply before they even enter the EU market. Content created by generative AI models such as ChatGPT will have to be labeled, and their developers will have to publish summaries of the copyrighted data used for training.

Glaring loopholes

For months, though, the fate of the AI Act was threatened by France, Germany, and Italy and their opposition to the regulation of foundation models. The three countries did not want to clip the wings of promising European AI startups such as Mistral AI and Aleph Alpha.

The debate is still heated. Tech companies, especially smaller ones, want to avoid double regulation and simply too many high-risk obligations.

Alon Yamin, the CEO of Copyleaks, a young AI startup, told Cybernews in February that the regulation might stifle innovation among AI newcomers, who actually need a bit more breathing space as they’re starting out with barely any cash.

“There are concerns about how the EU AI Act might prevent exponential innovation in AI in Europe, compared to the strides being made by US and Chinese companies and governments,” said Nitish Mittal, partner at Everest Group, a research firm.

To be fair, Mittal agreed that the AI Act’s “risk management framework is nuanced as it looks at risks throughout the design and development process, considering impacts on safety, privacy, and fundamental rights.”

Besides, there’s a fresh example of how a European tech startup, Mistral AI, played the European lawmakers.

Mistral AI. Image by Getty.

The company complained for months about regulations inserted in the AI Act and about the need to build a strong alternative to American tech giants – but then entered into a partnership agreement with Microsoft, which is, of course, an American tech giant.

Empty words and promises are exactly what digital watchdogs and activist organizations are worried about. What’s more, at least according to AlgorithmWatch, a non-profit research and advocacy organization, member states will have to “plug surveillance loopholes.”

“The Act fails to effectively ban all AI-powered surveillance practices such as automated facial recognition. Specifically, the restrictions on the use of real-time and retrospective facial recognition in the AI Act are minimal and do not apply to private companies or administrative authorities,” said AlgorithmWatch in a press release.

“Since an earlier restriction – that such technology can only be used to address serious cross-border offenses – has been removed from the final text, a vague reference to the ‘threat’ of a criminal offense can now be sufficient to justify the use of retrospective facial recognition in public spaces.”

