Different forms of biometric surveillance, predictive policing, and “harmful” uses of AI in migration control must be banned in the EU, rights advocates say.
Over a hundred civil society groups and academics across Europe have called on EU policymakers to put limits on “unchecked forms of discriminatory and mass surveillance.”
The EU AI Act provided an “urgent opportunity” to set legal boundaries for authorities to use AI and protect people from rights violations, the groups said in a statement.
Bans should apply to law enforcement, migration control, and national security agencies across Europe, they said.
“Increasingly, in Europe and around the world, AI systems are developed and deployed for harmful and discriminatory forms of state surveillance,” the statement read.
Areas of concern cited by the groups range from the use of biometrics for identification, recognition, and categorization to predictive systems used in decision-making and resource allocation.
“AI in law enforcement disproportionately targets already marginalized communities, undermines legal and procedural rights, and enables mass surveillance,” they said.
The EU AI Act, the bloc’s first comprehensive legislation to regulate AI, was approved by the European Parliament in June.
It bans systems that present an “unacceptable level of risk” to people’s safety and privacy, such as predictive policing tools of the kind used in several US states and China-style social scoring practices.
It also sets limits on “high-risk” AI technologies that could harm people’s health or influence elections.
The AI Act has yet to be approved by the EU member states and their leaders in the European Council, with biometric surveillance among the points of contention.
While EU lawmakers argue for a complete ban of public biometric surveillance, including facial recognition, some member states want exceptions on national security, defense, and military grounds.