- Humans should supervise AI systems and algorithms should be open
- Ban private facial recognition databases, behavioural policing and citizen scoring
- Automated recognition should not be used for border control or in public spaces
The European Parliament adopted a resolution demanding strong safeguards when artificial intelligence tools are used in law enforcement.
“Fundamental rights are unconditional. For the first time ever, we are calling for a moratorium on the deployment of facial recognition systems for law enforcement purposes, as the technology has proven to be ineffective and often leads to discriminatory results. We are clearly opposed to predictive policing based on the use of AI as well as any processing of biometric data that leads to mass surveillance. This is a huge win for all European citizens,” said rapporteur Petar Vitanov in a press release.
In a resolution adopted by 377 votes in favour, 248 against, and 62 abstentions, MEPs point to the risk of algorithmic bias in AI applications and emphasize that human supervision and strong legal powers are needed to prevent discrimination by AI, especially in law enforcement and border-crossing contexts. Human operators must always make the final decisions, and subjects monitored by AI-powered systems must have access to remedies, say MEPs.
According to the text, AI-based identification systems already misidentify minority ethnic groups, LGBTI people, seniors, and women at higher rates, which is particularly concerning in the context of law enforcement and the judiciary. To ensure that fundamental rights are upheld when these technologies are used, algorithms should be transparent, traceable, and sufficiently documented, MEPs say. Where possible, public authorities should use open-source software in order to be more transparent.
MEPs ask for a permanent ban on the automated recognition of individuals in public spaces, noting that citizens should only be monitored when suspected of a crime. Parliament calls for the use of private facial recognition databases (like the Clearview AI system, which is already in use) and predictive policing based on behavioural data to be forbidden.
MEPs also want to ban social scoring systems, which try to rate the trustworthiness of citizens based on their behaviour or personality.
Parliament is also concerned by the use of biometric data to remotely identify people. For example, border control gates that use automated recognition and the iBorderCtrl project (a “smart lie-detection system” for traveler entry to the EU) should be discontinued, say MEPs, who urge the Commission to open infringement procedures against member states if necessary.