Europol has released its long-awaited AI accountability framework as part of a project to guide law enforcement on the thorny subject of AI use.
The Accountability Principles for Artificial Intelligence (AP4AI) project aims to create a practical toolkit for security organizations, balancing the opportunities of AI against ethical concerns.
It will be based on 12 accountability principles, from oversight and transparency to mechanisms for redress. Each of these, says Europol, is being turned into a set of actionable steps.
"I am confident that the AP4AI Project will offer invaluable practical support to law enforcement, criminal justice and other security practitioners seeking to develop innovative AI solutions, while respecting fundamental rights and being fully accountable to citizens," says Catherine De Bolle, executive director of Europol.
"This report is an important step in this direction, providing a valuable contribution in a rapidly evolving field of research, legislation and policy."
Research finds concerns
Over the last year, Europol has polled thousands of European citizens, finding that more than 87 percent agree or strongly agree that AI should be used for the protection of children and vulnerable groups and to detect criminals and criminal organizations.
At the same time, though, more than 90 percent say they expect the police to be held accountable for the way they use AI and its consequences.
At the heart of the debate is the way data can be used to identify patterns of criminal behavior. At its best, this can mean helping to identify large-scale fraud or the sharing of child sexual exploitation content.
More controversially, though, it has in the past meant identifying crime hotspots or even individuals deemed likely to offend. And this can introduce bias: when algorithms are trained on existing data such as arrests, studies have shown, racist feedback loops can arise. The project aims to allay these concerns.
"The international security community is well aware that the use of AI must be proportionate, ethical and ensure a level of accountability," says Professor Babak Akhgar, director of the Centre of Excellence in Terrorism, Resilience, Intelligence and Organised Crime Research (Centric), Europol's partner in the project.
"It is vital that the societies which agencies protect and serve have confidence in the way AI is utilised."
The Artificial Intelligence Act
Many of these issues will also be tackled by the EU's upcoming Artificial Intelligence Act. In its draft form, this prohibits systems that distort human behavior; systems that exploit the vulnerabilities of specific social groups; systems that provide 'scoring' of individuals; and the remote, real-time biometric identification of people in public places.
However, more than 40 civil organizations have recently called for the Act to specifically ban AI-based predictive policing altogether.
"Age-old discrimination is being hard-wired into new-age technologies in the form of predictive and profiling AI systems used by law enforcement and criminal justice authorities," says Griff Ferris, legal and policy officer at Fair Trials.
"Seeking to predict people’s future behavior and punish them for it is completely incompatible with the fundamental right to be presumed innocent until proven guilty. The only way to protect people from these harms and other fundamental rights infringements is to prohibit their use."
A blanket ban, though, is highly unlikely. AI now pervades every aspect of life - and often with good reason and good effect. For European law enforcement to be barred altogether from using data and algorithms to predict trends would be a massive own goal.
Unsurprisingly, there has been deep division over the issue amongst legislators - not least over the definition of AI used in the Act, which some say could cover much traditional software. The controversy is likely to continue, with the first draft of the Act set for next month, to be followed by discussions on amendments and a final vote in November.