The problem with predictive policing and pre-crime algorithms


For decades, technology has enabled police forces to extract valuable insights from crime data. Crime pattern analysis (CPA) has also helped law enforcement agencies identify criminal hotspots and reduce unlawful acts. Continuing this proactive, rather than reactive, approach, a Sheriff's office in Pasco County, Florida, adopted an intelligence-led policing method that uses AI algorithms to identify repeat offenders.

The new form of policing feels eerily reminiscent of Philip K. Dick's The Minority Report, published back in 1956.

Predictive policing involves the targeting of high-frequency offenders to ensure that they don’t fall back into their old habits.


In movie terms, if you combine Moneyball with Minority Report, the result is a PreCrime division that solves crimes before they happen.

The process begins when the Sheriff's Office generates a list of the people it considers most likely to break the law. Big data from arrest histories and other unspecified intelligence is fed into the AI system, which enables police analysts to advise officers to interrogate anyone the algorithm predicts might commit a crime in the near future.
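
Pasco's actual model has never been published, but a naive version of such a risk-scoring heuristic is easy to sketch. Everything in the snippet below (the fields, the weights, and the build_target_list helper) is a hypothetical assumption for illustration only:

```python
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    arrest_count: int        # total prior arrests
    recent_arrests: int      # arrests in the last two years
    flagged_associates: int  # known contacts already on the target list

def risk_score(p: Person) -> float:
    # Hypothetical weighted sum; the real model and weights are not public.
    return 1.0 * p.arrest_count + 2.0 * p.recent_arrests + 0.5 * p.flagged_associates

def build_target_list(people: list[Person], top_n: int = 100) -> list[Person]:
    # Rank everyone by score and keep the top N as "prolific offenders".
    return sorted(people, key=risk_score, reverse=True)[:top_n]
```

Note that a score like this predicts nothing about any specific future crime; it simply ranks people by their past contact with police.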

Many residents accused law enforcement officers in Pasco of harassing entire families in their own homes. A failure to comply with arbitrary requests often resulted in charges for petty code violations, from a missing mailbox number to overgrown grass.

A pre-crime state

The concept of pre-crime, where even someone who hasn't done anything wrong can become a person of interest, has been around for longer than many realize. In his book Permanent Record, Edward Snowden famously warned that we would all be criminals if technology were used to enforce every law.

Most people reading this will have been guilty of jaywalking, putting their trash in the recycling bin and recyclables in the trash, or riding their bicycle in the wrong lane. What would happen if facial recognition resulted in you being charged for such an infraction? If it did, imagine then becoming a PreCrime division target for a future offence.


Controversial recidivism algorithms could then send you to prison and even determine the length of your sentence. Almost anyone could quickly suffer a dramatic fall from grace. Worse, the human bias fed into AI algorithms replicates and amplifies racism, which puts minorities at a much higher risk of being harassed by law enforcement agencies.

The definition of insanity is doing the same thing over and over again and expecting a different result. Previous experiments with predictive policing revealed that the software created feedback loops that repeatedly sent police officers to the same neighbourhoods, which reinforced officers' bias and drove up prosecutions there.

[Screenshot of a tweet by Shakeer Rahman]
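
The feedback loop is easy to reproduce in a toy simulation. In the sketch below, the neighbourhood counts, crime rates, and seed are all invented for illustration: three neighbourhoods have identical true crime rates, but patrols follow recorded crime, and crime is only recorded where officers are present.

```python
import random

random.seed(0)

TRUE_RATE = [0.10, 0.10, 0.10]  # three neighbourhoods, identical real crime rates
recorded = [1, 0, 0]            # one chance arrest skews the initial history

for day in range(1000):
    # The patrol is dispatched to wherever recorded crime is highest...
    target = max(range(3), key=lambda i: recorded[i])
    # ...but crime is only recorded where an officer is there to observe it.
    if random.random() < TRUE_RATE[target]:
        recorded[target] += 1

print(recorded)  # every new record lands in neighbourhood 0; the others stay at zero
```

Despite identical underlying behaviour, the data "confirms" that the first neighbourhood is the problem area, and each new arrest strengthens the case for sending officers back.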

Both Los Angeles and Chicago turned their backs on predictive policing methods after learning that they only made matters worse. The inconvenient truth is that, in its current state, predictive policing creates more problems than it solves. And although algorithms have a poor reputation in the fight against crime, our connected devices could put anyone behind bars as well.

When a suspect in the UK claimed they were at home washing their clothes, a digital forensics team was able to prove that the smart washing machine had been activated from a smartphone located at the crime scene. Smartwatches are also appearing in more legal cases, with heart-rate data and the tracking of the accused's last movements being of particular interest.

It's a warning, not an instruction manual

Some authorities seem to have misunderstood the key messages delivered in the dystopian fiction books we grew up with. The lesson was that totalitarianism created vast inequities and put citizens in continuous conflict with one another. Sadly, there is increasing evidence that surveillance, thought control, and brainwashing through repetition are very real in modern society.

Big data and algorithms are paving the way for so-called PreCrime divisions and thought police. But once again, it's humans, not technology, that are to blame. Humans misread dystopian fiction and chose to treat it as an instruction manual rather than a warning.

With AI, we're like children playing with a very dangerous toy, blissfully unaware of the existential risks to society's future. Garbage in, garbage out: if we dare to look long and hard at ourselves in the black mirror, it's humans who are the cause of bias in AI. The sexism and racism that AI and machine learning (ML) platforms have inherited are all of our doing.


However, as an eternal optimist, I am hopeful that the global community will leverage the same technology to focus on sustainability, improving our lives and helping the planet on which we all reside. Humans and AI are stronger together than apart, each enhancing the other's capabilities. Sadly, this is one lesson that we are yet to learn.
