
A robot policeman was unveiled on Wednesday in Nakhon Pathom, Thailand, during the annual Songkran festival.
The robot, named “Pol Col Nakhonpathom Plod Phai,” which means “Nakhon Pathom is safe,” is intended to enhance public safety.
The Royal Thai Police have dubbed the robot “AI Police Cyborg 1.0.” It is equipped with 360° vision and can allegedly integrate live footage from CCTV cameras and drones, processing the data using AI technology.
According to a Facebook post by the Thai police, AI Police Cyborg 1.0 can:
- Identify high-risk individuals and notify officers using facial recognition technology
- Track suspicious people throughout the entire event
- Search for individuals by scanning facial features, body type, and other distinguishing marks
- Identify weapons
- Monitor violent or disruptive behavior
The post does not specify which AI systems the robocop has been trained on.
What could go wrong with deploying such systems to monitor public spaces? Beyond the obvious privacy concerns, there is also the risk of wrongful accusations (NYT, $) or misinterpretation of behavior.
Facial recognition algorithms are well documented to be less accurate for people of color and other groups historically underrepresented in training datasets.