Clearview AI hit with massive legal complaint by privacy watchdogs

Regulators have three months to respond to the complaint by Privacy International and others.

The data harvesting practices of American facial recognition company Clearview AI are subject to a massive legal complaint launched by four large privacy and digital rights organisations today.

Privacy International, alongside the Hermes Center for Transparency and Digital Human Rights, Homo Digitalis and noyb - the European Center for Digital Rights, has launched legal complaints with regulators across Europe about Clearview’s data collection practices.

The complaints, lodged with data protection regulators in France, Austria, Italy, Greece and the United Kingdom, allege Clearview AI uses an “automated image scraper” to search the web and collect any images it detects as containing human faces. 

The company boasts of possessing the “largest known database of facial images,” with more than three billion records.

Those faces are then processed by Clearview's facial recognition software, enabling the company to build what the complainants call “a gigantic biometrics database.” Clearview then markets that database to private companies and law enforcement agencies across the globe. Some law enforcement agencies in Europe have reportedly signed contracts with Clearview, and the company was the subject of a highly critical New York Times investigation in January 2020, in which one of the firm’s backers said Clearview “might lead to a dystopian future.”

The case against Clearview AI

Last month, the Italian data protection authority blocked police forces from using real-time facial recognition. “Facial recognition technologies threaten our online and offline lives,” said Fabio Pietrosanti, President of the Hermes Center, one of the complainants. “By surreptitiously collecting our biometric data, these technologies introduce a constant surveillance of our bodies.”

“European data protection laws are very clear when it comes to the purposes companies can use our data for,” said Ioannis Kouvakas, Legal Officer at Privacy International, which is leading the complaint. 

“Extracting our unique facial features or even sharing them with the police and other companies goes far beyond what we could ever expect as online users.”

“Clearview seems to misunderstand the Internet as a homogeneous and fully public forum where everything is up for grabs,” said Lucie Audibert, Legal Officer at PI. “This is plainly wrong. Such practices threaten the open character of the Internet and the numerous rights and freedoms it enables.”

Future rights over data up for debate

Professor Alan Woodward of the University of Surrey says the complaint will raise a broader question about personal rights over the data you generate. “I think it might get wrapped up with the whole question of do you have rights to your own image,” he said.

"Just because something is 'online' does not mean it is fair game to be appropriated by others in any which way they want to - neither morally nor legally. Data protection authorities need to take action and stop Clearview and similar organizations from hoovering up the personal data of EU residents," said Alan Dahi, data protection lawyer at noyb. 

But the case may be more complicated. “In the case of Clearview they can claim the data is used for very specific purposes, and they might argue that they maintain control over data whilst allowing matches against it,” said Woodward. In the UK at least, the outcome may well hinge on another case currently before the courts: Lloyd v Google, which looks set to establish a precedent for data protection law in the UK going forward.

"It is important to increase scrutiny over this matter. The DPAs have strong investigative powers and we need a coordinated reaction to such public-private partnerships,” said Marina Zacharopoulou, lawyer and member of Homo Digitalis.

Clearview AI CEO Hoan Ton-That told CyberNews: “Clearview AI has never had any contracts with any EU customer and is not currently available to EU customers.”

“We have voluntarily processed the five Data Access Requests in question, which only contain publicly available information, just like thousands of others we have processed.”  

He added: “Clearview AI has helped thousands of law enforcement agencies across America save children from sexual predators, protect the elderly from financial criminals, and keep communities safe. National governments have expressed a dire need for our technology because they know it can help investigate crimes like money laundering and human trafficking, which know no borders.”
