Book review: where do you go when there’s nowhere to hide?

If you ever wanted smart glasses that can identify exactly who you’re looking at or talking to, please be aware that you’re contributing to the current onslaught on privacy. This is discussed in Your Face Belongs To Us, a new book by New York Times journalist Kashmir Hill.

Essentially, the book is a haunting portrait of sci-fi darkness in the real world, and the ultimate villain here is Clearview AI, a secretive facial recognition start-up that has built a huge, searchable database of people’s faces.

Various law enforcement agencies around the world, authoritarian governments included, have contracted the firm and can now piece together everything about a person’s life from a single photograph. Clearview AI made it possible – so why not use it?

If you’re a police officer trying to catch a criminal, it’s all very exciting – especially in, say, non-democratic countries such as Russia or China, where facial recognition technology has made identifying annoying protesters so much easier.

But if you’re even slightly conscious of your own privacy, you should worry about the possibility of a future in which being anonymous in public will be virtually impossible, Hill says.

Dystopia turned into reality

It all started with a tip. In late 2019, Hill learned about claims by Clearview AI’s high-profile lawyer Paul Clement, formerly the Solicitor General under US President George W. Bush, that hundreds of American law enforcement agencies were already using the tool.

Clearview AI, of course, had to hire lawyers because officers – at least some of them – needed convincing that they weren’t committing a crime by using images of people scraped from their social media profiles to identify them.

Hill, a tech reporter at the Times, usually writes about the ways technology is changing our private lives, so she naturally knew all about the advances in facial recognition technology. Unsurprisingly, she started digging.

What Hill found was startling, if not terrifying. Sure, you have to be aware of what the technology means if you want to grasp the possible consequences, and the author does a great job of explaining the ins and outs of facial recognition in the book.

It turns out that the technology, previously regarded as a “dystopian technology that most people associated only with science fiction novels or movies such as Minority Report,” is very real and already used in everyday life.

Clearview AI – which is now under regulatory fire in many countries – holds billions of images scraped from social media. I haven’t checked, but I’m guessing that I’m in that database – and so are you and your friends.

Now, in theory, why would I worry if the police could identify me from my images on the internet? If I committed a crime, I should face the consequences, and if facial recognition helped the officers nab me, so be it. Biometric privacy, one might argue, has its limits.

Besides, the 2022 settlement between Clearview AI and the American Civil Liberties Union (ACLU) banned the firm from selling its so-called faceprints to most private companies, so it’s all kosher, right?

Problems aplenty

The problem, Hill writes, is that it’s not so simple. Elegantly moving through decades of technological innovation, she’s very convincing in reminding the reader of the sometimes forgotten importance of anonymity.

“Would young men have been able to dodge the draft during the Vietnam War, or would they have been tracked down with real-time cameras and been conscripted? Would war objectors have been able to protest without fear of being identified and fired?” writes Hill.

“Same-sex relationships were stigmatized in the 1960s. Would it have become too dangerous to socialize publicly or to go to gay bars for fear of a bigot with a facial recognition app outing you? The availability of perfect surveillance in the early 1960s might have preempted the meaningful social change to come.”

Anonymity provides powerful protection for those who don’t conform to the status quo. Of course, again, in countries like China and Russia, where facial recognition technology is basically in-your-face, resisting the norm is now near-impossible without being detected.

Unfortunately – and this is probably the one thing that leaves the book feeling incomplete – the coronavirus pandemic shook the world just as Hill was writing it, and she wasn’t able to travel to those countries to pursue the technology further.


And when it comes to police work, it’s no secret that facial recognition tech should only be used as a guiding tool. People certainly shouldn’t be prosecuted based on the program alone.

As Hill notes, overtrust in facial recognition technology has already led to numerous mistakes. In August, the US city of Detroit was forced to change its facial recognition policy after a woman who was eight months pregnant said she had been wrongly charged with robbery and carjacking.

Porcha Woodruff became the sixth person in the US to report being falsely accused of a crime as a result of facial recognition tech used by police, the ACLU said after helping the woman sue the Detroit police department.

Data exploitation policies

After Hill’s article appeared in the New York Times in January 2020, Clearview AI naturally attracted a lot of attention. Not all of it was unwanted, though – while Hill was writing her book, more and more law enforcement agencies around America were starting to use its tool.

Maybe the book, released at the end of September, will now shine so much light onto the firm that using its facial recognition technology will become a sort of taboo. But it certainly doesn’t seem like it so far.

In May 2022, Clearview AI even expanded the sales of its software, moving on from mainly serving the police to attracting private companies. Vaale, a Colombian app-based lending start-up, has adopted Clearview AI to match selfies to user-uploaded ID photos, and another firm selling visitor management systems to schools has signed up, too.

It seems that Clearview AI doesn’t really care about the ongoing debate over the ethics of collecting faceprints en masse, or about the fines levied in countries like the United Kingdom and Italy for collecting online images without consent in breach of privacy laws.

The issue here, unfortunately, is that most of us don’t really care. Throughout the book, Hill is almost begging people to pay more attention to their online data and warning us that drooling over augmented reality devices is really, really not smart.

“Too many people currently on the internet do not realize what’s possible. People on OnlyFans, Ashley Madison, Seeking, and other websites that cultivate anonymity are hiding their names but exposing their faces, not realizing the risk in doing so,” writes Hill.

But, again, people are busy and don’t really have time to dive into all the settings on an app or read extensive privacy policies. Most of us assume that the sheer existence of a privacy policy means our data is protected when, in fact, the policy exists to explain – in boring legalese, of course – how the company plans to exploit it.

Clearview AI certainly does. Kudos to Meta and Google, by the way – unlike the ambitious start-up, these tech giants have carefully edged back from using facial recognition in their software, despite certainly having the computing power and data to do whatever they please.

In short, be very, very careful, Hill says again and again. If we’re not, we might all soon face the reality of today’s Beijing.

The Chinese capital has upgraded its public bathroom tech considerably over the past decade. In 2017, the city installed toilet paper dispensers with facial recognition technology in public bathrooms at the historic Temple of Heaven to thwart toilet paper thieves.

The machines now dispense two feet of toilet paper per face and refuse to give more to the same person until nine minutes have elapsed. People may even be fined. Welcome to the future – it’s quite disturbing, isn’t it?