
Big tech retreats from facial recognition, but the war has only just begun


If we put down our pitchforks and torches for a moment, it's important to remember there are significant differences between facial recognition, verification, and identification. 

Face recognition is often used as an umbrella term covering both verification (confirming that a person is who they claim to be, sometimes called authentication) and identification (working out who an individual is from an image of their face alone).

These subtle differences are easier to understand through our digital lifestyle. When we unlock our phone or log in to our bank using Face ID or a fingerprint, the technology authenticates us by comparing the captured biometric against a series of identity markers in a previously stored template. Contrary to popular opinion, these biometric authentication methods cannot identify exactly who the user is.
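To make the distinction concrete, here is a minimal Python sketch of 1:1 verification. Everything in it is an illustrative assumption: a real system would replace the random vectors with embeddings produced by a face model, and the threshold would be tuned to a target error rate.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder "embeddings": in a real system these vectors would come
# from a face model that maps an image to a fixed-length feature
# vector. Random vectors are used here only so the sketch runs.
enrolled_template = rng.normal(size=128)                     # stored at enrolment
probe = enrolled_template + rng.normal(scale=0.1, size=128)  # new capture, same user

def cosine_similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe_vec, template, threshold=0.8):
    """1:1 verification: is this the same person as the ONE enrolled template?

    The system answers only yes/no against a stored template; it never
    learns *who* the user is, which is why on-device unlock is
    verification, not identification.
    """
    return cosine_similarity(probe_vec, template) >= threshold

print(verify(probe, enrolled_template))  # -> True for a close match
```

Note that the stored template never needs to be linked to a name: the comparison is against a claimed identity, not a search across everyone.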

Facial recognition quickly becomes creepy when it's embraced by law enforcement agencies that scan footage from the eye in the sky against a database of images of persons of interest and notify the relevant authorities when a match is found. The mantra “If you've got nothing to hide, you've got nothing to fear” is no longer going to cut it if you upset the wrong people. Shouldn't privacy be the default?

In the workplace, security systems increasingly use a database of employee facial images to determine who can and cannot enter restricted areas. But this is just the beginning. In China, facial recognition technology enables authorities to match every face to an ID card. Linked up to every CCTV camera, the system makes it incredibly easy to track an individual's movements and who they speak with.

How did we get here?

In the middle of the 2014 Academy Awards, host Ellen DeGeneres encouraged some of the world's biggest stars to squeeze in for a selfie. For many, this was the watershed moment that highlighted how easy it was to capture a selfie and share it with the world. Tourists all over the world added the ubiquitous selfie-stick to their hand luggage, and everyone rushed to flood their social media platform of choice with images of their faces.

Here in 2020, tech companies are harvesting our selfies for their facial recognition databases. Taylor Swift reportedly used facial recognition at her concerts to scan the crowd for stalkers. And although citizens of the world are now wearing face masks, researchers are seizing the opportunity to crawl the internet for face-mask selfies to help train facial recognition tools and algorithms.

Clearview AI has reportedly scraped 3 billion labelled faces from Facebook and other social media platforms. FindFace takes things a step further by performing constant live scanning of people's faces. Are we sleepwalking our way into a Big Brother state? What happens when every selfie you have ever taken is handed over to authorities who can match it against CCTV footage to reconstruct your everyday activities?

The same CCTV cameras that were meant to protect citizens are now being used to track and monitor us. If the technology is not appropriately regulated, attending a peaceful protest could result in you being added to a watch list. Knowing that you are always being watched will force many to change their behaviour and even censor themselves.

Why big tech is social distancing from facial recognition

In the last few weeks, tech giants IBM, Amazon, and Microsoft have quickly backtracked from selling facial recognition tools to authorities amid concerns over police violence and racial profiling. Another problem is that the tech is not mature enough for the kinds of deployments it is being used in.

The applications are being tested in highly constrained lab settings, and the performance results appear rather good. But there is a significant difference between lab testing and commercial deployment. It is relatively easy to tweak applications to maximize test results: the test data sets are consistent, and so are the test environments, which allows application "tweaking" in subsequent tests.
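A toy simulation (all numbers invented) shows why reusing the same test set inflates results: if you try many variants that are all equally good in reality and keep whichever scores best on one fixed benchmark, the reported number drifts above the true accuracy.

```python
import random

random.seed(42)

TRUE_ACCURACY = 0.90  # assumed: every "tweak" has the same real-world accuracy
TEST_SET_SIZE = 500   # the fixed, reused benchmark
N_TWEAKS = 50         # variants scored against the same test set

def score_on_fixed_test_set():
    """Accuracy of one variant on the reused set (sampling noise only)."""
    hits = sum(random.random() < TRUE_ACCURACY for _ in range(TEST_SET_SIZE))
    return hits / TEST_SET_SIZE

# Keep whichever tweak looks best on the benchmark...
best_reported = max(score_on_fixed_test_set() for _ in range(N_TWEAKS))
# ...then see how a system with that same true accuracy fares on fresh data.
fresh_score = score_on_fixed_test_set()

print(f"best score after tweaking on one test set: {best_reported:.3f}")
print(f"typical score on fresh data:               {fresh_score:.3f}")
```

The gap between the two printed numbers is pure selection effect; nothing about the system actually improved.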

However, lab tests cannot duplicate the wide variations in settings and environments found in the wild. And, in some cases, the technology is not being applied to the most appropriate use cases.

The second issue is that there is almost no oversight, let alone consequences for irresponsible use. Many organizations take whatever their systems report and run with it, without regard for the effects of a false positive. They believe it's better to make mistakes and clean up afterwards than to wait for the tech to mature and deliver far more accurate results.

One organization, the Biometrics Institute, has been promoting the responsible use of biometrics for about 18 years. While it has made progress, its advocacy has mostly not been matched by deeds. Companies like Clearview AI, PimEyes, and Ayonix like to talk about being responsible, but have merely doubled down after Microsoft, IBM, and Amazon backed off, making it understood they are ready to sell to law enforcement or governments to take up the slack.

The technical limitations of facial recognition

When I spoke to John Wojewidka, VP of Communications at FaceTec, about the technical limitations behind the headlines, he explained that facial recognition is a 2D-based technology. Sensors - like cameras - gather images in two dimensions, then compare them to 2D photos in a database. A 2D image simply does not contain enough data to allow these systems to identify an individual from within a large number of people (1:N, or one-to-many) with the kind of certainty that is needed, and attributes like skin tone can cause problems.
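The scale problem Wojewidka points at can be made concrete with a back-of-the-envelope calculation: even a small per-comparison false match rate compounds across a large gallery. The false match rate below is invented purely for illustration, not a benchmark of any particular system.

```python
# Why 1:N search degrades as the gallery grows: if each 1:1 comparison
# has a false match rate (FMR), a search against N enrolled faces
# returns at least one wrong hit with probability roughly
# 1 - (1 - FMR)**N.

fmr = 1e-5  # assumed: one false match per 100,000 comparisons

for n in (1, 1_000, 100_000, 1_000_000, 10_000_000):
    p_false_hit = 1 - (1 - fmr) ** n
    print(f"gallery of {n:>10,}: P(>=1 false match) = {p_false_hit:.5f}")
```

Under this assumed FMR, a city-scale gallery of a million faces makes at least one false hit a near certainty, which is exactly the regime surveillance deployments operate in.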

Ironically, 1:N identification is the primary function of surveillance, and what law enforcement applications use the technology for. 2D capture also does not allow the required level of certainty or accuracy when an individual needs to be authenticated for access to, say, an account or a physical location (1:1). The shortage of data can allow photos or videos to stand in for a real person, letting a bad actor gain access.

As an example of what it takes to authenticate in the real world with better-than-human certainty, FaceTec believes it can achieve this the right way. Its AI-driven 3D application has, over several years, learned to identify an individual out of more than 13 million people, and can verify that what the camera sees is a real, live, present person with 99.9999% certainty. The company claims to do so without being intrusive.

A new hope

To avoid these problems, Wojewidka told me, FaceTec also believes a system must be able to recognize a legitimate account holder without knowing their name, much as humans do. You can recognize your neighbour, and determine that they are alive, without knowing anything else about them. Unfortunately, not all tech companies have this mindset or set of ethics at the core of their business.

Facial recognition becomes creepy when it is used to pick one face out of a crowd. But facial authentication is far less offensive and intrusive, as it simply enables users to access their own accounts. The differences between how law enforcement and transportation applications use facial recognition highlight why we shouldn't confuse identification and verification with surveillance.

However, there are tech companies that are scraping selfies from social media, tweaking their facial recognition algorithms, and willing to sell the results to the highest bidder. Until there are substantial economic and legal consequences, there will always be a company happy to ring up sales regardless of the potential damage to individuals. The unfortunate truth is that oversight lags far behind both the understanding of the tech and its legal ramifications. This might take a while.
