
Coded bias: do you need to be white to play peekaboo with a robot?


Facial recognition systems are prone to errors, especially when it comes to people of color. When Joy Buolamwini first started working with social robots, she felt invisible and had to wear a white mask while coding. Otherwise, the robot would simply ignore her.

The algorithm behind an artificial intelligence system might seem like a simple mathematical equation, and so, in theory, it should treat everybody the same. Yet scientists studying facial recognition algorithms claim that some of them are built on pale male databases, meaning they might work well for white men but fail to correctly identify people of other genders and skin tones.

Injustice, racism, and inequality might already be embedded in the algorithm behind any AI system. The trouble is that this makes it hard for people without a background in math or data science to argue how unjust a particular system might be.

Starting April 5, Netflix is streaming the documentary Coded Bias (2020) by Shalini Kantayya. The film follows computer scientist Joy Buolamwini, along with data scientists, mathematicians, and watchdog groups from all over the world, as they fight to expose the discrimination within algorithms in employment, banking, insurance, dating, policing, and social media.

The rise of artificial intelligence promised to eliminate the errors of human prejudice. But AI is only as unbiased as the humans and the historical data behind it, Joy claims.

Joy’s story

Joy first encountered the issue of being invisible to an AI system when she was still an undergraduate.

“I was working on social robots, and I was trying to play peekaboo with social robots, but peekaboo doesn’t work if your partner does not see you. Even as an undergraduate, I was encountering some of these issues. At that time, I would ask my white roommate to test the system and then get it to work,” Joy said during a discussion after the movie screening at Her Dream Deferred 2021.

Then she carried on with her research on facial recognition at the M.I.T. Media Lab and experienced déjà vu.

“I just figured by the time I came to grad school at M.I.T., the epicenter of innovation, this is not where I expected to be coding in a white face. I was bemused. I didn’t even start with a white mask. I was having a little bit of trouble for my project to work. I drew a face on my hand, and it detected a face on my hand. It was a poetic layup,” she said.

Joy felt invisible. Yet, do you really want to be visible and recognizable in the context of mass surveillance?

At M.I.T., Joy discovered that some algorithms could not detect dark-skinned faces or classify women accurately. It led to the harrowing realization that the very machine-learning algorithms intended to avoid prejudice are only as unbiased as the humans and the historical data that shape them. Joy looked into some of the databases that were used to build those algorithms.

“I would call them pale male datasets because they consisted of 75% men, 80% lighter skin individuals,” she said. It means that these algorithms mirrored the pale male understanding of the world. Her M.I.T. thesis methodology uncovered large racial and gender bias in AI services from Microsoft, IBM, and Amazon.
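At its core, that kind of audit is a simple measurement: run the same benchmark through a system and compare error rates across intersecting subgroups, such as skin type and gender. The Python snippet below is only an illustrative sketch of that idea; the data, labels, and grouping are hypothetical, not her actual methodology or code.

    # Minimal sketch of a demographic error-rate audit.
    # Hypothetical data and labels; not the actual thesis code.
    from collections import defaultdict

    # Each record: (predicted_gender, true_gender, skin_type_group)
    predictions = [
        ("male", "male", "lighter"),
        ("male", "female", "darker"),
        ("female", "female", "lighter"),
        ("male", "female", "darker"),
    ]

    totals = defaultdict(int)
    errors = defaultdict(int)

    for predicted, actual, group in predictions:
        key = (group, actual)
        totals[key] += 1
        if predicted != actual:
            errors[key] += 1

    # Report the error rate for every (skin group, gender) intersection.
    for key in sorted(totals):
        rate = errors[key] / totals[key]
        print(f"skin={key[0]:7s} gender={key[1]:6s} error rate={rate:.0%}")

Skewed training data shows up exactly here: the error rate for underrepresented intersections climbs while the overall average still looks acceptable.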

Joy didn’t just write the paper and let the issue rest. Startled by her findings, she founded the Algorithmic Justice League, gave a TED talk, and started advising world leaders on reducing AI harms through service on the Global Tech Panel, congressional testimonies, and keynotes.

It is not a math problem

“There are ethical choices in every single algorithm we build,” says Cathy O’Neil, author of Weapons of Math Destruction.

She believes that people are too intimidated to question the fairness of algorithms.

“The power behind this mathematical intimidation. It is math shaming. You are not a Ph.D. in math, so you can’t possibly understand this. It is the power that technologists and computer scientists wield. The salespeople in these companies play on it,” she said after the screening.

But this isn’t a math test. Math is only used to intimidate people into silence, Cathy argues. Algorithms are just another language, and society’s core values and fairness should be translated into code.

Cathy now conducts algorithmic audits, evaluating for whom certain algorithms work and who is put at a disadvantage.

“I am separating what we mean by success, what our values are, for whom this is a failure. Then I try to get people to explicitly state their values and then measure them, measure the harm on the stakeholders and the people who are impacted by this algorithm,” she explained.
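In practice, one step of such an audit can be as plain as stating the chosen measure of success up front and then computing it per stakeholder group. The Python sketch below is a hypothetical illustration using a simple approval-rate comparison (a disparate-impact ratio); the data and the choice of metric are assumptions, not O’Neil’s actual audit tooling.

    # Hypothetical sketch of one audit step: pick an explicit definition of
    # "success", then measure it per stakeholder group.
    # Illustrative data and metric; not Cathy O'Neil's actual methodology.

    # Each record: (group, algorithm_said_yes)
    decisions = [
        ("group_a", True), ("group_a", True), ("group_a", False),
        ("group_b", True), ("group_b", False), ("group_b", False),
    ]

    def approval_rate(group: str) -> float:
        """Share of positive decisions the algorithm gave this group."""
        outcomes = [ok for g, ok in decisions if g == group]
        return sum(outcomes) / len(outcomes)

    rate_a = approval_rate("group_a")
    rate_b = approval_rate("group_b")

    # Disparate-impact ratio: how the worse-off group fares relative to the
    # better-off one (1.0 means parity; the "80% rule" flags ratios below 0.8).
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
    print(f"group_a: {rate_a:.0%}, group_b: {rate_b:.0%}, ratio: {ratio:.2f}")

The hard part, as O’Neil stresses, is not the arithmetic but the value judgment hidden in choosing which metric counts as fair and which stakeholders get measured at all.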

The truth is, injustice doesn’t happen at the mathematical, statistical, or scientific level. Cathy stressed several times that this is not a math problem.

“The most interesting part is the question of values, of how do we balance these stakeholder groups against that stakeholder groups’ interests. And this is why Facebook says AI is going to solve a problem of misinformation, or they decide that their algorithm distributing newsfeed is neutral, but it’s complete garbage. It is, in fact, exactly to benefit them and to destroy everybody else. They have chosen themselves as the only stakeholder in that conversation, and their interest is profit, and nothing else matters. Let’s think about all the stakeholders here and balance their interests explicitly,” she said.

Her job as a data scientist is to say whether core values are reflected in a particular piece of code.

Ruha Benjamin, a professor in the Department of African American Studies at Princeton University, shares the view that technology does not create inequalities and biases so much as fortify existing ones.

“These are social processes that are being sped up or scaled through technology. Technologies are not creating the problem by themselves but rather amplifying existing issues,” she said after the screening.

The uprising in Brooklyn

Algorithmic bias is one problem. There’s also the question of whether you want to be seen and recognized at all. Mass surveillance in the US, amplified by the pandemic, is a concern for many citizens.

Tranae’ Moran, a multimedia artist born and raised in Brooklyn, first encountered the problem of surveillance when she bought an iPhone X.

She started reading the facial recognition terms and didn’t feel comfortable with her iPhone taking measurements of her face.

Face detected. Screenshot from YouTube

“I did not turn that feature on because I was not interested in tracking my face because I didn’t know what else that data could do,” she said.

Soon enough, she came home to find a large manila envelope on her doormat. The building management was notifying residents that a facial recognition modification would be made to the current security system. It would have meant that anyone entering the building, including residents and visitors, would be scanned, and their biometric data would end up in a database.

“Why do they need another layer of security? We are talking about my home. I come here every day. I pay to live here. I was immediately taken aback. My neighbors were either equally concerned, or they didn’t know what this was,” she recalled.

Tranae’ made sure all of her neighbors, especially seniors, knew about the concerns surrounding the facial recognition system. Eventually, they got legal help to resist the modification to the current security system. At that time, she already knew, mostly by following Joy’s work, how biased and unfair the system might be.

“This was infringing on our human rights. We felt this was going to be used as a tool to get us out of these units,” Tranae’ said.

According to Amnesty International, residents who initially campaigned against the use of facial recognition were threatened by the landlord with printouts of their faces from surveillance cameras and told to stop organizing.

The predominantly Black and Brown community of the Atlantic Plaza Towers, where she lives, was eventually able to make the management of the building promise they would not install facial recognition in the complex.

The pressure applied by her neighbors and their team of lawyers influenced the drafting of federal, state, and Senate bills around biometric data collection systems in residential spaces.

‘Reclaim Your Face’ cry in Europe

Biometric mass surveillance, including facial and emotion recognition, is widespread throughout Europe, too. Activists call it a major human rights violation, and the Reclaim Your Face movement is calling for an EU-wide ban on biometric surveillance. The campaign needs to gather 1 million signatures within the next 14 months for the European Commission to put the issue on the EU’s agenda.

There are many examples of the use of biometric surveillance in the EU. Local authorities and schools in France are deploying facial recognition against young people. In Germany, transport companies working with the police are using it against commuters. In Denmark, event organizers working with the police are using it against football supporters. Slovenia has used it against protesters; Spain and the Netherlands against shoppers; Italy against migrants; Poland under the guise of COVID-19 measures; and EU research agencies against travelers and asylum seekers, activists say.

Authorities that are using surveillance technologies claim it’s for the public’s safety. Yet, human rights advocates reckon that people who are being watched actually become less safe.

“The argument of efficiency is a thinly-veiled attempt to divert from the fact that authorities are spending vast sums of public money on technologies that are being aggressively pushed by private companies. Often, they are doing this instead of investing in welfare, social provisioning, education, and other strategies that are much more effective at tackling social issues,” Ella Jakubowska, Policy & Campaigns Officer at European Digital Rights, told Cybernews.

