To improve or to abandon: what should we do about racist AI?
As bias infiltrates the AI systems responsible for credit scoring, hiring, and even criminal risk assessment, a logical question arises: are we giving too much power to a discriminatory technology that should be abandoned rather than improved?
There’s no need to look far for examples of how the biased data used to build artificial intelligence (AI) algorithms can negatively affect people. In 2019, it was discovered that an algorithm used to assess which of some 200 million people in the US were likely to need extra medical care in the future mistakenly favored white patients over black ones.
Amazon’s hiring algorithm, used in 2015, favored men over women, while the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) algorithm was twice as likely to incorrectly predict recidivism for black offenders as for white ones. Google Photos, in turn, had a labeling feature that tagged photos of a black software developer and his friend as gorillas.
But what is the solution? Well, some, like researchers from EDRi (European Digital Rights), argue that it’s “policymakers who must tackle the root causes of the power imbalances caused by the pervasive use of AI systems.”
Others respond with ever more technical fixes for eliminating tech bias. IBM, for instance, introduced cloud-based software that scans algorithms for signs of bias and recommends adjustments. But there are also those – like researchers from the AI Now Institute – who argue that debiasing AI is a lost cause. According to them, some systems should not only be barred from commercial exploitation but should not be designed at all.
We sat down with Miguel Jetté, head of R&D and AI at Rev, a company that offers AI-driven speech recognition and transcription tools, to learn more about bias in artificial intelligence and the ways to tackle it.
The idea of bias in AI, where a system consistently favors certain groups of people, is not new. How can biased AI harm people?
There are different levels of algorithms out there. Some – like the one we’ve built at Rev – are the foundation of a bigger solution: they get integrated as part of a product somewhere else. Something like speech recognition might not necessarily harm anybody on its own. It’s the way it can be used that could be harmful.
Yet “harm” is a strong word because it makes me think of physical harm. Speech recognition could be used in a system that, say, screens interviews – and then you might not get the interview because the system can’t understand you. It’s essential for us to have tools that understand everybody, regardless of race, nationality, gender, and so on.
So it’s more about how these systems are eventually used – without an understanding of their limitations – that could potentially cause harm.
Are all AI systems inherently biased, or does it depend entirely on the algorithms that companies build? In other words, who is ultimately biased – the humans who code, or AI at its core?
AI, in general, is just an algorithm. If I write a piece of software using traditional algorithms, there could be bias – in how someone codes an application or even colors it. We’ve all experienced bias, even in the physical world, so it’s not a new thing in AI.
The difference with AI is that it’s a statistical model that learns from data. Now we have more of an open dialogue around it, but I think people maybe didn’t understand at first how data might lead to bias. If your data has bias in it, your statistical model will too. There are ways to make sure the data you feed in is as unbiased as possible. So I wouldn’t say it’s AI at its core – it’s more the data at its core.
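To make the “data at its core” point concrete, here is a minimal sketch – with a made-up “accent” label and made-up counts, nothing from Rev – of how one might audit whether a training set even represents the groups a model is supposed to serve:

```python
from collections import Counter

def audit_representation(samples, group_key="accent"):
    """Print how each group is represented in a training set.

    `samples` is assumed to be a list of dicts carrying a metadata
    field (here a hypothetical "accent" label) for each example.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    for group, count in sorted(counts.items(), key=lambda kv: -kv[1]):
        print(f"{group:>20}: {count:6d} samples ({count / total:6.1%})")

# Made-up, heavily skewed corpus: a warning sign that the statistical
# model trained on it will be skewed in the same way.
corpus = (
    [{"accent": "US English"}] * 9000
    + [{"accent": "Indian English"}] * 600
    + [{"accent": "Nigerian English"}] * 400
)
audit_representation(corpus)
```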
The rise of artificial intelligence promised to eliminate the errors of human prejudice. Why are we still not witnessing this change?
I think we’re just starting out – so there’s a lot of work ahead of us for sure. Especially in speech recognition, it’s very hard to achieve zero error.
Maybe people get excited about the promise of AI too quickly – it’s always 5-10 years before things are perfect. But I don’t know if they’ll ever be perfect. The way we approach it at Rev – and increasingly, other people are leaning towards this now – is by improving the way we benchmark and monitor those algorithms. In speech recognition, you used to test yourself in a fairly limited scenario, but then deploy in a much wider one.
In speech recognition, the benchmark used to be something called the Switchboard data set, which is people talking on a phone system – back in the ’90s. But that has a very different acoustic fingerprint. So when you deploy the model on a Zoom call or a Google Meet, it behaves very differently. And there’s a danger there if you don’t benchmark yourself on the right thing.
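As an illustration of “benchmarking on the right thing” – the benchmark names, sentences, and scores below are invented, not Rev’s – one could score the same model’s word error rate (WER) separately on each acoustic condition it will actually face, rather than on a single legacy test set:

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical slices: the same model scored on telephone-style audio
# and on video-call audio can behave very differently.
benchmarks = {
    "telephone (Switchboard-like)": [("call me back tomorrow", "call me back tomorrow")],
    "video call":                   [("call me back tomorrow", "call be back to borrow")],
}
for name, pairs in benchmarks.items():
    scores = [word_error_rate(ref, hyp) for ref, hyp in pairs]
    print(f"{name}: WER = {sum(scores) / len(scores):.2%}")
```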
But as I’ve said earlier, now the dialogue is definitely more open, and people understand these things more. And I think that’s critical to building trustworthy AI systems.
The new AI Bill of Rights will guide the design, use, and deployment of automated systems. It focuses a lot on data privacy, the safety of systems, and protections against user discrimination. With AI, how important are these three security checkpoints?
I’m a big fan of responsible AI, and normally, that means that AI should adhere to four principles of responsibility: fairness, explainability, accountability, and respect for privacy.
So I think the Bill of Rights is very interesting, and the things you’ve mentioned are directly related to responsible AI. I do think we need to care more about data privacy for sure, and protection against user discrimination is an obvious one. Any system we build needs to be fair to everybody, especially one that might result in an important decision – your credit score or your employment.
Whenever these things are discussed at a high level like this, it shines a light on the dialogue. Then, customers will start to ask for more of a standard around these AI systems, which overall improves the quality of the system – and the trust between the customer and AI provider.
Why is compliance voluntary? Do you think it should be mandatory instead?
I’m not sure if it should be mandatory, but every time these things get brought up like this, consumers learn a bit more about the use of biased AI. So it opens the door for companies like ours to say that we comply with the AI Bill of Rights. So at least for a company that wants to do the right thing, it gives you an opportunity to build trust with your customers.
I don’t know if you could make something like this mandatory – it would be very hard to police. But at least, hopefully, in a free-market kind of way, customers will gravitate towards companies that respect those things. I think we’re seeing people get smarter and smarter about the technologies they adopt, and that’s only because they’re learning about the negative side effects. So I’m leaning towards it being okay for compliance to be voluntary – and letting the market force companies to adopt it because customers demand it.
In the US, there is the Algorithmic Accountability Act of 2022, but there is still no single general regulatory framework for AI systems. Why is that – and do you think one is required?
That particular act applies to machine learning and AI as well. In reality, they’re just algorithms – there’s no magic to them, so I think they are already kind of accountable. They’re just a lot more complicated to track and monitor, I guess, but they fall under it all the same.
So I think it should be greatly encouraged, but I don’t see how it could be mandatory.
A report from the AI Now Institute suggests that debiasing some sorts of AI – like facial recognition – is not enough. Instead, they argue that certain types of AI – those that can profile people for “criminality” based on facial features, identify gay or black people, or even determine the professional qualities of a person via micro-expressions – simply should not exist. Do you agree that there are types of AI that should be completely abandoned?
Thankfully, I work in a field where this doesn’t apply as much – speech recognition can work for everybody, and it can actually be a good thing if we try as hard as we can to remove bias. There are fields that are definitely way more questionable. Those – perhaps – should not be commercially available. But we need to continue to advance and research that AI, because it’s impossible to stop people from using those things in a bad way. It’s almost like saying that crime is a bad thing, so we shouldn’t have police – even though they’re the ones working to solve crimes.
One example of this is deepfakes and fake voices. At some point, I really thought we shouldn’t work on them because they help people commit fraud. But at one conference, I met a whole group of researchers who were studying the technology and building fake-voice detectors out of it. There’s always a bad side and a good side to it. I think if we’re doing research in those fields, we’re at least deepening our understanding of them, and we have ways to understand their bad sides. But surely, I agree that some of those things shouldn’t be used commercially.
What are some of the practical ways to address bias in AI?
At Rev, we’ve built a very interesting “human transcription marketplace,” where people we call “Revvers” transcribe audio, but they do it on top of our speech recognition. So those humans are finding the errors that our system makes and fixing them every day. It creates a feedback loop where the system can learn new words and accents, and hence continuously improves itself. It’s difficult for others to build this – it took us a long time – but I think having some sort of way to test your bias and improve upon it is critical.
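As a rough sketch of that feedback loop – a simplification for illustration, not Rev’s actual pipeline – one could diff the machine output against the human corrections and queue the corrected pairs, along with any newly surfaced vocabulary, for retraining:

```python
def collect_corrections(machine_transcripts, human_transcripts):
    """Return (machine, human) pairs where a human made a correction,
    plus the set of words the model never produced itself."""
    training_queue, new_words = [], set()
    for machine, human in zip(machine_transcripts, human_transcripts):
        if machine != human:
            training_queue.append((machine, human))
            new_words |= set(human.split()) - set(machine.split())
    return training_queue, new_words

# Toy example: one transcript needed a fix, so it goes back into training.
machine = ["the acoustic fingerprint is diffrent", "benchmark on switchboard"]
human   = ["the acoustic fingerprint is different", "benchmark on switchboard"]
queue, vocab = collect_corrections(machine, human)
print(f"{len(queue)} corrected pair(s) queued; new words: {sorted(vocab)}")
```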
One really easy thing people can do is create a much more varied data set for their use case, monitor how it behaves in the real world, and act on it when they discover that the model isn’t working well. You can find examples in the news of companies that didn’t do that, and it can have very bad consequences.
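A minimal sketch of “monitor how it behaves in the real world and act on it”, assuming production results can be tagged with a group label – the labels, error rates, and 5% tolerance here are purely illustrative:

```python
from statistics import mean

def flag_underserved_groups(results, tolerance=0.05):
    """`results` maps a group label to observed error rates (e.g. per-file
    WER). Returns the groups doing noticeably worse than the average."""
    per_group = {group: mean(errors) for group, errors in results.items()}
    overall = mean(per_group.values())
    return {g: r for g, r in per_group.items() if r > overall + tolerance}

# Made-up production numbers: the flagged group is where to act first,
# e.g. by collecting more varied training data for it.
production_results = {
    "US English":       [0.08, 0.10, 0.09],
    "Indian English":   [0.16, 0.19, 0.18],
    "Nigerian English": [0.21, 0.24, 0.22],
}
for group, rate in flag_underserved_groups(production_results).items():
    print(f"Investigate: {group} averages {rate:.1%} error")
```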
Monitoring and acting on it is part of accountability. If you have ways to hold yourself accountable – for us, that’s the partnership with our customers – your system continuously improves.
The more people understand what can happen when you deploy these systems, the more careful they’ll be and the more they’ll use them in a good way. A lot of these algorithms hold real promise if used properly.