Devoid of emotions, machines impartially serve humanity, don’t they? Regrettably, as products of human creation, computers reflect our ingenuity and imperfections all the same.
Envision a scenario where decision-making is entrusted to machines, and robotic entities are tasked with determining matters like granting loans and diagnosing diseases. The promise lies in the potential to eradicate biases. Yet, the reality paints a different picture, where these non-sentient machines pass judgments with a severity reminiscent of high school days.
Unfortunately, this is not some sci-fi movie premise – it’s the world we live in. AI’s potential to boost productivity and profit leads to rushed business decisions with little to no regard as to how these machines enforce the biases we’re already drowning in.
Welcome back to another installment of our bi-weekly podcast series, Through a Glass Darkly. In this episode, we delve into the realm of AI bias. How is it possible that machines, driven solely by data and algorithms, exhibit biases and prejudices akin to our own? Join us for an insightful 52-minute exploration where we tackle questions such as:
- Who creates AI systems and for whom
- Who’s training AI systems
- The Stone Age of AI systems
- Would we want an AI system to decide whether we should get a loan?
- The rushed adoption of AI to boost profits and productivity
- How AI reinforces stereotypes that are already embedded in society
- Why current AI systems are biased towards white people over people of color
- AI systems simply mirror societal biases. Can we do something about it?
We don’t think a sentient AI application exists – meaning that none of the software you interact with, no matter how “smart,” actually has a mind of its own. This fact has both pros and cons. The pros – well, it won’t take over the world in the near future. And the cons? It’s being built by people, who feed it the written record of our history – which, as we know for a fact, is little more than a fairy tale seasoned with facts and written by the winners.
Well, that might be an overstatement, but I wanted to highlight that we, people, are creating what could one day be perceived as a whole new species – robots. We craft them in accordance with our perception of equity and morality, yet often assume those machines are devoid of bias. It’s crucial to recognize that certain preconceptions and biases are entrenched in our cultural fabric and may therefore manifest themselves through our creations, including machines.
Consider, for instance, a seemingly innocent question posed to a kid: “Your braids are so beautiful. Was it your mother who did your hair?” Benign on the surface, it’s actually loaded with stereotypes, even if we’re not aware of them when asking it.
We infuse our perspectives into the tasks we undertake quite naturally and often without conscious awareness. How does this phenomenon unfold in the realm of AI?
- AI is trained on data that is “dirty” and doesn’t really represent the world. Pedestrian-recognition systems in cars struggle to detect people of color and children, most likely because they were trained on datasets where these groups were underrepresented.
- Many machine learning models learn by consuming people’s input and swallowing everything on the internet. It’s as if I gave my kid a plate of smelly garbage for lunch and then complained about her nasty breath.
- AI is trained by people. Job postings like AI trainers are becoming increasingly popular, highlighting that people are behind AI systems.
- Data is cleaned by people, too. Data labelers, for example, are asked to tag pictures so systems can learn to recognize objects. And guess what – people can be very mean, labeling overweight people as losers and thereby introducing biases into a machine.
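To make the underrepresentation point concrete, here is a deliberately simplified Python sketch. The numbers and groups are made up for illustration: imagine a detector that, having seen almost no examples of an underrepresented group B, detects every group-A pedestrian and misses every group-B one. The aggregate metric still looks great.

```python
# Toy illustration, not a real detector: 950 pedestrians from a
# well-represented group A and 50 from an underrepresented group B.
# The hypothetical model spots all of A and none of B.
dataset = [("group_a", True)] * 950 + [("group_b", False)] * 50

# Overall accuracy: fraction of all pedestrians the model detected
overall_accuracy = sum(hit for _, hit in dataset) / len(dataset)

# Accuracy on the underrepresented group only
group_b_hits = [hit for group, hit in dataset if group == "group_b"]
group_b_accuracy = sum(group_b_hits) / len(group_b_hits)

print(f"overall accuracy: {overall_accuracy:.0%}")  # 95% - looks impressive
print(f"group B accuracy: {group_b_accuracy:.0%}")  # 0% - a total failure
```

A headline figure of 95% hides a 0% detection rate for group B – which is why per-group evaluation matters far more than a single aggregate score.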
Why does any of this matter, you ask? Well, the problem is that, according to some researchers, AI systems are underdeveloped, and we’re still in the Stone Age when it comes to their security. Regardless of that reality, various businesses – from media houses to financial institutions – are rushing the adoption of AI systems to boost productivity and, therefore, profit. Soon, machines will:
- Decide whether you’re eligible for a loan
- Read your medical history, interpret symptoms, give you a diagnosis, and offer a treatment
- Become your psychiatrist, judge your academic paper, and God (or machine?) knows what else
We might not be able to turn the tide, since the mainstream adoption of AI applications is already happening, but for starters it’s enough to be aware that any machine might be just as faulty and corrupted as the people behind it.
Knowledge is power here, and exercising your freedom of expression becomes even more important. Always seek explanations behind any decision at any company, get a second opinion from another doctor, take it to the streets or social media to discuss the injustices.
What does “through a glass darkly” mean?
While our primary goal is to maintain objectivity, we acknowledge our inherent humanity as we strive to provide our readers, viewers, and now listeners with a comprehensive understanding of the ever-expanding cyber landscape. This is precisely why we chose the name for our podcast, "Through a Glass Darkly," drawing inspiration from the biblical expression used by the Apostle Paul, signifying a limited clarity when it comes to envisioning the future.
Our discussions often involve speculation about what lies ahead, eliciting both excitement and trepidation regarding the tech evolution or revolution. As we maintain a strong emphasis on cybersecurity, we find ourselves naturally inclined toward a somewhat "doomsday" perspective, perceiving the world through lenses shaded in darkness rather than rose-tinted hues.