Will explainable AI solve the black box problem?


Almost every aspect of our lives is already shaped by AI and machine-learning algorithms. Everything from a credit card or job application to whether our next TikTok video goes viral can be determined by automated decision-making. But what happens when algorithms make decisions and humans cannot understand the reasoning behind them?

Black box AI models can find patterns in vast quantities of data and produce a seemingly logical decision. But a black box is, by definition, an opaque system that does not reveal how it reaches its conclusions. Being asked to blindly trust a decision that significantly affects our lives, without any insight into the reasoning behind it, is the essence of AI's explainability problem.

AI also has a transparency problem. A recent report by the UK government's Centre for Data Ethics and Innovation revealed that only 14% of 2,000 job seekers surveyed would know if AI had made an automated decision on their application. More worryingly, only 17% would know how to appeal against a decision that they deemed unfair.


If a job, mortgage, or loan application is rejected on the basis of an algorithm's output, the applicant is entitled to know why. Last year, David Heinemeier Hansson, the creator of Ruby on Rails, famously took to Twitter to call Apple's black box credit algorithm sexist after it offered him 20 times the credit limit offered to his wife.

[Image: David Heinemeier Hansson's tweet about Apple's credit algorithm]

However, it would be wrong to point the finger of blame solely at AI. Algorithms often provide an unflattering reflection of conscious or unconscious human biases. Many now demand that we dare to open the black box and build transparency into algorithms so that we can learn to trust, rather than fear, their outcomes.

The European Commission has already outlined plans to restrict black box AI systems that humans cannot understand. The hope is that by making AI more explainable, we can build more transparent, traceable systems with the much-needed guarantee of human oversight.

Thinking outside the black box with explainable AI

Emerging technologies have introduced as many complexities as the problems they set out to solve. Thankfully, there is now a renewed focus on removing discrimination to build a fairer society. Explainable AI aims to demystify automated decision-making by surfacing, alongside each outcome, the reasoning that produced it, so that the result can always be justified.
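To make "surfacing the reasoning" a little more concrete, here is a minimal sketch of one widely used explainability technique, permutation feature importance, applied to a toy credit-approval model. This is an illustration only, not any vendor's product: the feature names, the synthetic data, and the decision rule are all invented for the example.

```python
# A minimal sketch of permutation feature importance on a toy "credit decision"
# model. Feature names and data are made up for illustration; a real system
# would use audited, domain-specific data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(50_000, 15_000, n),   # income
    rng.integers(300, 850, n),       # credit score
    rng.integers(0, 40, n),          # years of credit history
])
# Toy target: approval loosely driven by credit score and income.
y = ((X[:, 1] > 600) & (X[:, 0] > 35_000)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, score in zip(["income", "credit_score", "history_years"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

The output ranks which inputs the model actually relied on, which is exactly the kind of evidence an applicant, or a regulator, could use to question a decision.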

The Information Commissioner's Office (ICO) began a consultation earlier this year to ensure organizations are held accountable when using AI to make decisions about individuals. As the digital landscape continues to evolve, we can expect hefty fines for organizations unable to explain decisions made by their AI systems.

Contrary to popular opinion, we are a long way from an AI takeover.


Many of AI's capabilities have been greatly exaggerated. Like us, it can make mistakes and unwittingly learn biases from its surroundings. There are plenty of examples of algorithmic racism and sexism that reflect the dark side of humanity, but it is humans who fed those prejudices, through biased data and design choices, into the algorithms.

A combination of decision-making processes that humans do not understand and a lack of transparency has held back the concept of hybrid intelligence. Google is helping to overcome the challenge with a set of Explainable AI tools that let developers inspect and monitor their AI and ML models visually.

Developing tools and frameworks for explainable AI is a massive step in the right direction. But we are still a long way from commercialized solutions at scale. The inconvenient truth is that current AI models were never designed with transparency and explainability in mind.

Trust and transparency

It is hoped that explainable AI will improve our understanding of how these systems reach their decisions and of what causes biased results. Most importantly, it should help us identify the steps required to fix those problems and prevent them from happening again.
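As a loose illustration of what spotting a biased result can look like in practice, the sketch below computes a simple disparity metric, the ratio of approval rates between two groups, on hypothetical decision logs. The column names and the 0.8 threshold (the common "four-fifths" rule of thumb) are assumptions for the example, not a universal standard.

```python
# A minimal sketch of a basic fairness check on hypothetical decision logs.
# Column names ("group", "approved") and the 0.8 threshold are assumptions
# used for illustration only.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,    1,   0,   0,   0,   1,   0,   1],
})

# Approval rate per group, and the ratio of the lowest to the highest rate.
rates = decisions.groupby("group")["approved"].mean()
disparity = rates.min() / rates.max()

print(rates)
print(f"disparate impact ratio: {disparity:.2f}")
if disparity < 0.8:
    print("Warning: approval rates differ substantially between groups; investigate.")
```

A check like this does not explain why a model is biased, but it flags where to start looking, which is the first step the paragraph above describes.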

The future of AI requires humans and machines to work seamlessly together, enhancing each other's strengths. Chatbots are a good example: AI cannot replace human agents, but it can support them. The success of collaborations like this will depend on building trust rather than fear.

When black box AI makes life-changing decisions about individuals, it is simply not acceptable that no one can explain how it reached its conclusions.

The good news is we are collectively asking the right questions and searching for the answers that will help society overcome the black box problem and avoid sleepwalking into a so-called AI apocalypse.

There is an opportunity for AI to make a difference in society by increasing fairness and inclusion. But if we don't know how AI and machine learning systems work, it will be difficult for anyone to overcome their trust issues. It's a long road ahead, but by daring to open the black box and understand what we have created and how it works, at least we are back on the right track.
