
The million-dollar question: why can’t AI be both secure and fair?


If crucial decisions like granting loans or setting bail are to depend on AI systems, those systems must be both fair and secure. However, as it turns out, you can’t have it both ways.

“When it comes to explainability, reliability, maintainability, accountability, and security, AI systems are not just underdeveloped. They are totally undeveloped,” reads chapter six of the book “Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What To Do About Them.”

This conclusion was reached by the JASON advisory group – a team of Nobel laureates, scientists, and professors that advises the US Department of Defense (DoD) – in 2017. Six years later, AI still hasn’t been tamed or even tested enough.

“Shouldn’t consumers of autonomous cars demand that the AI system undergo rigorous and verifiable tests? If your doctor uses the AI system for diagnosis, would you not want to know that the algorithm passes a high bar for reliability?” asks the book’s author, Ram Shankar Siva Kumar, a Tech Policy Fellow at UC Berkeley.

On Wednesday, he posed similar questions on the Black Hat conference stage, building his talk about AI testing and regulation on a chapter of the book he co-authored with Hyrum Anderson.

From an economic perspective, it’s pretty evident why AI systems are neither fully secure nor fair.

“The current market economics of AI systems largely reward putting out a product first; it does not reward putting out a vulnerability-free product,” the book reads.

That’s why many are calling for AI regulation. But the first attempts to tame AI with risk management frameworks, such as the NIST AI RMF and the draft EU AI Act, come with what the book describes as “five roadblocks that stakeholders will face when operationalizing these standards.”

“The million-dollar question”

With AI applications finding their way into nearly every aspect of our lives, we need these systems to be both fair and robust against attacks. Unfortunately, these properties seem to be in conflict, meaning you can’t fully have both.

"There are always technical trade-offs you have to make. If you make a system more secure, you may actually make it less fair and more biased," Kumar told Cybernews during an interview before the Black Hat conference.

Algorithmic bias is a well-documented and widely covered topic, drawing attention to systems' flaws that lead to misrepresentations of certain groups, which could result in wrongful arrests.

The problem is that AI, when "attacked," is suddenly not as bright as you thought it was. At least that's the impression I got after dipping my nose into the freshly released "Not with a Bug, But with a Sticker."

As it turns out, even small changes to data can make a significant difference in AI's perception.

For example, an AI model may say with moderate confidence that an image depicts a panda. But add a layer of carefully crafted noise, and it becomes 99% sure the same image shows a gibbon.

Image: panda or gibbon?

Why does that matter? You could, for example, confuse an AI-powered X-ray system into misclassifying a malignant scan as benign simply by adding adversarial noise.
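For readers who want a concrete picture, the panda-to-gibbon flip is the textbook example of an adversarial perturbation, often demonstrated with the Fast Gradient Sign Method (FGSM). The sketch below is a minimal illustration in PyTorch; the pretrained model, the file name "panda.jpg", and the epsilon value are illustrative assumptions, not details from the book or the talk.

```python
# Minimal FGSM-style sketch of an adversarial perturbation.
# The model choice, "panda.jpg", and epsilon are illustrative assumptions.
import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
x = preprocess(Image.open("panda.jpg")).unsqueeze(0)
x.requires_grad_(True)

# Clean prediction
logits = model(x)
label = logits.argmax(dim=1)

# One gradient step that nudges every pixel in the direction that
# increases the loss: the Fast Gradient Sign Method (FGSM).
loss = F.cross_entropy(logits, label)
loss.backward()
epsilon = 0.01  # perturbation too small for a human to notice
x_adv = x + epsilon * x.grad.sign()

# The perturbed image is often assigned a different label, with high confidence.
adv_probs = model(x_adv).softmax(dim=1)
print(adv_probs.argmax(dim=1).item(), adv_probs.max().item())
```

The attacks described in the research literature add refinements such as clipping to valid pixel ranges, but the core idea is just this single gradient step in the direction that hurts the model most.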

Apparently, you can "have explainable AI systems and robust AI systems, but most likely, not both."

Why? I asked.

"That's the million-dollar question," Kumar laughs. There's no definitive answer here. All we know is that there are trade-offs to be made once we become reliant on AI systems.

No silver bullet

While lawmakers are trying to put together frameworks to regulate AI, researchers see several roadblocks.

For example, the EU Artificial Intelligence Act (AIA) could add an estimated 17% overhead to all AI spending for organizations. While behemoths like Nvidia can probably afford it, small and medium companies – the backbone of the AI economy – might not be able to absorb the costs.

There are also multiple questions about how to test AI systems, since no single test can be applied to different systems.

We might also want the AI system to be transparent, especially if it is applied to life-changing decisions like issuing loans.

“Should someone be denied a loan because of their race and gender, the AI explanation should clearly say so, and this would help policymakers take the appropriate legal remedy,” the book reads.

The problem is that a system could be constructed to fool a human reviewer into overlooking racism and other biases, because the discrimination travels through indirect cues rather than the protected attributes themselves.
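As a rough, hypothetical illustration of those indirect cues (not an example from the book), the sketch below uses synthetic data to show how a model trained without the protected attribute can still reproduce the bias through a correlated proxy feature, here a made-up postcode-like variable.

```python
# Toy illustration of proxy discrimination on synthetic data.
# All feature names and numbers are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)              # protected attribute (hypothetical)
zip_code = group + rng.normal(0, 0.1, n)   # proxy strongly correlated with group
income = rng.normal(50, 10, n)

# Historical approvals that directly penalize one group
approved = ((income - 10 * group + rng.normal(0, 5, n)) > 45).astype(int)

# Train only on the "neutral-looking" features; the protected attribute is excluded
X = np.column_stack([income, zip_code])
model = LogisticRegression().fit(X, approved)

pred = model.predict(X)
print("approval rate, group 0:", pred[group == 0].mean())
print("approval rate, group 1:", pred[group == 1].mean())
# The gap persists: the model recovers the bias through the proxy feature
```

An explanation that lists only income and postcode as the deciding factors would look neutral to a human reviewer, even though the postcode is doing the discriminating.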

There’s a lot to take into consideration when building AI regulations. Kumar calls for a holistic approach, looking into all AI properties at the same time, understanding that they’re “deeply entangled and in tension with each other.”

“The real question is not whether we need a framework with a testing regimen and sufficient enforcement mechanisms but how to develop one that inspires trust and promotes continued innovation,” the chapter concludes.

