
No one in the pharmaceutical industry would dare say that going through clinical trials harms their innovation. Somehow, the tech gurus developing new generative artificial intelligence (Gen AI) models do, Gemma Galdón-Clavell, a tech policy analyst, tells Cybernews in an interview.
As Cybernews reported recently, California is pushing legislation to force companies building large artificial intelligence (AI) models to perform safety testing. The industry is panicking and, predictably, talking about painful hits to innovation.
Under the new AI safety bill, Senate Bill 1047, companies that spend more than $100 million on training large AI models would be forced to conduct thorough safety testing. If firms don’t, they would be liable if their systems led to a “mass casualty event” or more than $500 million in damages in a single incident.
The industry is unhappy. For instance, Clement Delangue, the CEO of Hugging Face, called the bill a “huge blow” to both Californian and US innovation. TechNet, a tech trade group, is preaching caution, claiming that moving too quickly could stifle innovation.
However, in an expansive interview with Cybernews, Gemma Galdón-Clavell, an expert in technology ethics and algorithmic accountability, says that is not the case – sufficient attention to detail is the only way for the AI industry to gain people’s trust.
As the founder and CEO of Eticas.ai, a software company focused on AI auditing, Galdón-Clavell keeps repeating: “You can’t code a society you don’t understand.”
Key takeaways
Attention to detail is the only way for the AI industry to gain people’s trust.
Five years from now, we'll look back and wonder how on Earth we were using non-audited AI.
Even the best players in the industry are investing in tools that don't improve their tech's performance.
With AI entering the mainstream, people are going to start appreciating the human experience more.
If we see AI being appropriated into military operations, people will be imprisoned for crimes against humanity.
Countries competing for global respect also compete to set the guardrails around AI systems.
Innovation needs to incorporate safety
I'd like to start with California, where tech firms are resisting AI safety initiatives. It’s actually quite closely related to the discussions around the AI Act that we had here in the European Union. In both cases, tech companies have been and are pushing back against closer regulation, safety testing, and auditing. They're claiming all this would hurt innovation. Do they have a point?
Not at all. Not in democratic contexts. That's the main distinction. We live in democracies where the economic system is capitalism – and that means there are hard limits to profit-making. The system is widely accepted in the West and seems to work. It has brought a lot of prosperity because it has managed to strike that balance between innovating and protecting.
No one in the pharmaceutical industry would dare say that going through clinical trials harms their innovation. Everyone understands that having a vaccine go through clinical trials is part of making the vaccine better. No one would say that cars without seatbelts are better than cars with seatbelts.
Innovation needs to incorporate protections for people, and that works for all industries. Basically, our economic systems are based on the idea that you can make a lot of money as long as you don't cause harm. The only thing we ask is that you show and prove that you do not cause harm before you launch something on the market. That applies to toy makers, food providers, and the pharmaceutical industry – and it works for everyone.
And then all of a sudden, the tech industry is like, oh, unless I'm free of regulation, I cannot innovate. Well, go talk to your friends in the pharmaceutical industry, in the construction industry – everyone's innovating and still understanding that there are rules that you need to abide by.
That's what makes innovation interesting. I grew up with the idea that innovation is about doing things that are worth having within a set of constraints. Building a plane that doesn't have to fight gravity – that's not innovation. What’s really interesting and challenging is how you make a plane fly in conditions of gravity.
Go talk to your friends in the pharmaceutical industry, in the construction industry – everyone's innovating and still understanding that there are rules that you need to abide by.
Gemma Galdón-Clavell
Regulation is the same. How can you make a good algorithm in conditions of respect for people's rights? That's it. So, I don't think that they have a case, and I think that the history of democracies proves that they don't have a case. In the 19th century, you could buy cocaine in pharmacies. We don't want to go back to that today.
We are lucky: we have worked hard and spent a lot of effort developing systems that ensure that when I go to the pharmacy, what I buy cures me and doesn't kill me.
Would you agree that over the last year or two, since the whole ChatGPT boom started, politicians – especially on the left, who were and still are criticized for not cracking down hard enough on social media companies – have been trying to be more aggressive in going after Big Tech? The FTC is also doing quite well. Is there a trend, a tendency to talk louder about the more sensitive issues?
There's a total waking up in the US. One of the reasons I moved from Europe to the US is that I saw how the dynamics and the conversations were changing. There's more awareness on the part of policymakers and on the part of society. There's also a longer history of failures that have caused harm.
Hopefully, we can also promote an industry that cares about impact and looks for a competitive advantage in building responsible AI.
The fact that most firms in the AI and tech industry have tended to build monopolies has stopped them from really exploring what brings value to the brand, what builds trust with the client, and what brings customers to them and not to anyone else.
In any other field, we find that unless there's trust, there's no widespread adoption. If we want the wide adoption of AI tools, trust is a prerequisite. There are more and more actors in the US realizing this.
We've seen flashes of this over time – for instance, Apple has always used privacy as one of the reasons why you pay more money for their products.
Now, OpenAI and Anthropic are presenting themselves as the Apple of the large language models (LLMs) and saying that they actually care. We're beginning to see competition along those lines.

It's still very nascent, but I definitely see increased awareness in the US at all levels, and I foresee a lot of change in the next few months and years as to how we develop AI and assess whether AI works.
Five years from now, we'll look back and wonder how on Earth we were using non-audited AI in 2023 or 2024 – in the same way that we’re now remembering that we used to buy cocaine in pharmacies.
“The industry is going to change”
Some still argue that the whole AI conversation is overhyped and that we’re dealing with snake oil. To me, it sometimes seems that even though the Sam Altmans of the world are talking about the safety of their models, they're still doing some risky stuff that they're probably going to apply on a wide scale without testing or auditing it very thoroughly beforehand.
Totally. But that's why tools like auditing are so important to differentiate between the charlatans and the ones actually taking practical steps to ensure that their technologies are better.
There's definitely a lot of snake oil. Right now, the snake oil is very much focused on the LLMs, but there are other AI dynamics, such as the recommender systems that decide who gets a loan, a mortgage, a benefit payout, or public housing.
Yes, there's a lot of hype when some big companies speak of responsible AI and even auditing, but we also see a lot of progress in incorporating these tools and a lot of desire to make sure that we, humans, are making the decisions.
Someone asked me the other day: when companies have been so successful in rejecting anything around climate change, why do you think responsible AI has a chance at actually convincing the industry that we need to do something about this?
I think the big difference is that climate change, unfortunately, is a net loss – you need to give up profits and spend money to do better. There's a lot of reluctance from the industry.
However, when we talk about bias and accountability in AI, what we find is that audited AI is better AI. It works better and it makes better decisions. That actually helps us build our argument.
If you don't audit your systems, you are making bad decisions. Basically, you're giving the loans to people who should not be receiving the loans. You are giving jobs to people who are not the best candidates. You are giving cancer treatments to people who don’t need to get that specific cancer treatment, and other people would benefit more from that treatment.
We are seeing a lot of really bad outcomes of the AI around us that hasn't cared enough about bias to build guardrails around it and to audit. There’s an underlying case for auditing AI and responsible AI, so I actually think the industry is going to change. It's just about how long it's going to take.
The companies are not going to self-regulate, though, even if they’re keen to do that, so I guess you mean independent audits?
Absolutely. We see ourselves as independent auditors. Here’s another analogy: companies can have accountants that make sure that things are audit-ready. But the auditor is always someone who is independent and who is bound by the rules of the auditing industry, not the contract with the client.
At Eticas AI, we have promoted the creation of the International Association of Algorithmic Auditors. We are working with policymakers to develop the guidelines that will determine how auditing should be conducted. We need to have that kind of independent oversight that validates what companies claim they have done.
And again, that's a good thing for the industry – for companies that take this seriously and take proactive steps to protect outliers in the datasets, to ensure that the outcomes are not biased and that no one is harmed.
We want to encourage the good players to have the visibility they deserve. When they get an A in our audit, they can go and tell the world, hey, it's not just us telling you that you can trust us.
Don’t you think some firms like Google, Meta, or Amazon are too big to care? They can be audited again and again, but then they’ll send an army of lobbyists out into the world to deal with whatever obstacles arise, won’t they?
I think they care. The question is, do they do something about that concern? That is proving more difficult. It's not only the industry that is failing here – policymakers cannot seem to promote regulations that the industry understands.
We've seen cases where policymakers have asked for data that they don't need. Companies didn’t see why they should give away their protected data to policymakers who don't know what an audit is or how to inspect these systems.

We’re now in an interim stage in which we’re getting better definitions and better practices, but we’re not there yet. Many companies are saying, let's get some AI governance software and make sure we're covered. But that doesn’t bring any value.
It's more of a cosmetic thing. Part of our responsibility as AI auditors is to teach policymakers and the industry what an audit is. And it's not just AI governance. It's not only about assessing risks and having staff dedicated to overseeing risks – it's also about having the data to prove that your system is not causing harm, that it is not causing more harm to women or people of color, and that it is making the decisions it's supposed to make based on indicators that are reasonable, legal, and relevant.
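To make that concrete, the evidence an audit looks for can start with something as simple as comparing outcomes and error rates across demographic groups in a decision log. The sketch below is purely illustrative and is not Eticas’s methodology; the record fields (group, approved, should_approve) are hypothetical.

```python
# Purely illustrative sketch of a group-level bias check on a decision log.
# The field names ("group", "approved", "should_approve") are hypothetical.
from collections import defaultdict

def audit_decisions(records):
    """Compare approval rates and false-negative rates across groups."""
    stats = defaultdict(lambda: {"n": 0, "approved": 0, "positives": 0, "missed": 0})
    for r in records:
        s = stats[r["group"]]
        s["n"] += 1
        s["approved"] += r["approved"]           # booleans count as 0/1
        s["positives"] += r["should_approve"]
        # Missed case: the system rejected someone who should have been approved.
        s["missed"] += (not r["approved"]) and r["should_approve"]
    return {
        group: {
            "approval_rate": s["approved"] / s["n"],
            "false_negative_rate": s["missed"] / s["positives"] if s["positives"] else 0.0,
        }
        for group, s in stats.items()
    }

# A large gap between groups in either rate is the kind of disparity
# an independent auditor would flag and ask the vendor to explain.
sample = [
    {"group": "A", "approved": True,  "should_approve": True},
    {"group": "A", "approved": False, "should_approve": True},
    {"group": "B", "approved": True,  "should_approve": True},
    {"group": "B", "approved": True,  "should_approve": False},
]
print(audit_decisions(sample))
```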
Right now, we don't have a clear translation of those concerns into data. The auditing space is still occupied by compliance teams, and there’s not enough space for development teams. That's why we, as auditors, build software – to improve how systems are built.
With AI, we don't have that layer of understanding as to how the systems are performing because we are going with compliance exercises that are not technical enough. In medicine, a clinical trial is a medical exercise. It's not just a legal exercise.
We need to learn from other fields and see how to incorporate good practices into how we audit and inspect AI systems.
We have a lot of really well-intentioned people, but even the best players in the industry are, unfortunately, investing in tools that don't improve their technology's performance.
On one of the panels you took part in, you mentioned that tech leaders seem a bit more arrogant than leaders in other industries like medicine – that they feel superior to the rest of us, that they know best. Do you think that arrogance helps tech leaders sound more convincing?
I don't have anything against arrogant people: their contribution to society might be amazing, so they can be arrogant. The problem is when you are arrogant because society has attributed to you knowledge that you don't actually have.
With regard to the tech industry, we've trained society to see such people as demigods, as people who know everything. Interestingly, engineering is one of the most arrogant professions, without justification.
Other professions are very interdisciplinary – you may be an anesthesiologist, but you understand that to be a good anesthesiologist, you need to work well with the heart surgeon.

In most other professions, people are asking questions, like, what did I miss? Because their knowledge depends on an understanding of the fields of work they’re not experts in.
In tech, it's just engineers. It's a profession that hasn't had to experience dependency on others, and so we see a lot of unjustified arrogance – besides, these people have proven not to know better.
I don't think Mark Zuckerberg, when he came up with Facebook, knew more than others about the dynamics of addiction to social media platforms, for instance. He could have benefited from having those conversations and understanding what the negative externalities of what he was building could be and mitigating them early on.
He didn't because he didn't even know these dynamics existed in other fields that are more regulated and more mature.
Existential threat of AI isn’t real
Do you believe that powerful AI systems or models could be catastrophically dangerous, or do you see these fears as overblown? I was thinking the other day that these chatbots – which is what the hype is really about – aren't that special, to be honest. But how far is humanity from real dangers, if at all?
There is nothing in my auditing work that leads me to believe that a jump to consciousness or an existential threat could or will evolve. It's just not there. It's not a matter of time – it’s not there… It's just not there.
When you have large language models based on correlation, thinking that causation will emerge out of enough correlation is a logical jump that is completely unjustified. Will I fly if I run? No, because these are two different dynamics.

The first wave of automation took away jobs that, to be fair, could be done by robots. Now, Gen AI is threatening the arts – creative writing, painting, and animation. What does that tell us? Where does that leave us as humans?
Yeah, there's a saying: “I wanted AI to do the laundry and sweep the floor so that I could write and paint, and now I have AI that writes and paints while I do the laundry and sweep the floor.”
I'm fascinated by AI because it poses many interesting questions about us as a society. Take a poem written by an AI system – is it the same as a human-written poem? For some people, it will be. For some people, it won’t. To me, it’s hollow, because poetry is about communicating the human experience.
But some people like things that I would never pay money for or spend time on. There’s a space for everyone. There might be an audience for AI poetry.
I’m not going to be part of that audience. The fact that part of the audience goes with AI poetry may mean there's less of an audience for human poetry, but it forces us to think about what it is we value in art. What do we get out of art? I think that's a very personal question.
There's also the context that influences who we are and what we do. I'm originally Spanish, and during our Civil War, a whole town in Northern Spain was bombed by the Nazis.
Picasso captured that in a very famous painting called Guernica. If you took all of Picasso's work up to 1937, trained an AI model on it, and asked the model what Picasso would paint next, it would just give you a summary of the main patterns in his past works.
But then 1937 happens, and there's a fascist bombing of a Basque town. Picasso was moved by that and painted Guernica.

This integration between context, our lived experiences, our art, and how we express it is not something that can be achieved by reproducing patterns of the past.
Is there a place for just reproducing patterns of the past? Yes. Is that everything? I don't think so. But I do think that we're going to have to make room for content and art that has a lot less substance.
Still, we've been doing that for quite some time with the internet. The news that we read today is not the news that we read 30 years ago before the internet. We are now used to a lot of crap content.
People are going to start appreciating the human experience more. Whenever a new technology emerges, we hear that it's going to change everything. But not many technologies achieve that – actually, most technologies end up changing things just a little bit.
15 years ago, we were supposed to be taking drones instead of taxis by now. Drones were supposed to change the face of our cities, our interactions, the way we moved around. But today, drones fix power lines in rural areas – that is the actual reality.
We see that over and over again. Blockchain and cryptocurrencies were supposed to change the economy, and states were going to become powerless. But they don't have the revolutionary impact that was promised.
The tech industry makes very large promises and, let's not forget, lives off venture capitalist (VC) money, so there’s an economic incentive to sell snake oil. As a society, we are the victims of those promises, and we soon become disillusioned.
AI and warfare are two areas where technology can become unpredictable and cruel. What happens if humans are out of the loop and war is only waged by machines?
If we let AI systems make their own decisions – which, again, they do based on patterns from the past – we will get a lot of really bad decisions. In war, we could choose the wrong targets.
We have a legal framework that is very specifically targeted at minimizing errors, and everything we developed after the Second World War is about protecting human life.
Yes, you can declare war on others, but you need to minimize error and civilian casualties. AI is no good for that. If we see AI being appropriated into military operations, then ten or 15 years from now, people will be imprisoned for crimes against humanity.
We need to discuss error rates more when examining AI systems that make autonomous decisions. We cannot control error rates, false positives, or false negatives.
Without more transparency – which is what you get through auditing – you should not be using those tools. If you cannot guarantee that the person targeted by those tools is going to be someone who needs to be targeted and who can be legitimately targeted, you should not deploy those tools.
And what we see right now is unaudited AI. We've audited, for instance, AI tools in the criminal justice system that determine whether someone can leave prison because they're no longer considered a risk. It’s a very sensitive decision. But even though we’re auditing fairly simple deployed models, we find them to be absolutely random.
Countries that are competing for global respect are also competing to set the guardrails around AI systems.
Gemma Galdón-Clavell
These tools are making decisions about people's freedom, about the risk that someone will go on to rape another woman, but it’s basically like flipping a coin because there’s no robust way of making the decision. When you incorporate these automated systems, the decisions become random.
Without auditing, we won’t even be aware that we’re causing this situation. It's like approving planes to fly without knowing whether they’ll fall out of the sky. Auditing allows us to have that information with regard to AI – we have visibility over what happens.
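One generic way to test the “flipping a coin” claim – again, a hypothetical sketch rather than Eticas’s actual audit procedure – is to check whether a risk score separates the outcomes it is supposed to predict any better than random guessing, for instance by computing a simple AUC and comparing it to the 0.5 a coin flip would give.

```python
# Illustrative check: does a risk score beat a coin flip?
# An AUC close to 0.5 means the score barely separates outcomes,
# i.e. the decisions it drives are effectively random.
from itertools import product

def auc(scores, outcomes):
    """scores: model risk scores; outcomes: 1 if the predicted risk materialised, else 0."""
    pos = [s for s, y in zip(scores, outcomes) if y == 1]
    neg = [s for s, y in zip(scores, outcomes) if y == 0]
    if not pos or not neg:
        raise ValueError("need both positive and negative outcomes")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p, n in product(pos, neg))
    return wins / (len(pos) * len(neg))

# Hypothetical audit data: scores the tool assigned vs. what actually happened.
scores = [0.9, 0.2, 0.6, 0.4, 0.8, 0.3]
outcomes = [1, 0, 0, 1, 1, 0]
print(f"AUC = {auc(scores, outcomes):.2f} (0.5 would be a coin flip)")
```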
To stay with AI warfare: let's say China or Russia are developing AI systems. If I were the Pentagon, I would think that if I hold back because something isn’t yet safe, China is still going to build it and gain an advantage over us, right? This has to come into consideration for policymakers as well, doesn’t it?
Yeah, but that's not what we're seeing. What we're seeing right now is competition for regulation. China is regulating Gen AI in a much more robust way than the US.
Countries that are competing for global respect are also competing to set the guardrails around AI systems. I cannot speak for Russia but I don't think that Russia is in a competition to get the world’s respect.
When countries play on the global scene, they often want to get that clout, and China is clearly investing a lot in the Global South to create all kinds of relationships, also technologically. They're very aware that building trust in their technology is part of those relationships.
Without trust, there is no wide adoption. I actually fear that unless we start investing in auditing, transparency, and accountability, people will not trust those systems even when they do things that are proven to be good. That's my biggest fear.
AI has a lot to offer to society. We are killing the potential of AI by making really bad choices around trust and safety.