Experts cautiously welcome Biden’s order on AI safety, but risks still lurk


US President Joe Biden’s new executive order – which seeks to reduce the risks posed by artificial intelligence (AI) – demonstrates that this is now a national priority, experts say.

The executive order states the developers of AI systems that might endanger US national security, the economy, public health, or safety must share the results of safety tests with the US government, in line with the Defense Production Act, before they’re released to the public.

The order, which Biden signed at the White House, also directs agencies to set standards for testing, as well as addressing any related chemical, biological, radiological, nuclear, and cybersecurity risks.

ADVERTISEMENT

“To realize the promise of AI and avoid the risk, we need to govern this technology,” Biden said. "In the wrong hands AI can make it easier for hackers to exploit vulnerabilities in the software that makes our society run.”

The thorough executive order was welcomed in the industry, albeit cautiously. Most experts say that Washington is finally recognizing AI as a national priority, and they’re pleased that safety tests will now be mandatory.

Others, major tech platforms among them, complain about red tape and worry about bad actors – both private and state-sponsored – who are obviously not bound by any executive orders.

Broad impact expected

“The executive order on AI that was announced today provides some of the necessary first steps to begin the creation of a national legislative foundation and structure to better manage the responsible development and use of AI by both commercial and government entities, with the understanding that it is just the beginning,” Michael Leach, compliance manager at Forcepoint, the world’s largest privately owned cybersecurity company, said.

Leach is especially glad that Biden’s administration is concerned about the protection of individual privacy by ensuring the safeguarding of personal data when using AI.

“The new executive order will hopefully lead to the establishment of more cohesive privacy and AI laws that will assist in overcoming the fractured framework of the numerous, current state privacy laws with newly added AI requirements,” said Leach.

ADVERTISEMENT

To Randy Lariar, practice director of big data, AI, and analytics at Optiv, a cyber advisory firm, it’s important that Biden’s order increases the focus on the National Institute of Standards and Technology to complement its existing framework by developing safety standards for AI.

“The biggest takeaway, though, is that the executive order demonstrates that AI is a national priority – not an issue limited to the big tech companies. It has a broad impact, affecting consumers, students, small businesses, and many other interest groups,” said Lariar.

“It’s clear the administration has consulted with experts across the public and private sector to establish a very strong plan to achieve AI safety and security, and today’s development should be seen as a step in the right direction.”

Tech companies not happy

Indeed, major tech firms aren’t particularly happy. NetChoice, a national trade association that includes major tech platforms, described the order as an "AI Red Tape Wishlist" that will end up "stifling new companies and competitors from entering the marketplace and significantly expanding the power of the federal government over American innovation."

Indeed, Paul Brucciani, a cybersecurity advisor at WithSecure, an information security company, said that proving their products are safe is going to be very difficult for AI vendors – both in the US and the European Union, which is pushing forward its AI Act.

“That is hard to do. It is much easier to prove to you that my gleaming Mercedes can accelerate from 0-60 miles per hour in less than 4 seconds than it is to prove that its anti-skid control system is safe. Proving negatives is hard,” said Brucciani.

What’s more, the new order goes beyond voluntary commitments made earlier in 2023 by AI companies such as OpenAI, Alphabet, and Meta Platforms, which pledged to watermark AI-generated content to make the technology safer.

During a meeting between US lawmakers and tech CEOs to discuss AI regulation in mid-September, Meta’s boss, Mark Zuckerberg, pointedly said it was “better that the standard is set by American companies that can work with our government to shape these models on important issues.”

"It is much easier to prove to you that my gleaming Mercedes can accelerate from 0-60 miles per hour in less than 4 seconds than it is to prove that its anti-skid control system is safe. Proving negatives is hard,”

Paul Brucciani.
ADVERTISEMENT

But the language of the executive order suggests that the White House has agreed with experts who say voluntary commitments aren’t enough, as they should only complement government regulations and be combined with them.

Bad actors lurking

There are other issues, though, especially because bad actors who are already using AI to expand their cybercrime repertoire certainly do not care about any regulations.

“Our concern is not with corporations adopting safe and ethical AI, but rather bad actors, both private and state-sponsored, who are not bound by any executive order,” Drod Liwer, co-founder of Coro, an AI-based cybersecurity startup, said.

“We need to prepare for the asymmetrical battle where corporations are bound by regulatory requirements while the adversaries are using that to their advantage.”

Jeff Williams, co-founder and chief technology officer at Contrast Security, a code security platform, was impressed that the White House has reacted to the risks posed by AI relatively quickly. But he has questions.

For instance, the executive order seems to only apply to AI systems that pose a serious risk to national security and public health and safety. But how are we to determine this?

“Even an AI used to create social media posts will have incalculable effects on our elections. Almost any AI could flood a critical agency with requests that are indistinguishable from real human requests. They could be realistic voicemail messages or videos of system damage that aren’t real. The opportunities to undermine national security are endless,” said Williams.

He also thinks it’s going to be extremely difficult to even define “rigorous standards” for red-team testing: “How do you create tests that ensure that AI is safe?”

ADVERTISEMENT

Williams is hoping the government will involve industry professionals in shaping these standards, including the large team from The Open Worldwide Application Security Project that has created a comprehensive Top 10 list of risks for large language models.

Finally, Williams would have liked to have seen more in the executive order about AI transparency and Explainable AI.

“Consumers need to understand and interpret the predictions made by ML models. They have the right to know about the software and models they are trusting with the most important things in their lives – finances, healthcare, government, social life, education. All of this will be influenced by AI in the near term,” said Williams.