Biden issues first US federal regulations on AI systems: what’s in them?


US President Joe Biden’s White House has outlined the federal government’s first regulations on artificial intelligence. The administration's powers are limited, though.

Government officials and experts everywhere agree that the rise of machine learning has been stunning, and its potential to disrupt life on Earth as we know it is undoubtedly significant.

However, debates about what exactly needs to be controlled and how dangerous AI actually is are only just heating up.

While the European Union, for example, has already prepared the AI Act – with the current draft envisioning a ban on software that creates unacceptable risks – the US government has mostly kept its hands off the fast-expanding industry. Until now.

New standards

US President Biden issued a “landmark” executive order on Monday and outlined the government’s first-ever regulations on AI systems. The administration describes the order as the most sweeping government action to protect US citizens from the potential risks brought by AI development.

“The Executive Order establishes new standards for AI safety and security, protects Americans’ privacy, advances equity and civil rights, stands up for consumers and workers, promotes innovation and competition, advances American leadership around the world, and more,” the White House said in a statement.

President Biden’s account on X added: “Artificial intelligence is moving quickly. And so is my Administration.”

The White House breaks the key components of the executive order into eight parts that touch on, among other things, safety and security standards for AI, consumer privacy, equity and civil rights, consumer protection, support for workers, innovation, and competition.

The regulations, for example, include requirements that the most advanced AI products be tested to ensure that they cannot be used to produce nuclear or biological weapons. The test results will have to be reported to the federal government.

The National Institute of Standards and Technology is now supposed to set the “rigorous” standards for extensive red-team testing to ensure safety before public release.

In the released fact sheet, the White House says that it will protect Americans from AI-enabled fraud and deception. The Department of Commerce will develop guidance for content authentication and watermarking to clearly label AI-generated content.

“Federal agencies will use these tools to make it easy for Americans to know that the communications they receive from their government are authentic – and set an example for the private sector and governments around the world,” the administration said.

However, hoping to set an example for private companies is not the same as imposing requirements, which shows the limits of the White House’s authority. Besides, some of the envisioned steps would require approval by independent agencies such as the Federal Trade Commission.

Important summit upcoming

The new regulations, some of which are due to go into effect in the next 90 days, will also require companies that run cloud services to tell the government about their foreign customers.

This is obviously connected to the rise of China – the US recently restricted the export of high-performing chips to China to slow Beijing’s ability to produce large language models.

The order only affects American companies, though, so the US will face diplomatic challenges enforcing the regulation elsewhere.

That’s probably why the White House released the executive order just days before a gathering of world leaders on AI safety organized by Rishi Sunak, the United Kingdom’s prime minister.

Rishi Sunak. Image by Shutterstock.

China will send Wu Zhaohui, a vice minister of Science and Technology, to the summit, even though the country’s president, Xi Jinping, was initially invited, Reuters reported on Monday.

China now has at least 130 large language models launched by companies including Alibaba and Tencent, accounting for 40% of the global total and just behind the United States' 50% share, according to brokerage CLSA.

The US is hoping to “expand bilateral, multilateral, and multi-stakeholder engagements to collaborate on AI.” The State Department will try to establish international frameworks for “harnessing AI’s benefits and managing its risks,” while Vice President Kamala Harris will speak at the UK summit.

The administration said it had already consulted widely on AI governance frameworks over the past several months, engaging with dozens of countries such as Australia, Brazil, Canada, Germany, Japan, and others. China is not mentioned.

More attention to job security

The American approach has traditionally been liberal, placing its faith in the power of innovation with minimal intervention. Tech companies usually prefer to call for voluntary commitments to responsible development rather than actual laws.

In July 2023, Biden’s administration announced that seven leading AI companies in the US – Amazon, Anthropic, Google, Inflection, Meta, Microsoft, and OpenAI – had formally agreed to voluntary safeguards on the technology’s development. In September, eight more firms involved in AI joined the voluntary pledge.

“Voluntary commitments – underscoring safety, security, and trust – mark a critical step toward developing responsible AI,” the White House stressed in July’s press release.

But some say such pledges are not enough. For instance, John Isaza, a data privacy and cybersecurity expert at Rimon Law, a firm advising tech companies, told Cybernews recently that these commitments should still “only be a good complement to regulations and could provide the details that regulators would not be able to or attempt to tackle.”

Michael DeSanti, president and chief technology officer at LightPoint Financial Technology, also said that voluntary commitments need to be combined with laws because big businesses have already proven that they’re often untrustworthy.

“Profit motivation is a powerful force, and history is littered with examples of companies that have violated the trust of guidance bodies in order to make a profit. Some companies will inevitably break their commitments,” DeSanti told Cybernews.

“For example, look at the pharmaceutical industry, where Purdue Pharmaceutical manipulated the US Food and Drug Administration in order to sell more Oxycontin.”

The White House might have heard the criticism: Biden’s document orders the Department of Labor and the National Economic Council to study AI’s effect on the labor market.

“AI is changing America’s jobs and workplaces, offering both the promise of improved productivity but also the dangers of increased workplace surveillance, bias, and job displacement,” the White House said.