OpenAI, Microsoft, Google, Anthropic launch forum for safe AI development


OpenAI and other leading tech companies have formed a new industry body to monitor the safe and responsible development of the next generation of “frontier” AI models – models even more powerful than ChatGPT.

Anthropic, Google, Microsoft, and OpenAI announced the creation of the “Frontier Model Forum” on Wednesday.

The Forum’s goal: “advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry,” OpenAI said in a blog post outlining the scope of the new industry body.

“Companies creating AI technology have a responsibility to ensure that it is safe, secure, and remains under human control,” said Microsoft Vice Chair & President Brad Smith.

OpenAI defines “frontier AI” models as “highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.”

The news comes as US Senators gathered in Washington DC on Thursday to discuss the formation of their own legislative body to rein in the malevolent potential of artificial intelligence.

“Advanced AI technologies have the potential to profoundly benefit society, and the ability to achieve this potential requires oversight and governance,” said Anna Makanju, OpenAI’s Vice President of Global Affairs.

“It is vital that AI companies – especially those working on the most powerful models – align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible,” Makanju said.

US Senators say they fear that more powerful frontier large language models could be used to create a biological weapon in the not-so-distant future.

[Photo: Anthropic CEO Dario Amodei speaks on AI safety to US lawmakers on Capitol Hill, July 25, 2023. Reuters]

Anthropic CEO and co-founder Dario Amodei spoke about how AI could help otherwise unskilled malevolent actors develop biological weapons at a Senate Judiciary subcommittee hearing earlier this week.

"Certain steps in bioweapons production involve knowledge that can't be found on Google or in textbooks and requires a high level of expertise. We found that today's AI tools can fill in some of these steps," Amodei said.

Senate Majority Leader Chuck Schumer (D) said the Senate would establish “the first-ever AI Insight Forums” later this year, inviting AI developers, executives, and experts to speak on possible legislative safeguards.

There is “real bipartisan interest in AI, which will be necessary if we want to make progress on what really is an imperative for this country – putting together AI legislation that encourages innovation but has the safeguards to prevent the liabilities that AI could present,” Schumer said.

Supporting the AI safety ecosystem

Besides facilitating information sharing among companies and governments, OpenAI states that the Frontier Model Forum’s three key areas of focus include identifying best practices and advancing AI safety research.

The founding companies plan to consult immediately with public, private, and government entities on the Forum’s design and on how best to collaborate.

The Forum also plans to establish an Advisory Board with members from diverse backgrounds in the coming months “to help guide its strategy and priorities,” according to the announcement.

“This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety,” Makanju said.

The Forum will also seek to build on the valuable work of existing industry, civil society, and research efforts across each of its workstreams, OpenAI stated.

During Thursday’s session on Capitol Hill, the Senate Homeland Security and Governmental Affairs Committee also voted to advance a new bill that would create a council of AI leaders, with representatives from each of the US government’s more than 100 federal agencies, to coordinate the use of emerging AI technology throughout Washington.