US AI Safety Institute will get early access to OpenAI’s latest model, Sam Altman says

OpenAI’s CEO, Sam Altman, said that the US AI Safety Institute will receive early access to the company’s next foundation model to help ensure its safety and “push forward the science of AI evaluations.”

In a recent post on X, Altman gave “a few quick updates” regarding safety at OpenAI after lawmakers expressed concerns about the tech giant’s safety practices.


Altman began by saying that “we’re committed to allocating at least 20% of the computing resources to safety efforts across the entire company.”

Altman’s post comes a month after he received a letter from the United States Senate, shared by The Washington Post, addressing “recent reports about OpenAI’s safety and employment practices.”

“OpenAI has announced a guiding commitment to the safe, secure, and responsible development of artificial intelligence (AI) in the public interest. These reports raise questions about how OpenAI is addressing emerging safety concerns,” the letter reads.

Altman responded by saying that the company has been “working with the US AI Safety Institute on an agreement where we would provide early access to our next foundation model so that we can work together to push forward the science of AI evaluations.”

Housed under the National Institute of Standards and Technology (NIST), the US AI Safety Institute is a federal government initiative “advancing the science, practice, and adoption of AI safety across the spectrum of risks, including those to national security, public safety, and individual rights,” NIST says.

The letter sent to Altman also details reports regarding non-disparagement agreements for current and former employees, as well as OpenAI's commitment to dedicate 20% of its computing resources to AI safety research.

Altman also addressed the non-disparagement agreements noted in the letter.

“We want current and former employees to be able to raise concerns and feel comfortable doing so. This is crucial for any company, but for us especially and an important part of our safety plan,” the CEO said.


According to Altman, the company voided non-disparagement terms for current and former employees in May, along with a “provision that gave OpenAI the right (although it was never used) to cancel vested equity.”

A non-disparagement agreement or clause is an agreement or promise between two parties that one will not make disparaging remarks about the other.

“Non-disparagement provisions typically restrict what an employee can or cannot say about the employer following a separation of employment,” Thomson Reuters Practical Law states.

This move follows complaints filed with the Securities and Exchange Commission (SEC) by anonymous whistleblowers requesting an investigation into whether OpenAI illegally restricted workers from communicating with regulators.

In May, the company received backlash for a restrictive offboarding policy that forbade ex-employees from criticizing OpenAI; even acknowledging that such an NDA existed was itself a violation of the agreement.