AI workers call on OpenAI, Google DeepMind to pledge accountability

Thirteen OpenAI and Google DeepMind employees – both past and present – published an open letter Tuesday calling on frontier AI companies to pledge support for industry employees and potential whistleblowers who voice concerns about AI risks to the public.

The letter, endorsed by AI industry godfathers Yoshua Bengio, Geoffrey Hinton, and Stuart Russell, discusses how despite “the potential AI technology has to deliver unprecedented benefits to humanity,” there are serious risks that need to be addressed.

The AI workers cite risks ranging from “the further entrenchment of existing inequalities, to manipulation and misinformation, to the loss of control of autonomous AI systems potentially resulting in human extinction.”

Titled "A Right to Warn about Advanced Artificial Intelligence," the letter makes clear that, absent government oversight, the workers do not fully trust corporations to handle the technology responsibly.

Instead, the group said they have major concerns that C-suite leadership will withhold important information from the public due to a lack of obligation and more pressing financial incentives.

The letter cites non-public information such as “the capabilities and limitations of their systems, the adequacy of their protective measures, and the risk levels of different kinds of harm.”

Section and signatures of the open letter "A Right to Warn about Advanced Artificial Intelligence" signed by former and current AI employees. Image by Cybernews.

For example, Reuters reports that researchers have found examples of image generators from companies including OpenAI and Microsoft producing photos with voting-related disinformation despite policies against such content.

“So long as there is no effective government oversight of these corporations, current and former employees are among the few people who can hold them accountable to the public,” the letter states.

The employees say they are the last line of defense between AI companies and the public, and they need reassurance that if the time comes to reveal sensitive information in the name of public safety, they will not be retaliated against.

Four principles of accountability

The letter lays out four principles for advanced AI companies to commit to, such as supporting a culture of open criticism and not punishing those who publicly speak out when other processes have failed.

Companies should not enforce agreements that prohibit employees from speaking out about risk-related concerns, and should be barred from any kind of financial retaliation against employees who do speak out.

The workers also want the companies to create an anonymous process to raise risk-related concerns to the company’s board, regulators, and appropriate independent organizations. The group added that trade secrets and other intellectual property interests would be appropriately protected.

It’s not the first open letter to address the risks of unchecked AI technologies.

Last March, more than 100 researchers called on generative AI companies to clarify their rules and allow investigators access to their systems to facilitate broader research.

And later that month, tech heavyweights, including Tesla CEO Elon Musk and Apple co-founder Steve Wozniak, joined thousands of signatories in an open letter calling on all AI labs to pause the training of systems more powerful than GPT-4.