NYT: Hacker stole OpenAI secrets, sparking internal feuds

A hacker gained access to OpenAI’s internal systems in early 2023 and stole details about the company’s technologies. According to The New York Times, however, the ambitious AI startup never disclosed the breach publicly, sparking internal disagreements.

According to the paper’s sources, the intruder lifted details from an internal online forum where employees discuss OpenAI’s latest developments.

However, the individual did not penetrate the sensitive systems where the firm’s AI products are built. Still, the incident, which was disclosed to employees a year ago, heightened tensions inside OpenAI and may help explain last year’s internal drama, The Times says.

The firm did not notify the FBI because its leaders believed the hacker was a private individual with no ties to a foreign government.

However, some OpenAI employees were allegedly unnerved that the company’s leadership decided not to disclose the breach publicly. What if a foreign adversary such as China carried out the next intrusion and endangered US national security?

True, most generative AI patents are now filed in China, which far outpaces the US by that measure. Still, stolen American technology would only add to Beijing’s advantage, a few influential OpenAI employees allegedly argued.

For example, Leopold Aschenbrenner, an OpenAI technical program manager focused on the safety of future AI technologies, sent a memo to OpenAI’s board arguing that the company wasn’t taking security seriously enough.

His concerns were reportedly dismissed, and in April 2024, OpenAI fired Aschenbrenner for allegedly leaking information. On a recent podcast, he argued that his dismissal was politically motivated.

Aschenbrenner, a close ally of Ilya Sutskever, who also recently left OpenAI, graduated from Columbia University at 19 and was seen as a rising star and a vocal advocate for safe AI development at the company.

Fears that China is interested in American technology are not unfounded. Cybersecurity firm Mandiant recently reported that Beijing is stealthily targeting US organizations and companies.

Last month, Microsoft president Brad Smith was grilled on Capitol Hill about how Chinese hackers used the company’s systems to launch a large-scale attack on US federal government networks.

OpenAI and other AI startups consistently claim that AI is not significantly more dangerous than search engines, essentially arguing there is nothing worth stealing. Nevertheless, in late 2023, OpenAI formed a “Preparedness” team to continually test its technology and warn executives of any dangers.

Furthermore, in May 2024 the OpenAI board formed a new Safety and Security Committee to explore how to handle the risks posed by future technologies.

It includes Paul Nakasone, a former Army general who led the National Security Agency and US Cyber Command. Nakasone has also been appointed to the company’s board.