UK’s AI Safety Institute to open office in Silicon Valley

The United Kingdom’s AI Safety Institute (AISI) is hopping across the pond to open its first overseas office in San Francisco, in the heart of the tech-heavy Bay Area – all to strengthen ties with the US and enhance global AI safety efforts.

Announcing the news Monday, UK Secretary of State for Science, Innovation and Technology Michelle Donelan said the new US office will open sometime this summer.

The expansion aims to harness the tech expertise in the Bay Area and will serve as a vital extension of the Institute's headquarters in London, which already has a robust team of over 30 technical experts, the agency said.

“This will enable us to hire more top talent, collaborate closely with the US AI Safety Institute, and engage even more with the wider AI research community”, the Institute posted on X.

AISI is currently recruiting the first team of technical staff for the satellite branch, which will be headed up by a yet-to-be-announced Research Director.

Donelan called the strategic partnership a testament to the UK's leadership in AI.

“It is a pivotal moment in the UK’s ability to study both the risks and potential of AI from a global lens, strengthening our partnership with the US and paving the way for other countries to tap into our expertise as we continue to lead the world on AI safety,” Donelan said.

The collaboration is expected to set new international standards for AI safety, to be discussed at the upcoming AI Seoul Summit, held in the South Korean capital on May 21 and 22, 2024.

Breaking New Ground

The expansion comes on the heels of recently released results from safety testing of five publicly available advanced AI models – making AISI the first government-backed organization worldwide to unveil the results of such safety evaluations.

AISI considers its ability to conduct state-of-the-art safety testing significant progress since last November’s global AI Safety Summit, held at Bletchley Park in the UK.

AISI said the five advanced AI models were tested against four key risk areas, including how effective the safeguards that developers have built in actually are in practice.

The main insights:

  • Models performed well on basic cybersecurity tasks but struggled with more complex ones.
  • Some models demonstrated PhD-level knowledge in chemistry and biology.
  • All models were vulnerable to basic "jailbreaks," with some producing harmful outputs without much effort.
  • Models had difficulty completing complex tasks without human oversight.

“The results of these tests mark the first time we’ve been able to share some details of our model evaluation work with the public,” said AI Safety Institute Chair Ian Hogarth, noting that “AI safety is still a very young and emerging field.”

Hogarth said the tests represent only a small portion of the evaluation approach AISI is developing.

“Our ambition is to continue pushing the frontier of this field by developing state-of-the-art evaluations, with an emphasis on national security-related risks,” Hogarth said.

The Institute also noted that tests were carried out on five publicly available and anonymized large language models (LLMs), all trained on large amounts of data.

The results provide a snapshot of model capabilities only, and do not designate systems as “safe” or “unsafe,” AISI said.

In addition to its US expansion, the UK AI Safety Institute has also signed a new collaboration agreement with Canada’s AI safety agency, announced Monday.
