First-ever AI Safety Report tackles 'fast-moving field' of artificial intelligence


Deep learning pioneer and AI Godfather Professor Yoshua Bengio on Thursday announced the release of the world’s first-ever International AI Safety Report, which has concluded that “the future of general-purpose AI is remarkably uncertain.”

"There is a wide range of possible outcomes even in the near future, including both very positive and very negative ones, as well as anything in between," the report states, although it also praises general-purpose AI for its “immense potential” in fields such as education, medical applications, advanced scientific research, and to “substantially improve the lives of people worldwide.”

The University of Montréal professor and 'most-cited computer scientist' posted the announcement on X Thursday, calling it an "unprecedented, large-scale effort by 100 independent AI experts from around the world, including Nobel laureates and Turing Award winners."


Bengio described the collaborative white paper as summarizing “the state of the science on AI capabilities and risks, and how to mitigate those risks,” posting a link to the 298-page document.

The report is presented by the Artificial Intelligence Action Summit, a collaborative global initiative that promotes “intense international dialogue” between governments, researchers, businesses, creative professionals, and civil society – all “to ensure the science, solutions, and standards that shape artificial intelligence” represent a shared vision for building “the society of tomorrow.”

Lauding the different perspectives brought by the panel, Bengio’s 16-part X post noted that the report was focused on answering three main questions regarding the rapid growth of AI:

  1. What can general-purpose AI do?
  2. What are its risks?
  3. How can these risks be mitigated?

What's in it?

The report itself covers everything from how general-purpose AI is developed to risks from malicious use, such as cyberattacks, manipulation of public opinion, and biological or chemical weapon attacks, as well as reliability issues and systemic risks, such as the global research and development divide, environmental impact, and single points of failure.


The "International Scientific Report on the Safety of Advanced AI" also covers technological approaches to managing those risks, including technological and societal challenges to policymaking and risk management, as well as notes on the risk management lifecycle incorporating identification, assessment, mitigation, and monitoring.


Bengio, chair of the AI Action Summit expert advisory panel, noted in the report’s foreword how far the capabilities of general-purpose AI have increased since the organization committed to creating the report at the world’s first AI Safety Summit at Bletchley Park in November 2023.

“The capabilities of advanced AI have continued to grow. We know that this technology, if developed and utilised safely and responsibly, offers extraordinary opportunities: to grow our economies, modernise our public services, and improve lives for our people," Bengio wrote.

“To seize these opportunities, it is imperative that we deepen our collective understanding of how AI can be developed safely,” Bengio said.

Graph: General-purpose AI models have seen rapid performance increases in answering PhD-level science questions, and a general-purpose AI system’s capabilities can be significantly increased by having it devote more time and computation to each individual problem. Image by AI Safety Report. Source: Epoch AI, 2024

The 98-person expert panel includes contributors from 30 countries, the United Nations, the European Union, and the Organisation for Economic Co-operation and Development (OECD), an intergovernmental organisation that develops policy standards to promote economic growth and trade between nations.

Fellow Godfather of AI and 2024 Nobel Prize winner in Physics Geoffrey Hinton is also listed as a Senior Advisor for the paper, while the work of the world’s third acknowledged AI Godfather, Meta’s chief AI scientist Yann LeCun, is also referenced in the AI Safety Report.

“AI remains a fast-moving field. To keep up with this pace, policymakers and governments need to have access to the current scientific understanding on what risks advanced AI might pose,” Bengio said.

"AI does not happen to us; choices made by people determine its future," the report stated.
