More than a dozen leading tech companies attending the AI Seoul Summit in South Korea on Tuesday have pledged to continue developing artificial intelligence technology safely and responsibly.
The sixteen companies – including US giants Google, Meta, Microsoft, and OpenAI, along with firms from China, South Korea, and the United Arab Emirates (UAE) – agreed to prioritize AI safety, innovation, and inclusivity.
High-profile attendees included Tesla's Elon Musk, former Google CEO Eric Schmidt, and Samsung Electronics' Chairman Jay Y. Lee.
The pledge took place on day one of the two-day global summit during a virtual meeting hosted by British Prime Minister Rishi Sunak and South Korean President Yoon Suk Yeol.
Also backing the “Seoul Declaration and Statement of Intent toward International Cooperation on AI Safety Science” were world leaders from the Group of Seven (G7), the EU, Singapore, Australia, and South Korea.
"We must ensure the safety of AI to protect the wellbeing and democracy of our society," President Yoon said, citing concerns over risks such as deepfakes.
We are sharing a safety update as part of the AI Seoul Summit https://t.co/zgvVpsSlIq
– OpenAI (@OpenAI) May 21, 2024
The latest declaration, published online by the UK’s Department for Science, Innovation and Technology, aims to build upon the “Bletchley Declaration” reached at the inaugural AI Safety Summit held at England’s historic Bletchley Park this past November.
Summit participants also discussed how AI technology can be made more inclusive, the need for interoperability between governance frameworks, how to maintain engagement with international bodies, and how to safely further AI innovation.
The pledge affirms a “common dedication to fostering international cooperation and dialogue on artificial intelligence (AI) in the face of its unprecedented advancements and the impact on our economies and societies.”
Coinciding with the AI Seoul Summit, the US government released its latest strategic vision on AI safety which includes a plan for global cooperation among AI Safety Institutes.
On Monday, the UK’s AI Safety Institute (AISI) announced it would open its first overseas office in San Francisco as part of the global effort to enhance AI safety.
“Recent advances in AI carry exciting, life-changing potential for our society, but only if we do the hard work to mitigate the very real dangers of AI that exist if it is not developed and deployed responsibly,” US Secretary of Commerce Gina Raimondo said about the strategy.
#NEWS: Today, as the AI Seoul Summit begins, @SecRaimondo released a strategic vision for the U.S. Artificial Intelligence Safety Institute, describing the Department’s approach to #AI safety under @POTUS' leadership. https://t.co/RsDgYbChoV
– U.S. Commerce Dept. (@CommerceGov) May 21, 2024
Additionally, the pledge vows that nations and AI companies will “ensure the safe, secure, and trustworthy design, development, deployment, and use of AI.”
Other companies committing to AI safety included Zhipu.ai, backed by China's Alibaba, Tencent, Meituan, and Xiaomi; the UAE's Technology Innovation Institute; Amazon; IBM; and Samsung Electronics.
The companies also committed to publishing safety frameworks for measuring risks, avoiding models where risks could not be sufficiently mitigated, and ensuring governance and transparency.
"It's vital to get international agreement on the 'red lines' where AI development would become unacceptably dangerous to public safety," said Beth Barnes, founder of METR, a group advocating for AI model safety, in response to the declaration.
Experts still warn about AI risks
Renowned computer scientist Yoshua Bengio, known as a "godfather of AI," welcomed these commitments but stressed that voluntary measures need to be accompanied by regulation.
In a warning report released ahead of the Summit, twenty-five of the world’s leading scientists said that not enough is being done to protect humanity from threats posed by AI.
“There is a very real chance that unchecked AI advancement could culminate in a large-scale loss of life and the biosphere and the marginalization or extinction of humanity,” the group of Nobel laureates, Turing award winners, and other leading AI experts cautioned.
But summit attendee Aidan Gomez, co-founder of the large language model firm Cohere, commented that since November, think tanks have shifted away from long-term doomsday scenarios toward more practical concerns, such as how to better incorporate artificial intelligence into the medical and financial sectors.
Although China co-signed the “Bletchley Declaration” on managing AI risks collectively during the initial meeting, it did not attend Tuesday's session but is expected to join an in-person ministerial session on Wednesday, according to a South Korean presidential official.
The next AI safety meeting is scheduled to take place in France, officials announced.