UN panel says AI needs regulation, cannot be left to market forces


In its final report, “Governing AI for Humanity,” the United Nations Secretary-General’s High-level Advisory Body on Artificial Intelligence says the development of AI cannot be left to the “whims” of the market alone.

In the report, the 39-member panel agreed that national governments will inevitably play an important role in regulating AI but stressed that the technology's borderless nature also requires a “global approach.”

“The accelerating development of AI concentrates power and wealth on a global scale, with geopolitical and geoeconomic implications,” says the report.


“Moreover, no one currently understands all of AI’s inner workings enough to fully control its outputs or predict its evolution. Nor are decision-makers held accountable for developing, deploying, or using systems they do not understand.”

The UN advisory body was created in October 2023 and has now made seven recommendations to address AI-related risks and gaps in governance. These include establishing an AI data framework to boost transparency and accountability, and a fund to help developing countries benefit from developments in the technology.

“Many countries face fiscal and resource constraints limiting their ability to use AI appropriately and effectively,” said the panel.

Many AI enthusiasts, especially in the US, have pushed back against initiatives to regulate the industry, arguing that regulation would stop or slow down innovation. The UN panel disagrees.

“The imperative of global governance, in particular, is irrefutable. AI’s raw materials, from critical minerals to training data, are globally sourced. […] The development, deployment, and use of such a technology cannot be left to the whims of markets alone,” says the report.

However, it stops short of recommending the creation of a new international agency to govern the development and rollout of AI. The panel’s recommendations will be discussed during the upcoming UN summit.

Since the release of OpenAI's viral ChatGPT bot in 2022, the use of AI has spread rapidly, raising concerns that it fuels misinformation, fake news, and copyright infringement.

Only a handful of countries have created laws to govern the spread of AI tools. The European Union has moved furthest by passing a comprehensive AI Act, in contrast to the United States' reliance on voluntary compliance, while China's rules have aimed to maintain social stability and state control.
