In the largest survey of its kind, thousands of artificial intelligence (AI) researchers were asked for their predictions on the pace of AI progress. Their timelines, it seems, have moved closer.
AI Impacts, a project aiming to improve our understanding of the likely impacts of human-level AI, surveyed 2,778 researchers who had published in top-tier AI venues.
The experts were asked for their predictions on the pace of AI progress and the nature and impacts of advanced AI systems.
“Artificial intelligence appears poised to reshape society. Navigating this situation requires judgments about how the progress and impact of AI are likely to unfold,” authors of the preprint of the survey, representing prestigious universities around the world, say.
It turns out that the aggregate forecasts give at least a 50% chance of AI systems achieving several milestones by 2028.
These include autonomously constructing a payment processing site from scratch, creating a song indistinguishable from a new song by a popular musician such as Taylor Swift, and autonomously downloading and fine-tuning a large language model.
Moreover, if scientific progress continues undisrupted, the chance of unaided machines outperforming humans in every possible task was estimated at 10% by 2027 and 50% by 2047. The latter estimate is 13 years sooner than the figure from a similar survey AI Impacts conducted just one year before.
The chance of all human occupations becoming fully automatable was forecast to reach 10% by 2037 and 50% by 2116 (compared with 2164 in the 2022 survey), though that milestone remains far off.
Most respondents expressed substantial uncertainty about the long-term value of AI progress. While 68.3% thought good outcomes from superhuman AI are more likely than bad, 48% of these net optimists gave at least a 5% chance of extremely bad outcomes such as human extinction.
Between 37.8% and 51.4% of respondents gave at least a 10% chance to advanced AI leading to outcomes as bad as human extinction.
More than half suggested that “substantial” or “extreme” concern is warranted about six different AI-related scenarios, including the spread of false information, authoritarian population control, and worsened inequality.
“There was disagreement about whether faster or slower AI progress would be better for the future of humanity. However, there was broad agreement that research aimed at minimizing potential risks from AI systems ought to be prioritized more,” said the authors of the study.