Democracy in danger: AI, supercomputers, and the loss of human agency


Rather than focusing on superintelligent AI, humanity has other, more pressing issues to worry about, a tech insider warns: the tech-driven attention economy is leaving the population less intelligent, the concentrated power of a few giant companies goes unchecked, and social and economic inequality continues to increase.

Jacopo Pantaleoni is a tech industry insider who spent 25 years working at the cutting edge of high-performance computing. Along the way, he’s made a broad and lasting impact on computer graphics as a two-time recipient of the High-Performance Graphics Test of Time award.

After working on ray tracing algorithms, DNA sequencing, and the massively parallel processing units used in most hyper-scale data centers and AI factories today, Pantaleoni left his role as Principal Engineer and Research Scientist at NVIDIA in July 2023.


Why? To write a book on the quickest revolution, an exponential process that risks making the world more technologically advanced, but far less humane.

According to the author, the far-reaching danger is not AI itself, but the combination of a broad cognitive weakening brought by the automated extraction of human attention, the increasing concentration of power in the hands of a few tech companies, and growing geographic and social disparities.

“I believe this AI revolution is better seen as the tip of an iceberg, which is the broader, deeper, and ongoing revolution in our computational capabilities. Our computational capabilities have been increasing exponentially since the invention of computers; their speed has been doubling every 18-24 months for over 70 years now,” Pantaleoni says in an exclusive interview with Cybernews.

“Exponential processes tend to escape our intuition, and we perform poorly at predicting and controlling their outcomes. The COVID pandemic is a glaring example: the spread of a virus is another exponential process, and we all saw how unprepared the whole world was to handle it.”

His book “The Quickest Revolution: An Insider's Guide to Sweeping Technological Change And Its Largest Threats” is due for publication in September 2023.

Jacopo Pantaleoni

Can humans grasp how fast computers really are?

Our current, fastest processors are already capable of more than 100 trillion operations per second. And we're getting supercomputers faster than an exaflop, which means more than a billion billion (10^18) operations per second.


These numbers are nearly unfathomable, even for an expert like myself. I've been at the forefront of the GPU revolution and worked with this kind of computing power all my life.

And still, it's really hard to grasp what these numbers truly mean, even though we can find plenty of ways to put this computing power in use.
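A minimal back-of-the-envelope sketch can make that scale a little less abstract. It is purely illustrative and rests on assumed round figures: an exaflop taken as 10^18 operations per second, a world population of 8 billion, and one hand calculation per person per second.

```python
# Back-of-the-envelope illustration of exaflop scale (all figures assumed/rounded).
EXAFLOP_OPS_PER_SECOND = 10**18        # one exaflop: 10^18 operations per second
WORLD_POPULATION = 8_000_000_000       # assumed round figure
OPS_PER_PERSON_PER_SECOND = 1          # assume one hand calculation per person per second

# How long would all of humanity need to match ONE second of an exaflop machine?
seconds_needed = EXAFLOP_OPS_PER_SECOND / (WORLD_POPULATION * OPS_PER_PERSON_PER_SECOND)
years_needed = seconds_needed / (60 * 60 * 24 * 365)

print(f"~{years_needed:.1f} years of everyone on Earth calculating non-stop")
# Prints roughly 4 years of collective human effort for one second of exaflop-class computation.
```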

The fall of democracy, I think, is possibly the most visible danger on the horizon right now

What kind of societal impacts can this have? Where are we heading?

I think there are three major dangers that we may be confronted with. One is the growing role of the so-called attention economy, which is already a cause for concern.

We have already experienced some of the side effects of the automated distribution of information (and misinformation) during the 2020 US elections and the assault on the US Capitol. Fake news and conspiracy theories are changing the fabric of the world, making it more brittle and fragile.

This is caused mainly by machine learning algorithms running at scale. They select which information and which facts make the news, shaping what we see and like.

Recent advances in artificial intelligence will make this issue even more complex, because generative AI now makes it possible to procedurally generate enormous quantities of content, including ever more realistic fake news. Spotting fake news will only get more difficult.

And better AI will find ways to grab our attention even more effectively. So, this is one issue.

And what is the second one?


Another cause for concern is the concentration of power. Right now, only a handful of companies in the world are really in charge of this AI revolution and the underlying revolution in computational power that is implicitly driving it.

And finally, a third issue, which is intimately linked to the second and, more indirectly, to the first: the increasing economic inequality fueled by technological progress.

Talking about big companies, a few big oil companies had too much power in the 20th century, leading to governments trying to bring them to heel. We know the history. What do you think should happen with tech companies?

I think regulatory bodies like the antitrust regulators should do a better job of limiting the power of these companies.

And perhaps, as economist Daniel Susskind has suggested in his beautiful book A World Without Work, we will even need new types of regulatory bodies that extend the power of antitrust beyond the economic domain. We may need entirely new entities to oversee the increasing political power that large tech companies hold. Because increasingly, these companies have not only enormous financial strength but also power over communications and the distribution of information. They are gaining new forms of political power that cut across borders and governments.

And talking about attention extraction, as you put it, how do we address this? Do humans simply lack the willpower to avoid engaging with sensationalized content and other distractions online?

It's difficult to generalize to all humans, but it is certainly tough to resist the constant attacks of the attention economy.

The attention economy is not new; it was born with radio and TV, which broadcast information and tried to attract our attention to advertising in bulk. But things changed when machine learning started to be applied at scale.

The newly available computational power enabled selling our attention through cookies, with hundreds of billions of micro-transactions a day. Algorithms are getting better and better at exploiting our weak spots. For example, information is becoming increasingly visual, appealing to our visual cortex rather than to our more reflective thought processes.

Therefore, it is getting more and more challenging to resist. Mechanisms of liking and sharing work thanks to dopamine release in our brains. They get us excited.


This fundamental imbalance between the enormous amounts of information produced and the scarcity of our attention keeps growing. We are pulled between the vast pile of information we are submerged in daily and the hard limits of human attention, and algorithms are getting better and better at exploiting those limits.

What’s wrong with that? What are the consequences?

There's nothing fundamentally wrong except for the single fact that these algorithms are designed to maximize a single goal, which is economic in nature.

They're made to sell more advertising of all kinds. This means the information we are submerged with is not necessarily the best or most useful. It's not necessarily making us more informed.

And there's another problem. Distilling and absorbing information, and transforming it into knowledge, requires time. And when we are constantly asked to focus on something new, we do not have that time. So we're getting less and less knowledgeable.

The human brain has about 100 trillion synapses, possibly more, whereas artificial neural networks like ChatGPT today have about 200 billion parameters

What can you do with an exaflop computer, and are we approaching the limit of how fast they can become?

Exaflop supercomputers can be used for a tremendous amount of beneficial applications, from drug discovery to protein folding, molecular simulations of viruses and nanomaterials, jet engine simulations, climate modeling, etc.

Whether we are reaching the limits of computing is a very interesting question. Many think the so-called Moore's Law is hitting a wall. Moore's Law is an empirical formulation of the exponential rate at which our computational capabilities increase, and it is now being challenged by the physical limits of silicon-based semiconductor manufacturing.

However, we need to recall one important property of technological progress. Exponential technological progress is always the compound result of many, many smaller processes that happen at the same time.


And in this case, while we are possibly reaching the limits of silicon-based computation, we are also finding new ways to progress in other areas like computer architecture, chip architecture, optical interconnects, and so on.

AI itself is progressing at a much, much quicker pace than Moore's Law. Whereas processors double their compute power every 24 months, the most recent advances in AI have shown a doubling of performance and network size every six or even three months in the last two years. And, of course, AI will also help design better processors. It's a reinforcing loop.

So, I don't think we're necessarily approaching a limit. On the contrary, progress might even speed up.
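To see how quickly those doubling periods diverge, here is a minimal sketch that simply compounds the rates quoted above, a 24-month hardware doubling versus a 3-6-month doubling for AI; the arithmetic is the only thing added.

```python
# Compound growth for the doubling periods mentioned above (illustrative only).
def growth_factor(years: float, doubling_period_months: float) -> float:
    """Multiplicative growth after `years`, given a doubling period in months."""
    doublings = (years * 12) / doubling_period_months
    return 2 ** doublings

for years in (2, 4):
    hardware = growth_factor(years, 24)   # Moore's-Law-style doubling every 24 months
    ai_slow = growth_factor(years, 6)     # AI doubling every 6 months
    ai_fast = growth_factor(years, 3)     # AI doubling every 3 months
    print(f"{years} years: hardware x{hardware:g}, AI x{ai_slow:g} to x{ai_fast:g}")

# 2 years: hardware x2, AI x16  to x256
# 4 years: hardware x4, AI x256 to x65536
```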

We’re getting all kinds of warnings about AI taking over the world. Don’t you think the fear of superintelligence might be a little overstretched?

I tend to agree with you. I think they are far overstretched, although it's a complex question. On the one hand, several researchers in the field are conscious of this exponential progress and know it is getting so fast that we might reach human-level intelligence far sooner than we think. On the other hand, as a scientist, I know that currently there is no clear scientific grounding for any of these fears.

First, we don't even know exactly what human intelligence is. We don't know what consciousness is at all. Second, human brains are still several orders of magnitude more complex than artificial neural networks.

For example, the human brain has about 100 trillion synapses, possibly more, whereas artificial neural networks like ChatGPT today have about 200 billion parameters.

And then there's the fact that each neuron in the brain is far more complex than the simple arithmetic units used in artificial ones. The computations carried out by a single neuronal cell are at least a thousand times more complex than those performed by an artificial neuron. And still, there's a lot we don't know about our brains and about cells.
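A minimal sketch putting those figures side by side: the numbers are the estimates quoted above, not measurements, and the thousand-fold per-neuron factor is applied crudely as a straight multiplier.

```python
# Rough comparison of the figures quoted above (estimates, not measurements).
SYNAPSES_IN_BRAIN = 100e12       # ~100 trillion synapses (possibly more)
MODEL_PARAMETERS = 200e9         # ~200 billion parameters in a large model
PER_NEURON_COMPLEXITY = 1_000    # assumed: each biological neuron >= 1,000x an artificial unit

raw_ratio = SYNAPSES_IN_BRAIN / MODEL_PARAMETERS
adjusted_ratio = raw_ratio * PER_NEURON_COMPLEXITY

print(f"Synapses vs parameters: ~{raw_ratio:.0f}x")                   # ~500x
print(f"With the per-neuron factor applied: ~{adjusted_ratio:,.0f}x")  # ~500,000x
```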

So, I do think these fears are overstretched, and, most importantly, the focus on these more ethereal and ephemeral risks draws attention away from the much more concrete matters I mentioned before.

It's essential that we focus on the concrete dangers we are already facing.


If we knew our brains well and could describe them down to the single cell or synapse, would we be able to simulate those 100 trillion synapses?

If we knew how it worked? Maybe. Yes. But there's so much we still don't know about the cell that it's difficult or even impossible to answer that question.

You mentioned that the gap between the US and the rest of the world could widen. Could you elaborate on the reasons why that would happen? Wouldn’t the US feel the negative consequences first?

Most of the big tech companies are concentrated in the US, except, of course, for some others in China. For example, the services and infrastructure that we are now relying on in Europe are mostly US-based.

To give a sense of how quickly this gap is growing, it's sufficient to point out that OpenAI has recently secured over $10 billion in funding, and several other AI startups in the US have received more than a billion.

Whereas in Europe, the European Large Language Model Association has recently asked the EU to provide €230 million in funding to build a European language model at least competitive with ChatGPT. And the EU answered that it cannot provide that kind of money, even though it is orders of magnitude smaller than what American companies are getting.

Another data point. Amazon spent over $73 billion in R&D in 2022, which is far more than the entire public and private investment in research in a country like Italy.

So, these companies have access to research funds that are unavailable to most countries on Earth. And they also own another precious resource: know-how. They have hired the best engineers and research scientists on Earth and are getting far ahead of universities.

In the past, some universities used to be ahead of the industry in many respects. This is no longer the case, even in the US itself.

So, to answer your second question, yes, the US itself might soon see some negative consequences of this extreme concentration of power.

Amazon spent over $73 billion in R&D in 2022, which is far more than the entire public and private investment in research in a country like Italy.

The EU is trying to regulate AI to protect its citizens. However, this works as a double-edged sword because it also limits progress. In your opinion, is this strategy reasonable?

I never really bought into the stifling-innovation argument. Regulation can help drive innovation along functional rails when it's appropriately designed. And doing that is increasingly important, because going off the rails might derail the entire world. And the train is getting faster and faster.

I think the AI Act is a very good starting point, like the GDPR. Could the GDPR be done better? Certainly, and the same, I'm sure, will be true for the AI Act. But it's a good starting point.

What's missing is more attention to these other issues: the concentration of power, where the funds go, which companies we are helping to innovate, and what Europe itself is doing to innovate.

With all this technology and automation, in your opinion, will humans have a place in the future labor market? What will we have left to do?

That's a question that is far too complex for anybody to answer. Nobody knows. AI, as the most advanced form of automation we have come up with, will likely automate, if not all jobs on Earth, then a great majority of them. And it will make it increasingly difficult to find useful human jobs. So, it will definitely change the labor landscape.

The key question there is who will hold power in that scenario. If nobody works and we can enjoy life because AI, robots, and whatever technologies are coming up next will provide what we need, who will own those technologies? Will it be distributed and decentralized, or will it be increasingly concentrated in the hands of a few?

Is there something we could do to prepare personally? What skills will be needed, and what kind of mindset might be useful going into the future?

It's increasingly essential to restore an attitude of critical thinking. Because we are submerged with massive amounts of information, it’s especially important that we broaden our horizons. Our society and civilization have developed largely thanks to specialization. But we're now getting hyper-specialized. And very often, people in one industry cannot see outside the horizons and boundaries of that industry. Computer scientists cannot see outside of computer science, economists and sociologists do not possess an in-depth understanding of computer science, and so on.

We need to broaden our horizons so that we all have a basic preparation across the entire spectrum of knowledge. That's the best skill we can have in the future to understand the world, because the world is getting more complex.

What would be a clear sign that things aren’t going right, and we should press the stop button immediately?

The most visible sign we're seeing right now is the tendency across the entire Western world to go towards autocratic forms of government. The fall of democracy, I think, is possibly the most visible danger on the horizon right now. So, before it is too late, we need to act and understand that democracy cannot hold if power is concentrated beyond a certain point and if economic inequality is getting far too large.

Is the technological revolution related to this? Does it help to facilitate those undemocratic tendencies?

Indirectly, I think it does. If it fuels too large an economic inequality by concentrating power and knowledge in the hands of a small layer of the population, for example the layer in charge of technology, then the lower strata of the population, who don't enjoy the growth and benefits of technology, will be more likely to follow populists who promise easy solutions, as has often happened in the past. It's an indirect effect, but it's still very real.

Regarding your upcoming book, would you like to mention any key takeaways readers could expect?

The underlying takeaway is that we are confronted with a tremendously fast revolution. And in the collective imagination revolutions are always thought to lead mainly to positive outcomes.

However, we have to be thoughtful about it, because most revolutions in the past have also led to decades of social upheaval, trouble, and problems of all kinds. And they never went as predicted. So, we must be thoughtful and prepare for things not to go in the direction we want.

We can dream of utopias built with the technologies we are creating, and that's certainly fine, but we also need to be realistic and prepare for things to go differently. We must understand that these utopias will never materialize exactly as we imagine them.