AI can outperform humans. But at what cost?


Artificial intelligence (AI) can perform some tasks better than humans. But it leaves a much bigger carbon footprint than people do.

“Artificial intelligence has made remarkable advances in recent years. And now it’s able to achieve performance close to or even better than humans on certain tasks,” Manuel Le Gallo, a researcher at IBM, said during MIT Tech Review’s Future Compute conference.

And yet, great innovations come with substantial environmental costs. To give just one example, training a single natural language processing model emits as much carbon as five cars do over their entire lifespans.

“Energy consumption will blow up if we don’t do anything to mitigate this issue,” Le Gallo said.


Staggering consumption of power

One recent example of colossal power consumption is IBM’s Project Debater - the first AI system that can debate humans on complex topics. Project Debater digests massive amounts of text, constructs a well-structured speech on a given topic, delivers it with clarity and purpose, and rebuts its opponent. It performed remarkably well against Harish Natarajan, who holds the world record for the most debate competition victories. The problem is that it consumes a lot of energy.

“This machine that was used to run Project Debater required almost a hundred processor cores and one terabyte of memory. For this one debate, it would consume more than a hundred times the power of the human brain. On top of that, it had to rely on IBM Cloud for the speech-to-text translation, so this was not done by the machine,” Le Gallo explained.
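For a sense of scale, here is a back-of-the-envelope check of that comparison - a minimal sketch assuming the widely cited figure of roughly 20 watts for the human brain, which is not a number given in the article itself:

```python
# Back-of-the-envelope check of Le Gallo's power comparison.
# Assumption: the human brain runs on roughly 20 W - a widely used
# estimate, not a figure from the article.
BRAIN_POWER_W = 20

# "More than a hundred times the power of the human brain"
debater_power_w = 100 * BRAIN_POWER_W

print(f"Implied machine power: at least ~{debater_power_w / 1000:.0f} kW")
# -> Implied machine power: at least ~2 kW
```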

According to him, this is just one example that clearly shows AI has an energy consumption problem - and it extends to AI processing in general.

Recent research by IBM also showed that training a single image recognition model takes the equivalent of two weeks of household energy consumption.


Another study, by the University of Massachusetts, put a number on AI’s carbon footprint: training one natural language processing model emits as much carbon as five cars over their entire life cycles.
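The five-car comparison traces back to the figures that study reports (Strubell, Ganesh and McCallum, 2019). A minimal sketch of the arithmetic behind it, using the reported values:

```python
# Figures reported by the University of Massachusetts study
# (Strubell, Ganesh & McCallum, 2019), in pounds of CO2-equivalent.
NLP_TRAINING_EMISSIONS = 626_155  # Transformer trained with neural architecture search
CAR_LIFETIME_EMISSIONS = 126_000  # one average car over its lifetime, fuel included

print(f"Equivalent number of cars: {NLP_TRAINING_EMISSIONS / CAR_LIFETIME_EMISSIONS:.1f}")
# -> Equivalent number of cars: 5.0
```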

As AI is on the rise and moves from research to commercialization, this is something we should worry about, Le Gallo reckons.

“Clearly, if we want to sustain growth in AI, we need to make our systems significantly more energy-efficient,” he said.

According to PCMag, the recent surge in AI’s power consumption is largely caused by the rise in popularity of deep learning, which processes vast amounts of data.

The source of inefficiency and the solution

Most inefficiencies arise from the architecture of our computers. Le Gallo explained that standard computers are based on the von Neumann architecture, meaning that their computing and memory units are physically separated.

“Whenever we want to do an operation in this system, we have to transfer data from the memory to the processor, which creates a bottleneck. Moving data from the memory to the processor consumes 100-1000 times more energy than a processor operation. The processor and memory alone are quite efficient, but what costs most of the energy when we do a task is the data transfers,” the researcher explained.
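A toy energy model makes that bottleneck concrete. In the sketch below, the 100-1000x transfer-to-compute ratio is the one Le Gallo cites; the one-unit cost per processor operation is an arbitrary illustrative assumption:

```python
# Toy energy model of the von Neumann bottleneck.
# Assumption: 1 energy unit per processor operation; the 100-1000x
# cost ratio for memory transfers is the range Le Gallo cites.
def task_energy(ops: int, transfers: int, transfer_ratio: float) -> float:
    """Total energy, in processor-operation units, for a task that
    performs `ops` arithmetic operations and `transfers` memory moves."""
    return ops + transfers * transfer_ratio

# A task that moves one operand per operation is dominated by data movement:
ops = transfers = 1_000_000
ratio = 100.0  # lower end of the quoted 100-1000x range
total = task_energy(ops, transfers, ratio)
print(f"Share of energy spent moving data: {transfers * ratio / total:.1%}")
# -> Share of energy spent moving data: 99.0%
```

At the upper end of the range (a 1000x ratio), data movement accounts for about 99.9 percent of the total - which is why cutting transfers matters far more than speeding up the processor itself.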

What if you could use the memory itself to process the data? That’s the idea behind in-memory computing - to treat memory not just as a place to store data, but as an active participant in computing.
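The textbook workload for this approach is matrix-vector multiplication: store the matrix as conductances in an analog memory array, apply the input vector as voltages, and the output currents give the product without shuttling the matrix to a processor. Below is a minimal simulation of that idea; the Gaussian weight noise is an assumed stand-in for analog device imprecision, not a model taken from IBM’s work:

```python
import numpy as np

def crossbar_matvec(matrix, vector, noise_std=0.02, rng=None):
    """Simulate an analog in-memory matrix-vector product.

    The matrix lives in the memory array as device conductances, so
    the multiplication happens in place - but the stored values are
    imprecise, modeled here as Gaussian noise on the weights.
    """
    rng = rng or np.random.default_rng(0)
    noisy_matrix = matrix + rng.normal(0.0, noise_std, matrix.shape)
    return noisy_matrix @ vector  # computed "inside" the array

A = np.random.default_rng(1).normal(size=(4, 4))
x = np.ones(4)
print("exact:    ", A @ x)
print("in-memory:", crossbar_matvec(A, x))
```

The result is approximate - which is exactly the accuracy problem that mixed-precision in-memory computing, described next, is designed to fix.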

To address the problem, IBM has developed mixed-precision in-memory computing. Le Gallo, one of the authors of the research paper on the technique published in Nature, explained the idea behind it.

“We use computational memory units to compute an approximate solution and then just a little bit of high precision digital computing to refine this solution until it’s accurate enough. By making sure that the bulk of computation is still done in computational memory units, we can ensure that we will have both high accuracy and energy savings resulting from this system,” he said.
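Below is a minimal sketch of that refinement loop, treating a noisy linear solve as a stand-in for the computational memory unit; the noise model, tolerance, and test system are assumptions for illustration, not details from the paper:

```python
import numpy as np

def mixed_precision_solve(A, b, noise_std=0.05, tol=1e-10, max_iters=100):
    """Iterative refinement: an imprecise (noisy) solver does the bulk
    of the work; small high-precision digital steps fix the residual."""
    rng = np.random.default_rng(0)
    x = np.zeros_like(b)
    for _ in range(max_iters):
        r = b - A @ x                      # exact residual (digital, high precision)
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        A_noisy = A + rng.normal(0.0, noise_std, A.shape)
        dx = np.linalg.solve(A_noisy, r)   # approximate correction ("in memory")
        x = x + dx
    return x

A = np.diag([4.0, 3.0, 2.0]) + 0.1  # small, well-conditioned test system
b = np.array([1.0, 2.0, 3.0])
x = mixed_precision_solve(A, b)
print("relative residual:", np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Each pass through the loop shrinks the error, so the precise digital hardware only ever computes residuals while the cheap, imprecise memory units carry the computational load - the division of labor Le Gallo describes.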

In 2019, IBM and its partners launched the AI Hardware Center to accelerate the development and commercialization of AI cores. Within that center, engineers are now developing future chips that will lean on this in-memory computing concept.

IBM is just one of a handful of companies creating hardware for AI algorithms.

