Some AI chatbot questions are more eco-friendly than others


Complex queries requiring AI chatbots like OpenAI’s ChatGPT to think logically and reason produce more carbon emissions than other types of questions, a new study has claimed.

According to researchers at Germany’s Hochschule München University of Applied Sciences, every query typed into a large language model (LLM) requires energy and leads to carbon dioxide emissions, but the exact amount depends on the chatbot, the user, and, of course, the subject matter.

Their study, published in a Frontiers journal, compared 14 AI models and found that answers requiring complex reasoning cause more carbon emissions than simple answers.

More straightforward subjects, such as historical facts, are more eco-friendly, while queries that invoke lengthy reasoning, such as philosophy or abstract algebra, cause far greater emissions.

That’s why the researchers behind the study recommend that frequent users of AI chatbots who want to limit carbon emissions think carefully about which questions they really need the models to answer.


“The environmental impact of questioning trained LLMs is strongly determined by their reasoning approach, with explicit reasoning processes significantly driving up energy consumption and carbon emissions,” said first author Maximilian Dauner.

“We found that reasoning-enabled models produced up to 50 times more CO2 emissions than concise response models.”

What exactly happens when a user puts a question to an AI chatbot is this: the words, or parts of words, that make up the question and the chatbot’s answer are converted into tokens, which the LLM processes as strings of numbers. Processing those tokens consumes energy and therefore produces carbon emissions.
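As a rough illustration of that tokenization step, the sketch below uses OpenAI’s open-source tiktoken tokenizer to show how a question is split into tokens and mapped to integers. The tokenizer and encoding name are assumptions chosen for the example, not details taken from the study.

```python
# Illustrative sketch (not the study's method): counting the tokens
# a question turns into, using the open-source tiktoken tokenizer.
import tiktoken

# "cl100k_base" is the encoding used by several recent OpenAI models.
encoding = tiktoken.get_encoding("cl100k_base")

question = "When did the French Revolution begin?"
token_ids = encoding.encode(question)

print(token_ids)       # a list of integers, one per token
print(len(token_ids))  # how many tokens the model must process
```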

The study found that reasoning models create an average of 543.5 tokens per question, while concise models require only 40.


“A higher token footprint always means higher CO2 emissions,” explains the research paper.
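To make that relationship concrete, the back-of-the-envelope sketch below scales a per-token emission factor by the average token counts reported above. The per-token figure is a hypothetical placeholder invented for illustration; real values vary by model, hardware, and electricity mix, and only the token counts come from the article.

```python
# Back-of-the-envelope sketch of why token count drives emissions.
# GRAMS_CO2_PER_TOKEN is a hypothetical placeholder, NOT a figure
# from the study; only the token counts come from the article.
GRAMS_CO2_PER_TOKEN = 0.005  # hypothetical illustrative value

reasoning_tokens = 543.5  # average tokens per question, reasoning models
concise_tokens = 40       # average tokens per question, concise models

reasoning_emissions = reasoning_tokens * GRAMS_CO2_PER_TOKEN
concise_emissions = concise_tokens * GRAMS_CO2_PER_TOKEN

print(f"Reasoning model: ~{reasoning_emissions:.2f} g CO2 per question")
print(f"Concise model:   ~{concise_emissions:.2f} g CO2 per question")
print(f"Token ratio: ~{reasoning_tokens / concise_tokens:.1f}x")
```

Whatever the true per-token figure is, the ratio between the two models stays the same, which is the point of the quote above: more tokens always means proportionally more emissions.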

Ironically, the longer answers aren’t necessarily more correct: elaborate detail, it turns out, is not essential for accuracy.

For example, the most accurate model among those analyzed was the reasoning-enabled Cogito model with 70 billion parameters, reaching a mere 84.9% accuracy. The model, however, produced three times more carbon emissions than similar-sized models that generated concise answers.

The researchers said they hope their work will lead people to make more informed decisions about their own use of AI, a technology already well documented to be extremely energy-hungry.

“Users can significantly reduce emissions by prompting AI to generate concise answers or limiting the use of high-capacity models to tasks that genuinely require that power,” Dauner pointed out.
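In practice, that advice amounts to asking for short answers and reserving heavyweight models for tasks that need them. The sketch below shows one way a user might do this with the OpenAI Python SDK; the model name, token cap, and system prompt are assumptions for illustration, not recommendations from the study.

```python
# Sketch of "ask for concise answers" prompting, using the OpenAI Python SDK.
# The model name, max_tokens value, and system prompt are illustrative
# assumptions, not figures or settings from the study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",   # assumed: a smaller, lower-capacity model
    max_tokens=100,        # cap the length of the generated answer
    messages=[
        {"role": "system", "content": "Answer in one or two sentences."},
        {"role": "user", "content": "When did the French Revolution begin?"},
    ],
)

print(response.choices[0].message.content)
```

Capping the response length and steering the model toward brevity directly limits the number of tokens it generates, which, per the study, is what drives the emissions.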