Why AI still can’t smell the roses – the limits of machine understanding


AI may know what a flower is – but it can’t smell one. A new study reveals that even advanced models like GPT-4 struggle to understand everyday concepts rooted in human senses, exposing the limits of machine intelligence.


What’s the biggest gap between humans and AI? Perhaps it’s that AI can describe a flower – but can’t smell one.


A new study from Ohio State University, published in Nature Human Behaviour, tested AI’s conceptual understanding against humans’ sensory-rich comprehension.

The key finding: AI models lack the experiential depth humans rely on to understand many everyday concepts.

Researchers tested 4,442 words using two types of association norms. Words like flower or hoof, rooted in both sensory and motor experience, proved difficult for AI.

Tony Blair smelling flowers.
Image: Chris Ison via Getty Images

They used two word-rating systems: Glasgow Norms, which measure how we feel and imagine words, and Lancaster Norms, which track how words relate to our senses and body.

Where AI faltered was in understanding deeply grounded, embodied concepts. It could mimic human reasoning for abstract, emotionally neutral, or non-physical terms – like democracy, justice, or purpose – but stumbled on sensory-rich ones, like flowers.

That’s because “flower” isn’t just a definition – it’s a smell, a texture, a memory, a motion.

Cognitive mismatch


Because AI interprets the world differently from us, its responses may feel cold, strange, or inappropriate.

This could pose real risks as AI is deployed in domains like healthcare, education, or customer service.

“If AI construes the world in a fundamentally different way from humans, it could affect how it interacts with us,” commented Qihui Xu, lead author of the study and postdoctoral researcher.

“The human experience is far richer than words alone can hold,” Xu added.

A boy smelling a flower.
Image: Getty Images

The future – beyond language

The study found that AI models trained on both images and text (multimodal training) performed slightly better at understanding visual concepts.

The findings also raise a philosophical question: what does it mean to know something?

Even as AI training expands to new methods – sensory input, embodied AI, and robotics – some obvious experiences remain out of reach:

“A large language model can’t smell a rose, touch the petals of a daisy, or walk through a field of wildflowers,” offered Xu.


Until AI can experience the world, it may continue to misunderstand what humans mean – especially when we talk about things we feel.

justinasv, Niamh Ancell, Gintaras Radauskas, Izabelė Pukėnaitė