How a lack of AI understanding fuels its adoption


Leaders play an important role in encouraging tech adoption within their organizations. Career stage is a crucial factor: those at an earlier stage are more inclined to experiment with new technology than their more senior peers, who have more to lose if those experiments go wrong.

Research from Bocconi University suggests an additional reason why more junior staff may be more willing to adopt technologies like AI. After analyzing the adoption of AI in people's daily lives, the researchers found that those with less understanding of the technology were far more likely to adopt it.

Ignorance is bliss


It's a phenomenon that the researchers catchily refer to as the “lower literacy-higher receptivity” link. They found the link across all of the groups, settings, and countries they analyzed, with lower levels of AI literacy consistently predicting a greater willingness to adopt the technology.

"People with lower AI literacy do not exhibit greater receptivity because they perceive AI as more capable or ethical, nor are they generally less fearful of AI," the researchers explain.

"Instead, we provide support for the hypothesis that people with lower AI literacy perceive AI to be more magical, and thus experience greater feelings of awe when thinking about AI completing tasks, which explains their greater receptivity towards using AI-based products and services."

In other words, when we lack a true appreciation of what AI can and cannot do, we are amazed when it performs tasks we might have thought were the sole preserve of humans.


Behind the curtain

Of course, like the Wizard of Oz, AI doesn't really possess human qualities. Many people don't fully grasp that, however, so when a chatbot appears to be empathetic, it's easy to assume it actually is.

Those with more knowledge understand that generative AI is essentially a vast pattern-matching process, one that relies on huge quantities of training data and algorithms to predict how things should fit together. This understanding dispels much of the mystery behind AI's seemingly magical results.


The researchers took a detailed approach to measuring AI literacy to ensure that the findings hold across a wide range of variables, including seniority and nationality.

"We employ multiple measurements of AI literacy to ensure that the results are not a function of any specific way of assessing it: a third-party measure of AI expertise assessed through AI education, skills, and jobs (study 1), a 25-item measure developed by the authors (studies 2, 4, and 5), and a 17-item measure developed by AI (studies 3 and 6)," the researchers explain. "Finally, we operationalized AI receptivity in several ways, including general interest, past usage frequency, and preference for AI over humans in completing tasks."

"Replicating" humans

The results show that the receptivity link is greatest when AI is deployed in areas commonly associated with humans, such as providing mental health support.

When the technology was deployed in areas that weren't perceived as the domain of humans, such as data analysis, this phenomenon didn't occur, and those with higher levels of AI literacy were more likely to find the technology useful because they were less taken by any perceived "magic."

The results provide fresh insight into how and why people might respond to new technology and why responses differ so much across the population. They remind us that people's awareness and understanding of AI play a huge role in their likelihood of adopting it.

"Our results suggest that attempts to increase the adoption of AI-based products and services through targeting consumers with greater AI literacy or increasing knowledge of AI may not be effective," the researchers explain.

"Finally, our results indicate that a growing understanding of AI over time (as AI education becomes more widespread) may impact the likelihood of adoption of AI-powered tools."

The findings certainly make sense to me as a writer, as many organizations have been using generative AI to produce articles since the release of ChatGPT in late 2022. It's quite likely that marketing and PR teams fall into the less knowledgeable camp when it comes to the technology behind generative AI, so they may well view the tech as a magical means of increasing their output.

The study reminds us of the psychological and knowledge-based factors that help to shape AI adoption. The findings also point to the risk of deskilling that can emerge when we over-rely on AI.


When individuals adopt AI without understanding its limitations or inner workings, they may unknowingly relinquish critical skills and judgment, instead putting blind faith in the technology to perform tasks that might previously have been done by human experts. This over-reliance could lead to a workforce ill-equipped to intervene when AI systems fail or make errors, particularly in high-stakes domains like healthcare or finance.

As AI becomes a growing feature of the modern workplace, it's essential to strike a balance that leverages the technology's (true) capabilities while ensuring that human skills are preserved and developed alongside it.

A workforce with both AI literacy and domain expertise will not only make better use of AI but also guard against its potential downsides. This balance is critical for fostering a future where humans and AI complement each other, rather than one replacing the other.