Distrustful employees sabotaging AI adoption by manipulating data, study claims


Companies are pouring billions into AI adoption, yet an estimated 80% of these initiatives fail, with many employees distrusting the technology and sabotaging efforts to implement it.

Employees who distrust new AI tools have been known to feed them biased or imbalanced data, sabotaging their performance. This creates a ‘vicious cycle’: manipulated data degrades the AI’s performance, other members of the organization lose trust, and use of the AI declines, according to a paper by Natalia Vuori and fellow researchers at Aalto University in Finland, published in the Journal of Management Studies.

“Under different trust configurations, organizational members can engage in other types of behaviors beyond AI use,” the researchers claim. “Individual behaviors ultimately cause AI technology to lose users and impact the overall success of AI adoption within an organization.”


After interviewing employees at a medium-sized software development firm, the researchers identified four AI-trust configurations: full trust (high cognitive and high emotional trust), full distrust (low cognitive/low emotional), uncomfortable trust (high cognitive/low emotional), and blind trust (low cognitive/high emotional).

Full trust is self-explanatory: these employees are the most likely to embrace the new technology. The blind trust group accepted AI uncritically, placing excessive faith in its outputs. The uncomfortable trust group recognized AI’s capabilities but harbored lingering doubts and was reluctant to use the tools.

Those exhibiting full distrust completely shunned the AI. Some respondents admitted they engaged “in manipulating, confining, or withdrawing” their digital footprints when using an AI tool.

“AI adoption isn’t just a technological challenge – it’s a leadership one. Success hinges on understanding trust, addressing emotions, and meeting employees where they are,” said Vuori.

“Without this human-centered approach, even the smartest AI will fail to deliver on its promise.”

The researchers argue that neither cognitive nor emotional trust alone can fully account for AI adoption, so different combinations of the two must be considered. Successful adoption, they say, requires training and leadership styles tailored to individual organizational members.


To increase cognitive trust, leaders can provide training sessions, develop and communicate AI policies, and set realistic expectations.


“Because cognitive trust heavily depends on AI performance, leaders should carefully manage expectations regarding the performance of AI,” the paper reads.

Emotional trust can be built by creating a psychologically safe environment, using AI ethically and responsibly, and fostering excitement about and pride in the technology.

“Leaders could share their excitement about AI and pride in using AI with organizational members,” the researchers suggest.

The study sample was limited to a single organization, so its insights will need to be validated by future research.