
Since the launch of ChatGPT to a mass audience in late 2022, most organizations have dipped their toes in the water with generative AI.
A recent study from ESMT Berlin reminds us that while AI can make us more effective and efficient, it's important that we maintain control over how we use it. Too much AI can blunt our decision-making prowess.
The study found that various organizational factors can result in an over-reliance on AI-based tools in the decision-making process, with high levels of delegation particularly likely to result in high usage levels.
Organizational dynamics
The researchers identified a number of key dynamics between managers and decision-makers that tend to influence AI adoption. For instance, they found that managers often penalize decision-makers who override the AI tool's recommendations, especially when the decision turns out badly.
This has the perhaps unintended consequence of pushing decision-makers to align with the AI's recommendations, regardless of whatever private knowledge or judgment they might bring to the table. Over time, this inevitably leads to suboptimal outcomes.
The researchers are at pains to point out that these outcomes aren't a result of the technical capabilities (or limitations) of the AI tools themselves but are instead byproducts of the behavioral incentives that organizations tend to have in place to motivate employees.
Making things better
So, how can things be changed so that AI doesn't undermine our cognitive abilities but rather supports them? A good first step is to change how decisions are made and observed. The key is to ensure that managers are better placed to evaluate the efficacy of decisions, which often means going beyond merely observing the outcomes.
"Organizations should equip managers with tools to better evaluate the rationale behind decisions that deviate from AI recommendations, especially when such deviations are supported by tacit knowledge," the researchers explain.
The next step is to ensure that incentive structures are as fair as possible. This might involve redesigning them so that decision-makers are not penalized for using their own judgment over that of the AI. Rewards should also be aligned with the quality of the decision-making itself rather than just the outcome.
"Incentive structures that disproportionately reward adherence to AI over decision quality can exacerbate over-reliance and undermine decision performance," the researchers continue.
Human factors
The researchers also urge organizations to train managers to spot, and then overcome, the biases that lead them to prefer decisions made by algorithms over those made by humans. This deference to the machine is often a blame-avoidance tactic that needs to be addressed, and it's important that managers are aware of the strengths and limitations of AI systems.
"Training programs for managers should emphasize understanding AI limitations and recognizing the value of human judgment in complex or ambiguous scenarios," the researchers explain.
This can be supported by clear policies and guidelines that help managers and decision-makers understand when AI can, and indeed should, be overridden. These should include auditable frameworks that balance accountability with trust in human expertise.
"Establishing clear organizational guidelines on when and how algorithmic advice can be overridden is crucial to reducing suboptimal adherence to machine recommendations," the researchers note.
Working together
Ultimately, the goal should be to create systems and structures that allow man and machine to work effectively together. This should involve giving decision-makers opportunities to feed their own expertise and nuanced contextual information into the system, or even to adapt AI-driven recommendations.
The goal should be to have AI enhance employees' capabilities rather than replace them. The worst outcome is when employees are infantilized and enslaved by the machines that should be liberating them.
"The integration of AI into decision-making should prioritize systems that augment human expertise rather than replace it, enabling meaningful collaboration between human and machine," the authors conclude.
It's clear that while there has been an inevitable focus on the technological capabilities of generative AI, we should also focus on the behavioral and organizational factors that will impact its usage. We've already seen that excessive reliance on AI can dull our own skills, and the same applies to managers over-relying on technology at the expense of the experts in their teams.
If we're to strike the right balance, we need to learn to appreciate the talents man and machine bring to the table and get a better alignment between our respective strengths.