As AI becomes a central part of our working lives, more attention is being given to the interface between us and the technology. We can draw important lessons from the interface between humans and machines in previous technological revolutions.
For instance, research from the University of Lincoln in the UK examined the interface between humans and robots in a professional context. The study suggests that robots with the same flaws we humans so frequently exhibit tend to be viewed more favorably.
The researchers suggest that we are much more likely to form productive working relationships with robots when those robots have flaws and imperfections.
“Our research explores how we can make a robot’s interactive behavior more familiar to humans, by introducing imperfections such as judgemental mistakes, wrong assumptions, expressing tiredness or boredom, or getting overexcited,” the researchers explain.
“By developing these cognitive biases in the robots – and in turn making them as imperfect as humans – we have shown that flaws in their ‘characters’ help humans to understand, relate to and interact with the robots more easily.”
A second study confirmed this general preference for working alongside robots with the same kinds of flaws we ourselves have. The study, which was published in Frontiers in Robotics and AI, tested how humans respond to robots that are ‘perfect’ versus those with more human-like flaws.
As before, the results suggest we take to the flawed robots more easily than the perfect ones.
“Our results show that decoding a human’s social signals can help the robot understand that there is an error and subsequently react accordingly,” the authors say.
Flawed AI
So, does the same hold for the kinds of AI we're increasingly seeing in the workplace today? Research from UC Berkeley suggests the answer is very much yes. The study found that when AI makes mistakes, we tend to view it as more human and, therefore, easier to work alongside.
For example, people see customer service agents who make and correct typos as more human and sometimes even more helpful.
“For decades, people worked to make machines smarter and less prone to errors,” the authors explain. “Now that we’re living through real-world Turing tests in most of our online interactions, an error can actually be a beneficial cue for signaling humanness.”
The researchers conducted five studies with over 3,000 participants. In all studies, participants rated agents who made and corrected typos as more human and warmer than those who made no typos or left them uncorrected.
The effect was strongest when participants didn't know whether the agent was a bot or a human, but it still held even when they were told the agent was a bot.
Making mistakes
The study found that when people saw the artificial agent make and then correct a typo, they believed the agent would be more helpful to them in their own work.
It's an example of the "Pratfall Effect," a term coined in the 1960s to describe how making mistakes can increase our likability. This is seen in the perceived persona of people like former UK Prime Minister Boris Johnson.
It's something of a contentious notion, as the researchers cite other studies showing that a bumbling persona can reduce our perceptions of that person's competence and intelligence. They argue, however, that the key is not whether one makes a mistake but what happens afterward.
They suggest that making mistakes is fine, provided you correct them. Indeed, the very act of correcting oneself can increase our perceived humanity, as it shows we care enough about how we're perceived to put in that effort.
Striking the balance
Suffice it to say, the researchers aren't advocating deliberately creating AI systems that will make mistakes, as doing this could easily be construed as manipulation and raise various ethical concerns.
They highlight that there are various policies being introduced around the world to ensure that chatbots are identified as such so that customers know when they're talking to a real person and when to a bot. Even with such disclosures, however, the ability and willingness to accept and correct honest mistakes could help to build a relationship with the customer.
What they do recommend is designing chatbots to incorporate various "humanizing cues," which could include the ability to fix mistakes while remaining as transparent to the customer as possible. These cues can help the company connect with the customer and offset some of the depersonalization that results from introducing chatbots in the first place.