Is it ever acceptable for a robot to lie? Study finds it is – sometimes


People will tolerate a robot that lies to spare someone’s feelings but not a machine that does so to manipulate, a new study has shown.

According to the study, people generally expect robots to follow the same moral rules they do when it comes to lying. For instance, many would approve of a caretaker robot reassuring a woman with Alzheimer’s that her late husband will soon be home.

This was one of three scenarios presented to 500 research participants, who were asked to rate their comfort level with different forms of deceptive robot behavior.

In the case of external state deception – a lie about the world beyond the robot – participants found the lie justifiable because it spared the patient unnecessary pain, prioritizing kindness over honesty.

However, they were less forgiving of a retail robot pretending to feel pain while moving furniture, perceiving it as manipulative. This behavior, known as superficial state deception, involves a robot exaggerating its capabilities.

The third scenario covered hidden state deception, where a robot's design conceals its capabilities. In this case, a housekeeping robot was secretly recording a visitor – the form of deception that participants disapproved of the most.

Researchers selected these scenarios to reflect the real-world use of robots in the medical, retail, and cleaning fields.

“I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers,” said Andres Rosero, PhD candidate at George Mason University and lead author of the paper.

“With the advent of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users,” Rosero said.

Call for regulation

According to the researchers, participants could justify all three types of deception to some degree. However, even when they acknowledged that the housekeeping robot might be filming for security reasons, most still found it unacceptable.

Roughly half of the participants also deemed superficial state deception – where the robot lies about feeling pain – unjustifiable. They tended to blame robot developers and owners for such behavior, not the machines themselves.

“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” said Rosero.

“We’ve already seen examples of companies using web design principles and AI chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions,” he said.

The paper was published in Frontiers in Robotics and AI, a peer-reviewed journal.