Humans manipulated into feeling sorry for robots


A researcher has shown that human subjects can be manipulated into feeling sorry for robots that display signs of emotional distress, and she warns that this compassion could be exploited for profit.

Anyone who’s ever had a Tamagotchi – a virtual pocket pet that’s apparently making a comeback – will know how easy it is to get attached to a digital companion, even if it’s just a pixelated blob on a tiny screen.

Marieke Wieringa, a researcher at the Netherlands-based Radboud University, says it is only a matter of time before companies start exploiting human compassion, turning emotional manipulation by robots into a revenue model.

“People were obsessed with Tamagotchis for a while: virtual pets that successfully triggered emotions. But what if a company made a new Tamagotchi that you had to pay to feed?” Wieringa said.

In a study carried out as part of her PhD thesis, she found that people could be manipulated into feeling sorry for a robot that exhibited signs usually associated with pain, such as trembling arms, sad eyes, and pitiful sounds.

“If a robot can pretend to experience emotional distress, people feel guiltier when they mistreat the robot,” Wieringa explained.

Her research team asked study participants to “violently” shake a robot and found that people were less willing to repeat the action if the robot responded in a way that suggested it was in pain.

“If we asked the participants to shake a robot that showed no emotion, then they didn’t seem to have any difficulty with it at all,” she said.

In one of the tests, participants were asked to choose between completing a boring task and giving the robot a shake: the longer they shook the robot, the less time they had to spend on the task.

“Most people had no problem shaking a silent robot, but as soon as the robot began to make pitiful sounds, they chose to do the boring task instead,” Wieringa explained.

While “emotional” robots have legitimate uses in fields such as therapy, and a robot that convincingly signals that violent treatment is unacceptable can serve a purpose, governments should establish clear rules on when such emotional displays are appropriate, she said.

“We need to guard against risks for people who are sensitive to ‘fake’ emotions. We like to think we are very logical, rational beings, but at the end of the day, we are also led by our emotions. And that’s just as well, otherwise we’d be robots ourselves,” Wieringa said.

An earlier study, carried out by researchers in the US, showed that people will tolerate a robot that lies to spare someone’s feelings, but not one that lies in order to manipulate them.

“We’ve already seen examples of companies using web design principles and AI chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions,” said Andres Rosero, PhD candidate at George Mason University and lead author of that study.