Forget Terminator – this robot learned how to dance and hug to find friends


Engineers from the University of California, San Diego, and MIT have trained a humanoid robot to perform a variety of movements. The team believes this could improve human-robot relationships.

Researchers at the University of California, San Diego, and Massachusetts Institute of Technology (MIT) have achieved the next milestone in robot kinesthetics.

While the motions of real humans and animals are rich, diverse, and expressive of intent or emotional valence, robot movements tend to be rigid and repetitive.


The scientists set out to tackle this challenge. They trained a human-sized robot to perform expressive movements, including dance moves and interaction gestures such as waving, giving a high five, and hugging.

Most importantly, machine learning helped the robot adapt to changing situations and commands, and even follow dance moves alongside its flesh-and-blood partner.

Training robots on human motion datasets

The scientists developed a method called Expressive Whole-Body Control (ExBody). Instead of making the robot copy a reference motion exactly, the ExBody controller takes both a reference motion and a root movement command as input.
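In rough terms, the controller's input might look like the sketch below: a reference motion for expressive upper-body imitation paired with a root movement command describing where and how fast the whole body should travel. The structure and field names here are illustrative assumptions, not the paper's actual interface.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RootCommand:
    """Where and how fast the robot's base should move (illustrative fields)."""
    forward_speed: float   # m/s
    lateral_speed: float   # m/s
    turn_rate: float       # rad/s

@dataclass
class ControllerInput:
    """Hypothetical ExBody-style input: imitate the reference with the upper body,
    follow the root command with the legs."""
    ref_upper_body_pose: List[float]   # target joint angles for arms, torso, head
    root_command: RootCommand

# Example: wave with the upper body while walking forward slowly.
step_input = ControllerInput(
    ref_upper_body_pose=[0.3, -1.2, 0.8, 0.0],
    root_command=RootCommand(forward_speed=0.4, lateral_speed=0.0, turn_rate=0.0),
)
```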

The robot was trained using reinforcement learning (RL) on a dataset that included various human body motions. RL is a type of machine learning in which a computer program called an agent learns to make decisions by trying different actions and getting rewards or penalties based on the outcomes.
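As a toy illustration of that trial-and-reward loop, the Python sketch below has an agent repeatedly acting in a simple simulated environment and nudging its preferences based on the rewards it receives. The environment, agent, and learning rule are minimal placeholders, not the team's actual training setup.

```python
import random

class SimEnv:
    """Toy stand-in for a physics simulator: the 'state' is a number
    the agent should push toward zero."""
    def reset(self):
        self.state = random.uniform(-1.0, 1.0)
        return self.state

    def step(self, action):
        self.state += action
        reward = -abs(self.state)          # penalty grows with distance from the target
        return self.state, reward

class Agent:
    """Toy agent: keeps a running value estimate for each discrete action."""
    def __init__(self, actions):
        self.actions = actions
        self.values = {a: 0.0 for a in actions}

    def act(self, state):
        # Mostly pick the action with the best estimate, sometimes explore.
        if random.random() < 0.1:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.values[a])

    def learn(self, action, reward, lr=0.1):
        # Nudge the chosen action's value estimate toward the observed reward.
        self.values[action] += lr * (reward - self.values[action])

env, agent = SimEnv(), Agent(actions=[-0.1, 0.0, 0.1])
for episode in range(200):
    state = env.reset()
    for _ in range(20):
        action = agent.act(state)
        state, reward = env.step(action)
        agent.learn(action, reward)
```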

The agent aims to maximize its total reward over time by determining which actions work best in different situations. During training, the robot was rewarded for imitating a wide range of complex human motions with its upper body for expressiveness, while the requirements on its leg movements were kept looser.

Dancing humanoid robot
Source: research paper

The reward system for the legs focused on following the root movement commands from the reference motion rather than matching every joint angle exactly.
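Concretely, one can picture the reward as a weighted sum of an upper-body imitation term and a root-tracking term for the legs. The sketch below expresses that split in Python; the weights, function names, and exact terms are illustrative assumptions rather than the paper's actual reward design.

```python
import numpy as np

def exbody_style_reward(upper_joints, ref_upper_joints,
                        root_velocity, commanded_velocity,
                        w_imitate=1.0, w_track=1.0):
    """Illustrative two-part reward (assumed structure, not the paper's exact terms).

    - Upper body: rewarded for closely imitating the reference joint angles.
    - Lower body: rewarded only for tracking the commanded root velocity,
      not for matching every leg joint angle.
    """
    imitation_error = np.mean((upper_joints - ref_upper_joints) ** 2)
    tracking_error = np.sum((root_velocity - commanded_velocity) ** 2)

    r_imitate = np.exp(-5.0 * imitation_error)   # near 1 when imitation is exact
    r_track = np.exp(-2.0 * tracking_error)      # near 1 when velocity matches the command

    return w_imitate * r_imitate + w_track * r_track

# Example: upper body nearly matching the reference, root slightly off-command.
upper = np.array([0.10, -0.42, 0.95])
ref_upper = np.array([0.12, -0.40, 0.90])
print(exbody_style_reward(upper, ref_upper,
                          root_velocity=np.array([0.45, 0.0]),
                          commanded_velocity=np.array([0.50, 0.0])))
```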


Despite the different objectives for the upper and lower body, the robot operates under a single unified policy that governs its entire structure. This coordinated policy ensures the robot can perform complex upper-body gestures while walking steadily at different speeds across different surfaces and terrains.

Currently, a human operator uses a game controller to direct the robot's movements, controlling its speed, direction, and specific actions. The team envisions a future version with a camera, allowing the robot to independently perform tasks and navigate terrains.
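As a rough sketch of how such teleoperation might look, the snippet below maps hypothetical joystick axes to a root movement command of the kind described above. The axis assignments, units, and limits are assumptions for illustration, not details from the project.

```python
def gamepad_to_root_command(left_stick_y, left_stick_x, right_stick_x,
                            max_speed=1.0, max_turn_rate=1.5):
    """Map hypothetical joystick axes (each in [-1, 1]) to a root movement command."""
    return {
        "forward_speed": left_stick_y * max_speed,    # m/s, forward/backward
        "lateral_speed": left_stick_x * max_speed,    # m/s, side-stepping
        "turn_rate": right_stick_x * max_turn_rate,   # rad/s, yaw
    }

# Example: push the left stick forward and nudge the right stick to turn.
print(gamepad_to_root_command(left_stick_y=0.8, left_stick_x=0.0, right_stick_x=0.3))
```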

Friendly robots will be more acceptable

Researchers believe that robots able to perform friendly, expressive movements would be more readily accepted in environments where they work alongside humans, for example, on factory assembly lines or in hospitals.

“Through expressive and more human-like body motions, we aim to build trust and showcase the potential for robots to co-exist in harmony with humans,” said Xiaolong Wang, a professor in the Department of Electrical and Computer Engineering at the UC San Diego Jacobs School of Engineering.

“We are working to help reshape public perceptions of robots as friendly and collaborative rather than terrifying like The Terminator.”

The mastery of robotic movement is advancing rapidly.

In January, a group of scientists at Tohoku University in Japan reached a breakthrough in robotics, enabling robots to achieve human-like walking. The scientists used a musculoskeletal model paired with a reflex control method modeled on the human nervous system.