Google DeepMind teaches mini robots soccer skills


Scientists at Google DeepMind have trained miniature humanoid robots to develop soccer skills through on-field play – much the way humans do.

The robots were trained using deep reinforcement learning, an artificial intelligence technique in which agents learn by trial and error, and acquired a range of the dynamic skills and tactics a game of soccer demands.
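
For readers unfamiliar with the technique, the sketch below shows reinforcement learning in its simplest tabular form, on a toy five-cell track rather than a robot. It is nothing like the scale of the deep, continuous-control training the researchers used, but it captures the same trial-and-error principle: act, observe a reward, and adjust future behavior accordingly.

```python
import numpy as np

# Hypothetical toy problem: an agent on a 5-cell track must reach the
# rightmost cell, earning reward 1 there and 0 everywhere else. This is
# tabular Q-learning, a far simpler relative of the deep RL in the paper,
# shown only to illustrate learning by trial and error.
n_states, n_actions = 5, 2           # actions: 0 = step left, 1 = step right
q = np.zeros((n_states, n_actions))  # value estimates, learned from experience
alpha, gamma, epsilon = 0.1, 0.9, 0.2

rng = np.random.default_rng(0)
for episode in range(500):
    s = 0
    while s != n_states - 1:
        # Explore randomly sometimes; otherwise exploit current estimates.
        a = rng.integers(n_actions) if rng.random() < epsilon else int(q[s].argmax())
        s_next = max(0, s - 1) if a == 0 else s + 1
        r = 1.0 if s_next == n_states - 1 else 0.0
        # Temporal-difference update: nudge q toward the observed outcome.
        q[s, a] += alpha * (r + gamma * q[s_next].max() - q[s, a])
        s = s_next

# Learned policy for non-terminal cells: all 1, i.e. "always step right".
print(q[:-1].argmax(axis=1))
```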

They were not only able to chase and kick the ball but also to defend it from each other in one-on-one play and swiftly recover from falls, completing all of these tasks far faster than robots running manually scripted behaviors.

“Our players were able to walk, turn, kick, and stand up faster than manually programmed skills on this type of robot. They could also combine movements to score goals, anticipate ball movements, and block opponent shots – thereby developing a basic understanding of a 1v1 game,” Google DeepMind said.

The scientists trained their artificial intelligence agents in simulation using the MuJoCo physics engine and then transferred the learned behaviors to small humanoid robots with 20 actuated joints.
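
MuJoCo is open source and ships Python bindings, so a bare-bones simulation loop of the kind such training builds on looks roughly like the sketch below. The “humanoid.xml” model file and the random controls are illustrative stand-ins, not the researchers’ actual setup.

```python
import numpy as np
import mujoco

# Load any MJCF robot description; the real robot in the paper has
# 20 actuated joints, but any model with actuators steps the same way.
model = mujoco.MjModel.from_xml_path("humanoid.xml")
data = mujoco.MjData(model)

print(f"number of actuators: {model.nu}")

rng = np.random.default_rng(0)
for _ in range(1000):
    # A trained policy would map observations in `data` to controls;
    # here random actuation stands in as a placeholder.
    data.ctrl[:] = rng.uniform(-1.0, 1.0, size=model.nu)
    mujoco.mj_step(model, data)  # advance the physics by one timestep

# For a model whose root is a free joint, qpos[2] is the root height.
print("root height after rollout:", data.qpos[2])
```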

To address the simulation-to-reality gap, which Google DeepMind described as a “major challenge,” the scientists deliberately added disruptive forces and randomness to the simulator.

“This meant agents, which learn by trial and error, could cope with unexpected interference in the real world,” the company explained.
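
In concrete terms, this kind of domain randomization can be sketched in MuJoCo as below. The attribute names (body_mass, geom_friction, xfrc_applied) are real MuJoCo fields, but the perturbation magnitudes, the schedule, and the model file are assumptions made for illustration, not values from the paper.

```python
import numpy as np
import mujoco

model = mujoco.MjModel.from_xml_path("humanoid.xml")  # illustrative model
data = mujoco.MjData(model)
rng = np.random.default_rng(0)

# Randomize physics once per episode so the policy cannot overfit to
# one exact simulated world (ranges here are illustrative guesses).
model.body_mass[:] *= rng.uniform(0.8, 1.2, size=model.nbody)
model.geom_friction[:, 0] = rng.uniform(0.5, 1.5, size=model.ngeom)

for step in range(1000):
    if step % 200 == 0:
        # Occasionally shove a random body with a random Cartesian force,
        # mimicking the "disruptive forces" added during training.
        body = rng.integers(1, model.nbody)  # skip the world body (index 0)
        data.xfrc_applied[body, :3] = rng.normal(0.0, 20.0, size=3)
    else:
        data.xfrc_applied[:] = 0.0  # clear the perturbation afterwards
    mujoco.mj_step(model, data)
```

Randomizing masses and friction each episode, and occasionally shoving the robot mid-rollout, forces the policy to succeed across many slightly different simulated worlds, so that the one real world it eventually meets falls inside the distribution it was trained on.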

It said: “This work is a step towards training general robots, rather than training them for specific tasks. To do this, we need to understand the minimal amount of guidance they need to learn agile motor skills, while leveraging the capabilities of multimodal foundation models too.”

While the results of the study were “fun to watch,” Google DeepMind said its research was part of its ultimate goal of bringing robots into people’s everyday lives. The paper detailing the study was published in Science Robotics, a peer-reviewed journal.