Google crafts helper robot to make you snacks and wipe your table

Google has introduced three new systems tailored to “help robots make decisions faster, and better understand and navigate their environments.”

“Picture a future in which a simple request to your personal helper robot – ‘tidy the house’ or ‘cook us a delicious, healthy meal’ – is all it takes to get those jobs done. These tasks, straightforward for humans, require a high-level understanding of the world for robots,” Google’s DeepMind Robotics team said.

AutoRT, SARA-RT, and RT-Trajectory – the three new systems the team introduced – are designed to improve real-world robot data collection, decision-making speed, and generalization.

AutoRT is a system that collects experiential training data at scale, helping robots better understand practical human goals – essentially, better equipping them for the real world.

It can direct multiple robots to carry out diverse tasks simultaneously, such as placing snacks on countertops.

As per Google, the system has “orchestrated as many as 20 robots simultaneously, and up to 52 unique robots in total, in a variety of office buildings, gathering a diverse dataset comprising 77,000 robotic trials across 6,650 unique tasks.”
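To make that data-collection loop concrete, here is a minimal Python sketch of AutoRT-style fleet orchestration, in which each robot repeatedly observes its scene, picks a proposed task, and logs the attempt as a training episode. Every name here (Robot, Episode, propose_tasks) and the random stand-ins are hypothetical illustrations, not Google's API.

    import random
    from dataclasses import dataclass

    # Hypothetical sketch only: Robot, Episode, and propose_tasks are
    # illustrative stand-ins, not part of any published AutoRT interface.

    @dataclass
    class Episode:
        robot_id: int
        task: str
        success: bool

    @dataclass
    class Robot:
        robot_id: int

        def observe_scene(self) -> list[str]:
            # Stand-in for a vision model describing objects the camera sees.
            return random.sample(["apple", "sponge", "cup", "bag of chips"], k=2)

        def attempt(self, task: str) -> bool:
            # Stand-in for executing a control policy on real hardware.
            return random.random() > 0.3

    def propose_tasks(objects: list[str]) -> list[str]:
        # Stand-in for a language model proposing diverse manipulation tasks.
        return [f"place the {obj} on the countertop" for obj in objects]

    def collect(fleet: list[Robot], rounds: int) -> list[Episode]:
        dataset = []
        for _ in range(rounds):
            for robot in fleet:  # each robot runs its own observe-propose-attempt loop
                task = random.choice(propose_tasks(robot.observe_scene()))
                dataset.append(Episode(robot.robot_id, task, robot.attempt(task)))
        return dataset

    episodes = collect([Robot(i) for i in range(20)], rounds=5)
    print(f"collected {len(episodes)} trials")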

SARA-RT is a system built to boost efficiency by making Robotics Transformer (RT) models more accurate and faster. RT models take a short history of images from a robot’s camera along with natural-language task descriptions, combine them with knowledge drawn from the web, and translate it all into generalized instructions for robotic control.

Building more powerful RTs or giving them more complex tasks requires more computational resources and can slow down their decision-making.

“When we applied SARA-RT to a state-of-the-art RT-2 model with billions of parameters, it resulted in faster decision-making and better performance on a wide range of robotic tasks,” Google explained.
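Google has said SARA-RT works by “up-training” models to a more efficient, linear form of attention. As a rough illustration of why that helps, the Python sketch below contrasts standard softmax attention, whose score matrix grows quadratically with input length, with a kernelized linear variant that avoids building it. This is a generic demonstration of the idea, not SARA-RT’s actual method.

    import numpy as np

    # Generic illustration of the attention speed-up idea, not SARA-RT itself.

    def softmax_attention(Q, K, V):
        # Standard attention materializes an (n x n) score matrix: O(n^2 * d).
        scores = Q @ K.T / np.sqrt(Q.shape[-1])
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)
        return weights @ V

    def linear_attention(Q, K, V):
        # Kernelized attention: computing phi(K)^T V first costs O(n * d^2)
        # and never builds the n x n matrix, so long inputs stay cheap.
        phi = lambda x: np.maximum(x, 0.0) + 1e-6   # any positive feature map
        Qp, Kp = phi(Q), phi(K)
        context = Kp.T @ V                  # shape (d, d)
        norm = Qp @ Kp.sum(axis=0)          # shape (n,)
        return (Qp @ context) / norm[:, None]

    n, d = 512, 64
    Q, K, V = (np.random.randn(n, d) for _ in range(3))
    print(softmax_attention(Q, K, V).shape, linear_attention(Q, K, V).shape)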

The RT-Trajectory model essentially helps robots translate instructions into physical motions – for example, the specific arm movements needed to wipe down a table when a robot is told to clean it.

“RT-Trajectory takes each video in a training dataset and overlays it with a 2D trajectory sketch of the robot arm’s gripper as it performs the task. These trajectories, in the form of RGB images, provide low-level, practical visual hints to the model as it learns its robot-control policies,” Google explained.
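A rough Python sketch of that overlay idea: given a camera frame and a sequence of 2D gripper positions, draw the path as a colored polyline and return the annotated RGB frame. The helper name, color gradient, and toy data are illustrative assumptions, not Google’s implementation.

    import numpy as np
    from PIL import Image, ImageDraw

    # Illustrative assumption: overlay_trajectory and its green-to-red shading
    # are stand-ins for the general idea, not Google's actual code.

    def overlay_trajectory(frame: np.ndarray, gripper_xy: list) -> np.ndarray:
        """Draw the gripper path as a polyline shaded from green (start) to red (end)."""
        img = Image.fromarray(frame)
        draw = ImageDraw.Draw(img)
        n = len(gripper_xy)
        for i in range(n - 1):
            t = i / max(n - 2, 1)           # 0 at the start of the motion, 1 at the end
            color = (int(255 * t), int(255 * (1 - t)), 0)
            draw.line([gripper_xy[i], gripper_xy[i + 1]], fill=color, width=3)
        return np.asarray(img)

    # Toy usage: a blank 256x256 frame and a sweeping "wipe the table" motion.
    frame = np.zeros((256, 256, 3), dtype=np.uint8)
    path = [(40 + 4 * i, 128 + int(40 * np.sin(i / 5))) for i in range(45)]
    hint = overlay_trajectory(frame, path)
    print(hint.shape)  # (256, 256, 3)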