Google researchers demo robots that can understand what people want
Researchers at Google LLC have devised a new way for robots to understand what people want by teaching them how language fits with the real world.
People already interact daily with chatbots on their phones, speaking naturally to do internet searches, set alarms and even order pizzas. But what if you could call to your Roomba and say, “Hey, I’m thirsty, get me something to drink,” and have a Coke arrive from the fridge?
To make this sort of thing happen, Google Research has teamed up with Everyday Robots, a maker of helper robots.
The research, announced Tuesday, is called PaLM-SayCan. It pairs PaLM, or Pathways Language Model, with an Everyday Robots helper robot to do ordinary tasks around a micro-kitchen on a Google campus. So far the robot has assisted researchers by grabbing snacks, cleaning up spills and throwing away trash.
PaLM is what is called a “large language model” that allows the robot to understand the context of what people say to it and translate it into a series of tasks.
“PaLM-SayCan enables the robot to understand the way we communicate, facilitating more natural interaction. Language is a reflection of the human mind’s ability to assemble tasks, put them in context and even reason through problems,” said Vincent Vanhoucke, head of robotics research at Google Research. “PaLM can help the robotic system process more complex, open-ended prompts and respond to them in ways that are reasonable and sensible.”
For example, if a user were to ask the robot, “I spilled my drink, can you help?” the language model could propose several potentially reasonable ways to clean up the mess, such as fetching a vacuum or a sponge. Since a vacuum is not a sensible way to clean up a liquid spill, the robot would choose to bring the sponge.
This is the action of the “SayCan” portion of the model, which takes what the user “says” and deals with how the world works, or what it “can” do. The objective is to come up with a set of potential actions and then constrain them to what is safe and reasonable given the environment.
Once these two elements are considered, the robot then comes up with a series of tasks that will reasonably produce the outcome: for example, moving to locate the sponge, picking it up and bringing it back to the researchers so that they can use it.
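The selection step described above can be sketched in a few lines of Python. This is a minimal illustration of the general idea, not Google’s actual code: the function names and the probability numbers are hypothetical, standing in for a language model’s relevance score (“say”) and an affordance model’s feasibility score (“can”).

```python
# Minimal sketch of the SayCan idea (hypothetical names and scores):
# a language model rates how relevant each skill is to the request ("say"),
# an affordance model rates how feasible it is in the current scene ("can"),
# and the robot executes the skill with the highest combined score.

def select_skill(skills, say_score, can_score):
    """Pick the skill that maximizes say_score * can_score."""
    return max(skills, key=lambda s: say_score[s] * can_score[s])

# Hypothetical scores for the request "I spilled my drink, can you help?"
say = {"find sponge": 0.6, "find vacuum": 0.3, "find apple": 0.01}
can = {"find sponge": 0.9, "find vacuum": 0.8, "find apple": 0.9}

best = select_skill(say.keys(), say, can)
print(best)  # prints "find sponge"
```

In the real system a full plan is built by repeating this selection step, appending each chosen skill to the instruction and re-scoring until the task is complete.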
Google Research says safety is the No. 1 priority when working with AI models and robotics, even when asking robots to fetch sponges and sodas from the kitchen fridge.
“Whether it’s moving about busy offices — or understanding common sayings — we still have many mechanical and intelligence challenges to solve in robotics,” said Vanhoucke. “So, for now, these robots are just getting better at grabbing snacks for Googlers in our micro-kitchens.”
Photo: Google