
Google wants to help robots understand us better through natural language

(Image: Google logo | Image credit: Android Central)

What you need to know

  • Google Research and Everyday Robots have teamed up on a new robotics algorithm, PaLM-SayCan.
  • The new efforts should assist robots in better understanding humans through language via voice or text.
  • The companies are also using "chain of thought prompting" to help robots understand a task and the tools to complete it.

Google Research and Everyday Robots have partnered to help robots better understand and interact with us through language.

The joint effort, PaLM-SayCan, pairs Google's Pathways Language Model (PaLM) with an Everyday Robots helper robot.

Google says this effort "is the first implementation that uses a large-scale language model to plan for a real robot." The new project should help people better communicate with robots via voice or text and allow the robots to execute complex tasks with a better understanding of language.

Regarding language use, Google and Everyday Robots hope the PaLM-SayCan algorithm can help robots interact more naturally with people. Google prefaces its language research by noting that human interactions, even the simplest ones, are quite complex. The companies hope that by using the PaLM language model, robots can better understand and complete open-ended prompts.

According to the research, PaLM delivered a 14% improvement over other models in helping a robot plan a reasonable approach to a task. There was also a 13% improvement in the success rate of carrying out tasks, and a 26% improvement on lengthy tasks, such as those involving eight or more steps.

Google goes on to explain how its new research helps robots make sense of our world. Using PaLM and "chain of thought prompting," a robot should be able to take a prompt and discern what the person really wants. The example given is, "Bring me a snack and something to wash it down with." Using chain of thought prompting, a robot can work out what a suitable snack might be, and also that "something to wash it down with" means the person wants a drink.
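To make the idea concrete, here is a minimal, purely illustrative sketch of what a chain-of-thought prompt can look like: the model is shown a worked example that includes an intermediate "Explanation" step before the plan, so it learns to reason about a new request before acting. The prompt format and helper function below are assumptions for illustration, not Google's actual implementation.

```python
def build_cot_prompt(request: str) -> str:
    """Assemble a few-shot chain-of-thought prompt (illustrative format only)."""
    # One worked example showing reasoning ("Explanation") before the plan.
    example = (
        'Human: Bring me a snack and something to wash it down with.\n'
        'Explanation: Chips are a snack, and "something to wash it down '
        'with" means a drink, such as a bottle of water.\n'
        'Plan: 1. find chips, 2. pick up chips, 3. bring them to the person, '
        '4. find water, 5. pick up water, 6. bring it to the person.\n\n'
    )
    # The new request ends at "Explanation:" so the model reasons first.
    return example + f"Human: {request}\nExplanation:"

print(build_cot_prompt("Bring me an apple."))
```

The key design choice is that the prompt ends at "Explanation:", nudging the language model to articulate its reasoning before committing to a plan.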

Grounding AI in the real world is something Google Research says is crucial to its development process. The idea is to combine the language model with a model of the robot's capabilities so the system understands what needs to be done to complete a task. Google explains that PaLM suggests possible ways of achieving a task, while the robot model scores those options based on what the robot can actually do. The goal is for both to work in unison to accomplish the task in the best way possible.
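The core of that "work in unison" step can be sketched as scoring each candidate skill twice, once by the language model (is this relevant to the request?) and once by the robot's affordance model (can I actually do this right now?), then picking the skill with the best combined score. All names and numbers below are illustrative assumptions, not Google's actual API.

```python
def saycan_select(candidate_skills, llm_score, affordance_score):
    """Pick the skill maximizing LLM relevance x robot feasibility."""
    best_skill, best_value = None, float("-inf")
    for skill in candidate_skills:
        # Multiply the language model's relevance score by the robot's
        # estimate of how likely the skill is to succeed right now.
        combined = llm_score(skill) * affordance_score(skill)
        if combined > best_value:
            best_skill, best_value = skill, combined
    return best_skill

# Toy example: the LLM favors "pick up apple" for a snack request,
# but the robot can only currently reach the chips.
llm = {"pick up apple": 0.6, "pick up chips": 0.3, "go to table": 0.1}
afford = {"pick up apple": 0.1, "pick up chips": 0.9, "go to table": 0.8}

choice = saycan_select(llm.keys(), llm.get, afford.get)
print(choice)  # "pick up chips" wins: 0.3 * 0.9 beats 0.6 * 0.1
```

The multiplication is what grounds the language model: a skill the LLM loves but the robot cannot perform gets a low combined score and is passed over.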

Google polishes things off by mentioning the safety measures in place for its robots using PaLm-SayCan. The algorithm is confined to commands that keep the robot's safety in mind and also keep things "highly interpretable." Google says this allows it to examine and understand every decision the robot has made.

While Google hasn't revealed plans for its own consumer robot helper, it would be cool to see the company build and launch its own version of Amazon's Astro robot.

Nickolas Diaz
News Writer

Nickolas is always excited about tech and getting his hands on it. Writing for him can vary from delivering the latest tech story to scribbling in his journal. When Nickolas isn't hitting a story, he's often grinding away at a game or chilling with a book in his hand.