Here's how Google is succeeding at making Assistant think like a human

Google Nest Hub Max (Image credit: Android Central)

Google Assistant has grown into its own platform since it was launched. Not only is it smarter than ever, but with devices like the Nest Hub or Nest Mini, it's no longer just a part of Android.

Of course, as software grows in capability it also grows in complexity. That means developers need dedicated tools to build the content we want to use on our smart displays and other Assistant-enabled devices. Like any developer tools, they need to do two important things: be easy to use, and work well. It's tough to deliver both, but the newly released Actions Builder web interface and Actions SDK look like they'll be winners.

Let the conversation flow

For a Google Assistant Action to be great it needs to be able to talk and listen as a person would. If you ask your Google Home to tell your kids a bedtime story or sing the "Brush Your Teeth" song, it needs to recognize what you want and be able to do what's asked without being a robot that only follows logic and order.

The two important things that make this a reality are called conversational flow and natural language understanding. Without them, Assistant wouldn't be able to interact the way we've come to expect.

"Yes", "yeah", and "yep" all mean the same thing. You have to teach a computer that kind of flawed human logic.

Conversational flow is a pretty easy concept to understand, and it means exactly what it sounds like: Assistant needs to be ready to talk back whenever you've said something to it. That's easy when you ask for the weather or even ask it to sing a song about brushing teeth, but when things get complex, like a choose-your-own-adventure game, the conversation needs to be steered in some very specific ways so Assistant always has an answer to give.

That's where natural language understanding (NLU) comes into the picture. Assistant needs to know that "yeah" and "yep" and "yes" all mean the same thing, and it needs to recognize that speech is fluid; we all talk very differently from how we write. And since Assistant is a computer that only acts like a person, all of this has to be supplied as part of any Conversational Action project. Computers can't really learn on their own; they need to be programmed.
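To make that concrete, here's a toy sketch of the idea in TypeScript. This isn't the real Actions SDK API, and genuine NLU training generalizes far beyond exact string matching, but the principle is the same: many ways of saying something, one meaning.

```typescript
// Toy illustration only: real NLU training data is declared in the Actions
// project and generalizes beyond exact matches, but the core idea is
// teaching the computer that many phrases collapse to one meaning.
const confirmations = new Set(['yes', 'yeah', 'yep', 'sure', 'ok']);

// Map whatever the user actually said to one canonical intent name.
function matchIntent(utterance: string): 'CONFIRM' | 'UNKNOWN' {
  return confirmations.has(utterance.trim().toLowerCase())
    ? 'CONFIRM'
    : 'UNKNOWN';
}

console.log(matchIntent('Yep')); // CONFIRM
console.log(matchIntent('maybe later')); // UNKNOWN
```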

Voice Match Assistant (Image credit: Android Central)

That's where Google's new Actions Builder and Actions SDK come into play. They're two new tools that let developers build a project from start to finish in whichever way they're most comfortable. Actions Builder is a web-based tool that lets developers build Conversational Actions in much the same way you'd build a flow chart, but it has the same tools a traditional development SDK would offer. Any developer can use this graphical interface to visualize the conversational flow, enter the NLU training data the project needs to understand users, and even debug the final product in a convenient, easy-to-understand way.

Developers can build Assistant content in a web-based visual builder or with a file-based SDK and the IDE of their choice. Or they can use both!

The Actions SDK does the same things but through a file-based view of the project, for developers who prefer to work locally in a traditional IDE (integrated development environment). And if developers already have a favorite IDE, they can pair it with the SDK's command-line tools to build the final product with all the same benefits.
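To give a flavor of that file-based workflow, here's a minimal fulfillment sketch using the @assistant/conversation Node.js library that accompanies the Actions SDK. The handler name here is made up for illustration; a scene in the project's files would be set up to call it.

```typescript
// Minimal fulfillment webhook sketch built on @assistant/conversation.
// The handler name 'greeting' is hypothetical; a scene defined in the
// project's files would be configured to invoke it.
import {conversation} from '@assistant/conversation';

const app = conversation();

app.handle('greeting', (conv) => {
  // Queue a simple spoken/displayed response for this conversation turn.
  conv.add('Hi there! Want to hear a bedtime story?');
});

// Export the app to be hosted (for example, as a Cloud Function), then
// push the project with the SDK's gactions command-line tool.
export {app};
```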

These tools are the front end to a better and faster Assistant platform. Assistant's runtime (think of it as the engine that powers the software we use) is now faster than ever, and an all-new interaction model means Assistant is both smarter and easier to build for.

Home storage (Image credit: Google)

The new interaction model is built so that things like real-time conversations are faster and more efficient to create, and NLU training is more robust. Developers can create scenes and use them as building blocks, where each part of a Conversational Action has its own data and logic. Best of all, developers can build a scene once and reuse it within the same Action through active intents.
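In fulfillment code, hopping between those reusable scenes can look something like this sketch; the scene and handler names are invented for illustration.

```typescript
import {conversation} from '@assistant/conversation';

const app = conversation();

// Hypothetical handler: once the user picks a story, hand the conversation
// off to a reusable 'Storytime' scene defined elsewhere in the project.
app.handle('pick_story', (conv) => {
  conv.add('Great choice. Settling in...');
  // Ask the Assistant runtime to transition to another scene.
  conv.scene.next = {name: 'Storytime'};
});

export {app};
```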

Tools like this are how Assistant gets better and better.

Most of us aren't ever going to develop any sort of content for Google Assistant, but this is still really important for us. With tools like these, we can expect better "apps" for our Assistant-enabled devices so we'll get far more use out of them. Today we can set up a morning routine that turns on lights and plays music. Next year, who knows what we'll be able to do?

Jerry Hildenbrand
Senior Editor — Google Ecosystem

Jerry is an amateur woodworker and struggling shade tree mechanic. There's nothing he can't take apart, but many things he can't reassemble. You'll find him writing and speaking his loud opinion on Android Central and occasionally on Twitter.