
Google Home can now recognize different voices and personalize interaction


Why it matters to you

Thanks to artificial intelligence, Google Home can now tell the difference between you, your family members, and anyone else linked to the speaker.

Google Home, Google’s voice-controlled smart speaker, is pretty darn good at deciphering mumbled commands and garbled words. But one thing it hasn’t been able to do is tell different people apart, which means it has supported only one account per device. If your significant other asked your Google Home for a list of upcoming appointments, for example, Home would diligently read out your agenda. But that’s changing.

On Thursday, Google announced multiuser support for Google Home. Thanks to a powerful machine-learning algorithm and an upgraded smartphone app, Google Home can serve up information relevant to you, and not your roommate’s grocery list or your brother’s Madonna playlist.

It’s as simple as linking your account to the Google Home app; up to six accounts can be connected to a single speaker. Once you update to the latest version of the app, you’ll see a card that says, “Multiuser Support is available.” Tap Link your account, and once that’s done, you’ll be asked to teach Google Home the sound of your voice by repeating the phrases “OK Google” and “Hey Google.” A specialized form of artificial intelligence called a neural network analyzes those recordings, and from that point on it compares the sound of your voice to that stored analysis. It all takes place in a matter of milliseconds.
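
Google hasn’t said exactly how that matching works beyond crediting a neural network, but the general technique, turning a few enrollment recordings into a voiceprint and scoring each new hotword against it, can be sketched roughly as follows in Python. Everything here is an illustrative assumption rather than Google’s actual implementation: the embed_utterance placeholder, the 128-dimensional embedding, and the 0.7 similarity threshold are all made up.

```python
import numpy as np

# Hypothetical stand-in for the on-device neural network that turns a spoken
# hotword ("OK Google" / "Hey Google") into a fixed-length voice embedding.
# Google has not published this model; the seeded random projection below is
# only a placeholder so the sketch runs end to end.
def embed_utterance(audio: np.ndarray) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(audio.tobytes())) % (2**32))
    return rng.standard_normal(128)

def enroll(enrollment_clips: list[np.ndarray]) -> np.ndarray:
    """Average the embeddings of a user's enrollment phrases into one voiceprint."""
    return np.mean([embed_utterance(clip) for clip in enrollment_clips], axis=0)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify_speaker(hotword_clip: np.ndarray,
                     voiceprints: dict[str, np.ndarray],
                     threshold: float = 0.7) -> str | None:
    """Score a new hotword against each enrolled voiceprint; None means unknown."""
    emb = embed_utterance(hotword_clip)
    best_user, best_score = None, threshold
    for user, voiceprint in voiceprints.items():
        score = cosine_similarity(emb, voiceprint)
        if score > best_score:
            best_user, best_score = user, score
    return best_user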


From there, it’s a fairly hands-off affair. When you ask Google Home for your commute time, for example, you’ll get a response based on your saved work and home preferences. If another family member who’s gone through the setup process asks about their drive, they’ll get a different reply. The same goes for schedules, lists, music, and more.
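
Conceptually, once the speaker has been identified, the Assistant only needs to key its answer off that person’s stored preferences. A toy sketch, with invented names, addresses, and queries:

```python
# Toy per-user preference store; the names, addresses, and playlists are invented.
PREFERENCES = {
    "alex": {"home": "123 Maple St", "work": "456 Market St",
             "playlist": "Workout Mix"},
    "sam":  {"home": "123 Maple St", "work": "789 Campus Dr",
             "playlist": "Madonna Essentials"},
}

def respond(recognized_user: str | None, query: str) -> str:
    """Answer a query using the recognized speaker's own preferences."""
    if recognized_user is None:
        return "I don't recognize your voice, so here's a generic answer."
    prefs = PREFERENCES[recognized_user]
    if query == "commute":
        return f"Traffic from {prefs['home']} to {prefs['work']} looks light."
    if query == "music":
        return f"Playing {prefs['playlist']}."
    return "Sorry, I can't help with that yet."

print(respond("alex", "commute"))  # Alex's route
print(respond("sam", "commute"))   # same question, different reply for Sam
```

The point is simply that the same question yields a different answer depending on whose voiceprint matched.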

Google has leaned on neural networks to improve its services before. In September 2016, the search giant rolled out a machine-learning update to Google Translate, a digital interpreter that supports more than 100 languages. In a test of linguistic precision, Translate’s old model achieved a score of 3.6 on a scale of 6; the neural network scored 5.0, just below the average human’s score of 5.1.

Google Assistant, the computer intelligence behind Google Home’s friendly exterior, uses AI to personalize its replies. The Assistant learns about preferences like favorite apps and services, restaurants, frequently asked questions, and identity information including age, gender, and birth date. It gets better over time as the algorithms start to learn usage patterns and behaviors, Google said.

The introduction of AI-powered multiuser support for the Google Home could signal an expansion of those efforts. In an interview with Backchannel earlier this year, Fernando Pereira, Google’s lead natural language scientist, predicted that the Assistant’s machine learning would become “more fluent, more able to help you do what you want to do, understand more of the context of the conversation, [and] be more able to bring information from different sources.”

One thing’s for certain: Google Home’s competition has some catching up to do. Amazon’s Alexa assistant, which launched in 2014, still lacks multiuser support, and neither Microsoft’s Cortana nor Apple’s Siri offers it.

Multiuser support starts rolling out to Google Home users in the U.S. on Thursday, and will expand to the U.K. in the coming months.



