Credit: Project Euphonia.
Voice assistants have a reputation for misunderstanding people with accents, stutters, or slightly slurred speech. It's even worse when the person has conditions such as Down syndrome, cerebral palsy, or ALS.
While some brands are slowly incorporating non-standard speech into their speech recognition models to make their voice assistants more inclusive, Google has stepped it up a notch with Project Euphonia.
Training Voice to understand everyone
"To understand and be understood, it's unbelievable," says Google Research Scientist Dmitri Kanevsky, whom you may remember from Live Transcribe—an app that helps the deaf and hard of hearing communicate.
Dmitri is one of the people at Google working on Project Euphonia, which aims to make Voice systems work for those with impaired speech. As someone who lost his hearing at just one year old, Dmitri was the first to record thousands of phrases that researchers could then use to train their speech recognition algorithms, helping them understand people who spoke like him.
"If we can make a speech recognizer to work for Dmitri, it's possible to make one that works for many people—even people who can't speak," says Michael Brenner, a research scientist at Google in a video about Project Euphonia.
ALS technologist Steve Saling is also featured in the video as someone who "speaks" by typing every word into an on-screen keyboard using his eyes. (Yes, just like Stephen Hawking.)
"For me, communicating is slooooow," his text-to-speech software announces, as Steve grins in amusement.
Irene Alvarado, a designer/developer at Google, explains that the idea behind Project Euphonia is to create a tool that allows people like Steve to train machine learning models themselves. They can train them to translate specific facial expressions or even sounds into actions for Voice systems, like turning on the lights or playing the sound of laughter.
"To be able to laugh, cheer, or boo; things that maybe seem superfluous but are so core to being human," she says.
A step towards independence
Across the country, Andrea Peet was also diagnosed with ALS and forced to confront the fact that she would soon lose most (or all) of her voice. Like Dmitri, she lent her words to Project Euphonia, and in return, Google set her up with a revamped voice assistant that could understand her commands.
Now, Andrea doesn't rely on her husband to lock the door of her home or turn on the lights. She can do things like ask to play her favorite song from The Cranberries without struggling to be understood by anyone—not even by her voice assistant. It also gives her husband, David Peet, the peace of mind that she can take care of herself while he's at work.
"[Project Euphonia] gives her the independence of feeling like she's not a patient, she's a person," he tells the TODAY Show.
How you can get involved
Like most voice-first projects, it begins with voice recordings by real people. On the Project Euphonia website, the team explains that it's actively gathering voice recordings from people with speech impairments in order to train speech recognition models to understand them better.
While the team notes that research timelines can be long and unpredictable, the truth is that each recorded voice brings us all a step closer to a voice ecosystem that is truly made for everyone.
If you have a speech impairment, you can request to contribute your voice.
If you're fascinated by how voice technology is being used to empower others and want to join the people building these inclusive systems, come to VOICE 2020—the meeting place for everyone in Voice who believes in its power to change the world.
You'll hear from industry leaders and have plenty of opportunities to chat with like-minded professionals about your own projects. Click the button below to learn more about this all-inclusive event, or use the code VOICEBLOG10 to get 10% off your ticket today.