If you had told people just a few years ago that you were going to place an always-on microphone in their home, they’d have balked, and then refused.
Today, voice assistants are everywhere. You probably have one sitting in the room you’re in now. They've moved into our phones, cars, TVs, microwaves, and even refrigerators. If you don’t have at least one Amazon Echo, Google Home, or Apple HomePod in your house at this point, you might be in the minority.
Voice assistants have moved into our everyday lives in a big way, becoming the new norm. And yet, somehow, few people seem concerned about the impact on their privacy.
The problem with privacy in voice
Both Amazon and Google store recordings of your voice as you use their devices, and both companies are able to decrypt those recordings for analysis, ultimately building the world's biggest voice database. Recently, Amazon admitted that its employees routinely listen to captured conversations, and even share them on internal IM channels.
In our rush to adopt voice assistants, we've forgotten the importance of privacy, and what having this data at scale means for the future. While we're getting excited over a new feature or app, others have recreated someone's entire voice using a computer and a handful of snippets. If that's not terrifying, I don't know what is.
There are additional privacy implications as well. Because of how our voice is processed in the cloud, we're wiring up all kinds of sensitive accounts and metadata, like our bank accounts, to use them with Alexa and Google Assistant, without really considering it.
"Alexa, what’s my bank balance" could be a real command, available from any financial services provider. It’s a legitimately useful use case for the user, but it’s also a great way for Amazon to figure out how much money you have on hand, and an even better way for an attacker to find out more information about your bank account.
This is great for Amazon, but presents a new problem in terms of privacy and security for end users. If a simple attack on iCloud accounts can wreak so much havoc on people’s lives, what happens if that voice database, and the accounts connected, are compromised?
The future of privacy
Privacy will be a defining theme for voice assistants throughout 2019. GDPR, the European Union's sweeping new data-protection regulation, may drive that conversation forward, as it raises many questions about whether smart voice applications can be compatible with strong privacy law at all.
Companies will now have to ask for consent in simple terms, rather than burying it in legalese terms and conditions. This creates many challenges, in particular for cloud-based voice assistants. Voice is considered personal data, so devices that listen ambiently should, in theory, ask everyone in the room for consent before sending their voice to the cloud.
Imagine the nightmare of having ten people over for dinner and having your Google Home device ask each of them for consent!
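To make the GDPR-style opt-in requirement concrete, here is a minimal sketch of consent-gated cloud upload. Everything here is hypothetical: the `ConsentRegistry` class and `should_upload` function are illustrative names, not part of any real assistant SDK. The key design point is that unknown speakers default to no consent, so a single guest in the room blocks the upload.

```python
# Hypothetical sketch: gate cloud upload of captured audio on recorded consent.
# ConsentRegistry and should_upload are illustrative, not a real API.
from dataclasses import dataclass, field

@dataclass
class ConsentRegistry:
    # Maps speaker id -> whether they opted in to cloud processing.
    consents: dict = field(default_factory=dict)

    def grant(self, speaker: str) -> None:
        self.consents[speaker] = True

    def revoke(self, speaker: str) -> None:
        self.consents[speaker] = False

def should_upload(registry: ConsentRegistry, speakers_in_room: list) -> bool:
    """Upload only if every identified speaker has opted in.

    Unknown speakers default to False: consent must be explicit,
    never assumed, which is the GDPR opt-in model."""
    return all(registry.consents.get(s, False) for s in speakers_in_room)

registry = ConsentRegistry()
registry.grant("alice")
print(should_upload(registry, ["alice"]))           # True: alice opted in
print(should_upload(registry, ["alice", "guest"]))  # False: guest never consented
```

The default-deny choice matters: an opt-out model (unknown speakers assumed consenting) would fail exactly in the dinner-party scenario above.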
Right now, most APIs for voice recognition are cloud-based, provided by Amazon and Google. This presents challenges for businesses looking to build experiences for their own apps with privacy in mind, especially with GDPR in the picture. Local-only APIs and on-premise solutions do exist, and may be worth considering as these concerns become even more pressing throughout 2019. Your customers may demand that peace of mind, and guaranteeing a predictable level of privacy is good business.
With the global smart speaker market expected to top 200 million units by the end of this year, if we're to imagine a future in which we talk to computers all day, we need to understand what happens to our voice once it leaves the room and goes online. With consumer voice, the question is still wide open: where is the line? It's an important question to ask before it's too late.