
When you talk to Siri, Cortana and Google Now, who’s listening?

I’m starting to enjoy talking to computers. I don’t use speech-to-text all that much because I’m so keyboard-oriented, but I love being able to run a search by voice or ask for last night’s baseball scores. Siri works well for this, though I’m now using Android and am thus in the hands of Google Now, which keeps asking me to re-train it by speaking ‘OK Google’ over and over again. Despite this annoyance, a day rarely passes that I don’t talk to Google Now.

Which raises an interesting question. What exactly happens to the things we say to services like these? We learned a few years ago that Apple keeps everything you say to Siri for two years, storing such deathless statements as “Remind me to buy eggs at the store” on distant servers in the cloud. Your name gets dropped in this process, as Apple assigns a random number to the voice file rather than tying it to your email or your Apple ID. Six months later, even this user number is detached from the clip, so your privacy is more or less protected.

I say “more or less” because hacking seems ubiquitous and I assume a truly determined hacker could get into this system, though the payoff seems low. On the Android side, Google Now will record your voice and save it as long as you have Voice & Audio Activity turned on. If you so choose, you can delete voice items one at a time, or purge all of them from the Voice & Audio Activity page, which can be found in the depths of your Google account online.

Actually, deleting all your voice clips doesn’t purge them from Google’s system. Rather, it archives the recordings but removes any link to your Google account. And even when you switch Voice & Audio Activity off inside your account, Google is still able to record what you say to Google Now.

The cloud is stuffed with our pronouncements. Going through my recordings was an odd experience. On my Voice & Audio Activity page, each spoken item is presented with a button next to it, allowing me to play the voice command back or delete it entirely. On June 12, I asked Google Now how many atoms there were in the universe. On April 4, I asked it to set a four-minute alarm. And so on. Each statement is stored here, and I hear my own voice reciting it.

I can see why companies like Apple and Google want to work with spoken commands: speech recognition only gets better as computers confront more and more speech. Personalizing a system to suit your voice means the system has to keep practicing.

The downside of all this lies in the emerging model. To make artificial intelligence work on a personalized level, the machines behind it need to know as much as possible about you. That means you either agree to give up that information or you settle for something less than full performance from a system that could otherwise anticipate most of your needs.

Thus Google’s upcoming Allo app, a text messaging service infused with artificial intelligence. Start talking with your friend about going out for a pizza, and Allo will suggest restaurants near you. To do this, Allo needs your location information. It also needs to be reading your texting traffic, and to know the location of the person you’re talking to. You can turn this feature off, but again, you’re losing performance, which is part of the artificial intelligence bargain.

The good news: Help is on the way. Word out of MIT is that a new kind of neural network chip, Eyeriss by name, is under development, one that will dramatically increase the processing power available to our mobile devices. The goal: Instead of sending all that information to the cloud, future iterations of Siri, Google Now and Microsoft’s Cortana will be able to handle everything but the final request locally. If you’re queasy about privacy, this chip cannot appear too soon.

Paul A. Gilster is the author of several books on technology. Reach him at gilster@mindspring.com.
