> Don’t read
“Your smart speaker is always listening.” Just kidding, you can read it, but I want to spend a moment analyzing why these types of privacy analyses are flawed. In the case of this article, the headline is really misleading, as are the many headlines that have sprung up since it was publicized that smart speaker companies use real humans to help debug voice commands and train the voice recognition to do the correct thing. Maybe it’s possible for someone to hack into your smart speaker and listen in on your conversations, but that’s not what these articles are about, so the headlines are truly misleading. At the more benign end of the spectrum is the title “Smart speakers are everywhere – and they’re listening to more than you think,” which actually offers a thoughtful analysis. This brings me to:
> Do read
“Alexa has been eavesdropping on you this whole time.” This is not from BuzzFeed; it’s from The Washington Post. (They do, however, provide a helpful link to see what your Alexa device has recorded. Your Alexa history is located here.)
). While the headline is alarmist, the analysis is excellent, if you actually read the whole thing.
At first the headline seems unreasonable, given that “Alexa has been eavesdropping on you this whole time” was written in The Washington Post. However, let’s look at its content specifically:
“Any personal data that’s collected can and will be used against us. An obvious place to begin: Alexa, stop recording us.”
However beforehand, the author writes:
“Alexa keeps a record of what it hears every time an Echo speaker activates. It’s supposed to record only with a ‘wake word’ — ‘Alexa!’ — but anyone with one of these devices knows they go rogue. I counted dozens of times when mine recorded without a legitimate prompt. (Amazon says it has improved the accuracy of ‘Alexa’ as a wake word by 50 percent over the past year.)”
Computer scientists will recognize that these two statements are at odds with one another: the way to make the Alexa wake word more accurate is to improve the model that identifies the wake word. That requires a dataset, which means recording both successful and failed attempts and then labeling that data. So what the author is proposing is a catch-22.
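The data-collection loop behind that catch-22 can be sketched in a few lines. This is purely illustrative — nothing here is Amazon’s actual pipeline, and real wake-word detectors use acoustic models rather than a single score threshold — but it shows why human-labeled recordings of both true and false activations are the raw material for improving accuracy:

```python
# Hypothetical sketch: improving a wake-word detector requires recordings
# labeled by humans as real activations or false triggers. All names,
# features, and numbers below are invented for illustration.

from dataclasses import dataclass
from statistics import mean


@dataclass
class Recording:
    score: float        # stand-in for real acoustic features
    is_wake_word: bool  # label supplied by a human reviewer


def fit_threshold(labeled: list[Recording]) -> float:
    """Place a decision boundary midway between the average scores of
    labeled wake words and labeled false triggers."""
    wake = [r.score for r in labeled if r.is_wake_word]
    other = [r.score for r in labeled if not r.is_wake_word]
    return (mean(wake) + mean(other)) / 2


def is_activation(score: float, threshold: float) -> bool:
    return score >= threshold


# Human-labeled data: genuine "Alexa" utterances score high,
# accidental triggers (TV audio, similar-sounding words) score low.
dataset = [
    Recording(0.9, True), Recording(0.8, True),
    Recording(0.3, False), Recording(0.2, False),
]
threshold = fit_threshold(dataset)
print(is_activation(0.85, threshold))  # a clear wake word
print(is_activation(0.25, threshold))  # background chatter
```

The point of the sketch: without the labeled recordings in `dataset` — i.e., without keeping and reviewing what the device heard — there is nothing to fit the threshold against, which is exactly the tension the article runs into.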
However, the author acknowledges this:
“Noah Goodman, an associate professor of computer science and psychology at Stanford University, told me it’s true that AI needs data to get smarter.”
And finally, the author provides this metaphor, which is actually pretty awesome:
“Think of ‘Downton Abbey’: In those days, rich families could have human helpers who were using their intelligence to observe and learn their habits, and make their lives easier. Breakfast was always served exactly at the specified time. But the residents knew to be careful about what they let the staff see and hear.”
It’s interesting to think of our smart assistants as a sort of helper around whom we need to behave differently because they are inherently flawed: humans may decide to share our information even though we don’t want them to, and software companies are a sort of organism run by humans, flawed in that they may use our data in ways we didn’t intend, or share it in some other way.
If we’re going to start proposing ways to address data privacy, we need to acknowledge why we need the data to make our technology better, and then decide whether that tradeoff is worth it. While the analysis is thoughtful, calling the article “Your smart speaker is always listening” buries the nuance.