Voice assistants could be fooled by commands you can’t even hear
Originally published at thenextweb.com: https://thenextweb.com/artificial-intelligence/2018/05/11/voice-assistants-could-be-fooled-by-commands-you-cant-even-hear/
Many people already consider voice assistants too invasive to allow them to listen in on conversations in their homes — but that’s not the only thing they should worry about. Researchers from the University of California, Berkeley, want you to know that they might also be vulnerable to attacks you’ll never hear coming.
In a new paper (PDF), Nicholas Carlini and David Wagner describe a method to imperceptibly modify an audio file so as to deliver a secret command; the embedded instruction is inaudible to the human ear, so there’s no easy way of telling when Alexa might be asked by a hacker to add an item to your Amazon shopping cart, or worse.
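The core idea is an optimization problem: find the smallest perturbation to a waveform that makes the transcription model output an attacker-chosen result. The sketch below is a toy stand-in, not Carlini and Wagner’s actual method — it swaps DeepSpeech for a hypothetical random linear “model” and a single target label, purely to illustrate the loop of nudging a perturbation by gradient descent while clipping it small enough to stay imperceptible.

```python
import numpy as np

# Toy stand-in for a speech model: a fixed random projection from a
# waveform to 10 "transcription" logits. This is NOT DeepSpeech; it only
# illustrates the optimization the paper describes.
rng = np.random.default_rng(0)
W = rng.normal(size=(16000, 10))  # hypothetical 1 s of 16 kHz audio -> 10 logits

def logits(x):
    return x @ W

x = rng.normal(scale=0.1, size=16000)  # the benign audio clip
target = 7                             # index standing in for the hidden command
delta = np.zeros_like(x)

# Gradient descent on the perturbation: make the model output `target`
# while the clip to +/- eps keeps the perturbation tiny (i.e. quiet).
eps, lr = 0.005, 0.05
for _ in range(200):
    z = logits(x + delta)
    z = z - z.max()
    p = np.exp(z) / np.exp(z).sum()        # softmax over logits
    grad_z = p.copy(); grad_z[target] -= 1.0  # d(cross-entropy)/d(logits)
    grad = W @ grad_z                         # d(loss)/d(delta)
    delta = np.clip(delta - lr * grad, -eps, eps)

adv = x + delta
print(np.argmax(logits(adv)) == target)  # the model now "hears" the target
print(np.abs(delta).max() <= eps)        # but the perturbation stays tiny
```

Against a real transcription model the loss is a sequence loss (e.g. CTC) rather than a single-label cross-entropy, but the structure — minimize perturbation size subject to forcing the target transcription — is the same.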
To demonstrate this, Carlini hid the message, “OK Google, browse to evil.com,” in a seemingly innocuous sentence, as well as in a short clip of Verdi’s ‘Requiem,’ which fooled Mozilla’s open-source DeepSpeech transcription software.
Speaking to The New York Times, Carlini – who, in 2016, demonstrated how he and his team could embed commands in white noise played along with other audio to get voice-activated devices to do things like turn on airplane mode – said that while such attacks haven’t yet been reported, it’s possible that “malicious people already employ people to do what I do.”
Thanks for the cheerful thought, Nicholas.
There have been other (unfortunately successful) attempts to fool voice assistants, and there aren’t many ways to stop such audio from being broadcast at people’s ‘smart’ devices. One method, called DolphinAttack, even muted the target phone before issuing inaudible commands, so the owner wouldn’t hear the device’s responses.
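DolphinAttack works differently from the Berkeley approach: it amplitude-modulates a voice command onto an ultrasonic carrier, which humans can’t hear but which a microphone’s nonlinearity demodulates back into the audible band. The sketch below illustrates that principle with hypothetical parameters, using a 1 kHz tone as a stand-in for real speech and a simple squaring term as a stand-in for microphone nonlinearity.

```python
import numpy as np

# Illustrative parameters (not from the DolphinAttack paper).
fs = 192_000          # sample rate high enough to represent ultrasound
f_carrier = 30_000    # 30 kHz carrier, above human hearing (~20 kHz)
duration = 0.01
t = np.arange(int(fs * duration)) / fs

# Stand-in "voice command": a 1 kHz tone instead of real speech.
command = np.sin(2 * np.pi * 1_000 * t)

# Amplitude modulation: carrier * (1 + m * command). All energy sits
# around 30 kHz, so the transmission is inaudible.
m = 0.8
transmitted = np.cos(2 * np.pi * f_carrier * t) * (1 + m * command)

# A microphone with a quadratic nonlinearity effectively squares its
# input; squaring the AM signal recreates a baseband copy of `command`.
received = transmitted ** 2

# The strongest non-DC component of the received spectrum is the
# 1 kHz command tone, recovered in the audible band.
spectrum = np.abs(np.fft.rfft(received))
freqs = np.fft.rfftfreq(len(received), 1 / fs)
print(freqs[np.argmax(spectrum[1:]) + 1])  # → 1000.0
```

Expanding the square shows why: `transmitted**2` contains a `m * command` term at baseband, plus components around twice the carrier frequency that the phone’s audio pipeline filters out anyway.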
We need hardware makers and AI developers to tackle such subliminal messages, particularly for devices that don’t have screens to give users visual feedback and warnings about having received secret commands. In demonstrating what’s possible with this method, Carlini’s goal is to encourage companies to secure their products and services so users are protected from inaudible attacks.