
Google Assistant may be vulnerable to attacks via subsonic commands


It turns out that there are certain types of white noise that devices like Amazon's Alexa and Apple's Siri can hear but humans cannot, and experts think hackers know how to use them. That means if you fire up the wrong playlist, a malicious one, everything may sound normal while someone is actually attacking your phone or smart speaker. A report in Thursday's New York Times revealed that students from the University of California, Berkeley, and Georgetown University were able to hide subliminal commands for virtual personal assistants inside white noise played over loudspeakers.

So far, there's nothing to suggest that such subliminal messages have been used successfully outside of a research setting, but one of the Berkeley paper's authors believes it's just a matter of time: "My assumption is that the malicious people already employ people to do what I do," he told The New York Times.

Amazon said it doesn't disclose specific security measures, but that it has taken steps to ensure its Echo smart speaker is secure.


Apple has additional security features that prevent the HomePod smart speaker from unlocking doors, and it requires users to provide extra authentication, such as unlocking their iPhone, to access sensitive data. In the ultrasonic attacks, the transmitter must be close to the target device, although a more powerful ultrasonic transmitter can extend the effective range; the attacker still needs a direct line to the device, as the commands cannot penetrate walls. Moreover, Chinese and American researchers from China's Academy of Sciences and other institutions have showcased how they could control voice-activated devices with commands embedded in songs that can be broadcast over the radio or played on YouTube.
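Demonstrations of this kind of ultrasonic attack typically work by amplitude-modulating an ordinary voice command onto a carrier above the range of human hearing; the nonlinear response of a device's microphone then demodulates the signal back into the audible band. The sketch below illustrates only that modulation step and is a hypothetical reconstruction, not code from the research: the carrier frequency, sample rate, modulation depth, and file names are assumptions.

```python
import numpy as np
from scipy.io import wavfile

# --- Hypothetical parameters, not taken from the published research ---
CARRIER_HZ = 30_000   # ultrasonic carrier, above human hearing (~20 kHz)
SAMPLE_RATE = 96_000  # sample rate high enough to represent the carrier

def modulate_ultrasonic(command: np.ndarray, fs: int) -> np.ndarray:
    """Amplitude-modulate an audible voice command onto an ultrasonic carrier.

    A microphone with a nonlinear response can demodulate this signal back
    into the audible band, even though people nearby hear nothing.
    """
    t = np.arange(len(command)) / fs
    carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
    # Normalize the command to [-1, 1] and use it as the modulating envelope.
    envelope = command / (np.max(np.abs(command)) + 1e-9)
    # Standard AM: carrier scaled by (1 + m * envelope), with depth m = 0.8.
    modulated = (1.0 + 0.8 * envelope) * carrier
    return modulated / np.max(np.abs(modulated))

if __name__ == "__main__":
    # "command.wav" is a placeholder for a recorded voice command,
    # assumed to be mono and already resampled to SAMPLE_RATE.
    _, command = wavfile.read("command.wav")
    signal = modulate_ultrasonic(command.astype(np.float64), SAMPLE_RATE)
    wavfile.write("ultrasonic.wav", SAMPLE_RATE, (signal * 32767).astype(np.int16))
```

Played through a speaker capable of reproducing 30 kHz, the output is silent to bystanders, which is why this style of attack depends on a sufficiently powerful transmitter and an unobstructed path to the device's microphone.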

Testing against Mozilla's open source DeepSpeech speech recognition implementation, Carlini and Wagner achieved a 100 percent success rate without resorting to large amounts of distortion, a hallmark of past attempts at creating audio attacks. They say they can cancel out the audio that systems like Google Assistant, Siri and Alexa were supposed to hear and replace it with audio that the machines transcribe differently but that is nearly indistinguishable to the human ear. In the latest development of the research, some of the UC Berkeley researchers found a way to hide commands within music or spoken recordings, embedding a command into a four-second clip from Verdi's Requiem. But can such attacks be used for nefarious purposes?
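At a high level, attacks of this kind add a small, carefully optimized perturbation to an existing audio clip so that the recognizer transcribes an attacker-chosen phrase. The sketch below is a greatly simplified illustration of that idea, assuming PyTorch and a toy stand-in recognizer (ToyASR) rather than DeepSpeech; the model, character indices, step counts, and amplitude limit are all assumptions for demonstration, not the researchers' actual setup.

```python
import torch
import torch.nn as nn

# Toy stand-in for a speech recognizer: maps raw audio frames to per-step
# character logits. The published attack targeted Mozilla's DeepSpeech.
class ToyASR(nn.Module):
    def __init__(self, n_chars: int = 29, frame: int = 320):
        super().__init__()
        self.frame = frame
        self.rnn = nn.GRU(frame, 128, batch_first=True)
        self.out = nn.Linear(128, n_chars)   # index 0 = CTC blank

    def forward(self, audio: torch.Tensor) -> torch.Tensor:
        # audio: (batch, samples) -> frames: (batch, steps, frame)
        frames = audio.unfold(1, self.frame, self.frame)
        h, _ = self.rnn(frames)
        return self.out(h).log_softmax(dim=-1)  # (batch, steps, chars)

def make_adversarial(model, audio, target, steps=500, lr=1e-3, eps=0.01):
    """Optimize a small perturbation delta so model(audio + delta) decodes
    to `target`: a simplified version of the gradient-descent recipe used
    in adversarial audio attacks."""
    delta = torch.zeros_like(audio, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    ctc = nn.CTCLoss(blank=0)
    for _ in range(steps):
        log_probs = model(audio + delta)          # (1, T, C)
        T = log_probs.size(1)
        loss = ctc(log_probs.transpose(0, 1),     # CTCLoss expects (T, N, C)
                   target,
                   torch.tensor([T]),
                   torch.tensor([target.size(1)]))
        opt.zero_grad()
        loss.backward()
        opt.step()
        # Keep the perturbation quiet by clipping its amplitude.
        with torch.no_grad():
            delta.clamp_(-eps, eps)
    return (audio + delta).detach()

if __name__ == "__main__":
    model = ToyASR()
    music = torch.randn(1, 16000) * 0.1          # placeholder 1-second clip
    target = torch.tensor([[8, 5, 12, 12, 15]])  # hypothetical char indices
    adversarial = make_adversarial(model, music, target)
```

In the published work the distortion is penalized explicitly so the added noise stays close to imperceptible; the simple amplitude clamp above is a crude stand-in for that constraint.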

Carlini added: "We want to demonstrate that it's possible and then hope that other people will say, 'OK, this is possible, now let's try and fix it.'"