by: Thomas Macaulay
Voice activation is a familiar feature in the millions of UK households that have an active smart speaker, and the quarter of British consumers who use a virtual assistant on their smartphones. But has the technology created a new cyber threat?
Hackers need only a short audio sample to synthesize or replay a human voice convincingly enough to trick people and security systems.
David Emm, principal security researcher at Kaspersky Lab, believes the central risk posed by a voice-activated device is that we forget that it’s there.
“What it does offer is a way to get people when they’re less guarded,” Emm tells Techworld. “If somebody has access to a device that can listen to what’s going on, then they can scoop up lots and lots of information because in daily life we talk a lot – a lot more than we type on a keyboard – and therefore the potential for gathering information is so much greater.”
The risks have already made headlines. A family in Oregon found that Alexa had recorded a private conversation and sent it to a contact in their address book, and there have been numerous reports of people placing accidental orders while their smart speaker was listening in on their conversation.
Emm says another danger is that companies could use people’s voices to personalize advertising.
“They’re clearly recording lots of information and that information is useful. If you’re going to have a useful Alexa, or a useful Siri, then that needs to be trained,” he says. “Otherwise you’re going to end up with it just getting stuff wrong and ordering milk when you wanted a microwave oven or something like that.
“To make it an effective device to use, its capabilities have got to be honed. That’s going to be done through machine learning at the back end, but it does give them an awful lot of information.
“Now I don’t know if they’re currently using them for advertising purposes – what I’ve read suggests not – but the issue is in the future, if they have all this kind of information, it’s going to be a very tempting pot of data to try and monetize.”
The threat could grow as the technology becomes more powerful and the methods to intercept it improve. Google Assistant can now make phone calls on your behalf, while researchers have shown that Siri, Alexa and Google Assistant can all be hijacked using audio commands that are inaudible to the human ear.
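The inaudible-command research works by amplitude-modulating an ordinary voice command onto an ultrasonic carrier: the emitted sound sits above the roughly 20kHz limit of human hearing, but non-linearities in a microphone's hardware demodulate it back into the audible band, where the assistant's speech recognizer picks it up. A minimal sketch of that modulation step, using an illustrative pure tone in place of a real recorded command (the frequencies and sample rate here are assumptions for illustration, not parameters from the published attacks):

```python
import math

SAMPLE_RATE = 192_000  # hardware able to emit ultrasound needs a high sample rate
CARRIER_HZ = 30_000    # above the ~20 kHz ceiling of human hearing
COMMAND_HZ = 400       # stand-in tone for the audible voice command

def modulate(duration_s=0.01):
    """Amplitude-modulate a 'command' tone onto an ultrasonic carrier.

    The resulting signal contains energy only near the carrier frequency,
    so a person nearby hears nothing; a microphone's non-linear front end
    can demodulate it back into the audible range.
    """
    n = int(SAMPLE_RATE * duration_s)
    samples = []
    for i in range(n):
        t = i / SAMPLE_RATE
        command = math.sin(2 * math.pi * COMMAND_HZ * t)  # the hidden command
        carrier = math.sin(2 * math.pi * CARRIER_HZ * t)  # ultrasonic carrier
        samples.append((1 + command) / 2 * carrier)       # classic AM envelope
    return samples

signal = modulate()
```

A real attack would substitute a recorded speech waveform for the tone and play the result through an ultrasonic transducer; the sketch only shows why the payload is silent to bystanders.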
Still, not everyone shares Emm’s concerns. Listening in on conversations is time-consuming, and most people would struggle to replicate a voice convincingly.
Dan Kaminsky, the CSO and co-founder of cyber-security company White Ops, joked that the incident in Oregon was caused because “Alexa re-implemented the butt dial”.
Emm concedes that hackers are likely to use voice interception only in highly targeted attacks.
“That would go together with a targeted campaign where what they’re looking to do is to gather intel for a wider attack,” he explains. “Apply that to you at home and what’s to be gained by doing that? The answer is well probably not actually much – unless they want to get to your company through you.
“On the other hand, from the point of view of gathering data and the privacy implications, they certainly are much bigger.”
For those who are concerned, there are ways to mitigate the risks. Emm suggests securing the home network, turning off the microphone when it is not needed, and installing antivirus software on the device.
They should also consider blocking or password protecting purchasing in the device settings.
“We’re inclined when we buy new devices to just get it up and running with the default settings and we’ve already seen instances where people have ordered things online inadvertently using the Amazon Echo simply because they’ve got one-click purchasing set up,” says Emm.
Further support will soon be on its way. Researchers from the Ubiquitous Security and Privacy Research Laboratory at the University of Buffalo are developing an app that can detect a voice replay attack. The system uses the magnetometer in a smartphone compass to detect the magnetic fields from a smart speaker, and the phone’s trajectory mapping algorithm to measure the distance between the speaker and the phone.
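The intuition behind the Buffalo approach is that a loudspeaker's voice coil produces a magnetic field that a living human speaker does not, and that this field only registers on a phone's magnetometer at close range. The sketch below caricatures that decision rule; the baseline, threshold and distance values are invented for illustration and are not the researchers' actual algorithm or units:

```python
# Hypothetical values for illustration only - not from the Buffalo research.
BASELINE_UT = 45.0            # typical ambient geomagnetic field, microtesla
ANOMALY_THRESHOLD_UT = 8.0    # assumed trigger level for a nearby loudspeaker
MAX_SPEAKER_DISTANCE_CM = 10.0  # a coil's field falls off quickly with distance

def looks_like_replay(magnetometer_ut, distance_cm):
    """Flag a voice sample as a possible loudspeaker replay.

    magnetometer_ut: field strength read while the 'voice' is playing
    distance_cm: phone-to-source distance from trajectory mapping
    """
    anomaly = abs(magnetometer_ut - BASELINE_UT)
    # Only a close-range source with a magnetic signature is suspicious:
    return anomaly > ANOMALY_THRESHOLD_UT and distance_cm <= MAX_SPEAKER_DISTANCE_CM

print(looks_like_replay(58.0, 5.0))   # phone held near a speaker -> True
print(looks_like_replay(45.5, 30.0))  # human speaking at arm's length -> False
```

The real system would fuse many sensor readings over time rather than a single threshold test, but the sketch shows why combining field strength with distance is what makes the signal usable.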
“We cannot decide if voice authentication will be pervasive in the future. It might be. We’re already seeing the increasing trend,” the laboratory’s director Kui Ren said in a press release. “And if that is the case, we have to defend against voice replay attacks. Otherwise, voice authentication cannot be secure.”
Developers also need to take care not to neglect security in favour of usability and cost. Emm recommends making security the foremost priority at the product design stage.
“That may mean talking to companies like Kaspersky Lab to see if we want to build security into this thing and bake it into the actual firmware, or maybe we need to just be aware that if we’re doing something that’s capturing data somebody might want to get access to that so we need to make this thing robust,” he says.
“We need to be able to give it the capability to update. If we deliver a device to somebody and there is no capability of updating it and then somebody finds a vulnerability, then everybody who’s got that device is wide open to attack and there’s no way after the event of mitigating it.
“Developing a device which is as secure as you can make it at the start is going to be great, but also making sure that you can patch anything that comes to light is going to be vital.”