Entries Tagged "eavesdropping"


Eavesdropping on Typing Over Voice-Over-IP

Interesting research: “Don’t Skype & Type! Acoustic Eavesdropping in Voice-over-IP”:

Abstract: Acoustic emanations of computer keyboards represent a serious privacy issue. As demonstrated in prior work, spectral and temporal properties of keystroke sounds might reveal what a user is typing. However, previous attacks assumed relatively strong adversary models that are not very practical in many real-world settings. Such strong models assume: (i) adversary’s physical proximity to the victim, (ii) precise profiling of the victim’s typing style and keyboard, and/or (iii) significant amount of victim’s typed information (and its corresponding sounds) available to the adversary.

In this paper, we investigate a new and practical keyboard acoustic eavesdropping attack, called Skype & Type (S&T), which is based on Voice-over-IP (VoIP). S&T relaxes prior strong adversary assumptions. Our work is motivated by the simple observation that people often engage in secondary activities (including typing) while participating in VoIP calls. VoIP software can acquire acoustic emanations of pressed keystrokes (which might include passwords and other sensitive information) and transmit them to others involved in the call. In fact, we show that very popular VoIP software (Skype) conveys enough audio information to reconstruct the victim’s input: keystrokes typed on the remote keyboard. In particular, our results demonstrate that, given some knowledge of the victim’s typing style and the keyboard, the attacker attains top-5 accuracy of 91.7% in guessing a random key pressed by the victim. (The accuracy goes down to a still-alarming 41.89% if the attacker is oblivious to both the typing style and the keyboard.) Finally, we provide evidence that the Skype & Type attack is robust to various VoIP issues (e.g., Internet bandwidth fluctuations and presence of voice over keystrokes), thus confirming the feasibility of this attack.
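The attack described above reduces to two steps: isolate each keystroke's sound in the call audio, then classify it against labeled samples. Below is a minimal Python sketch of that idea, not the authors' implementation; the sample rate, energy threshold, log-spectrum features, and nearest-neighbor top-5 rule are all illustrative assumptions.

    # Toy sketch of a Skype & Type-style classifier (assumed details,
    # not the paper's code): find keystrokes as energy spikes, extract
    # log-spectrum features, rank candidate keys by nearest neighbor.
    import numpy as np

    RATE = 44100                 # assumed sample rate of the call audio
    SEG = int(0.10 * RATE)       # ~100 ms window around each keypress

    def find_keystrokes(audio, thresh=5.0):
        """Sample indices where local energy spikes above `thresh`
        times the median energy (a crude keypress detector)."""
        win = int(0.01 * RATE)                    # 10 ms energy windows
        energy = np.convolve(audio ** 2, np.ones(win), mode="same")
        floor = np.median(energy)
        peaks, last = [], -SEG
        for i in np.nonzero(energy > thresh * floor)[0]:
            if i - last >= SEG:                   # debounce: one hit per press
                peaks.append(int(i))
                last = i
        return peaks

    def features(audio, i):
        """Log-magnitude spectrum of the segment centered on sample i."""
        seg = audio[max(0, i - SEG // 2): i + SEG // 2]
        seg = np.pad(seg, (0, SEG - len(seg)))    # pad short edge segments
        return np.log1p(np.abs(np.fft.rfft(seg * np.hanning(SEG))))

    def top5(sample_feat, train):
        """Rank keys in `train` (a list of (key, feature) pairs collected
        beforehand) by distance; return the five closest candidates."""
        dists = sorted((np.linalg.norm(sample_feat - f), key)
                       for key, f in train)
        return [key for _, key in dists[:5]]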

News article.

Posted on October 28, 2016 at 5:24 AM

Hacking Wireless Tire-Pressure Monitoring System

Research paper: “Security and Privacy Vulnerabilities of In-Car Wireless Networks: A Tire Pressure Monitoring System Case Study,” by Ishtiaq Rouf, Rob Miller, Hossen Mustafa, Travis Taylor, Sangho Oh, Wenyuan Xu, Marco Gruteser, Wade Trappe, and Ivan Seskar:

Abstract: Wireless networks are being integrated into the modern automobile. The security and privacy implications of such in-car networks, however, have not been well understood, as their transmissions propagate beyond the confines of a car’s body. To understand the risks associated with these wireless systems, this paper presents a privacy and security evaluation of wireless Tire Pressure Monitoring Systems using both laboratory experiments with isolated tire pressure sensor modules and experiments with a complete vehicle system. We show that eavesdropping is easily possible at a distance of roughly 40m from a passing vehicle. Further, reverse-engineering of the underlying protocols revealed static 32-bit identifiers and that messages can be easily triggered remotely, which raises privacy concerns as vehicles can be tracked through these identifiers. Further, current protocols do not employ authentication and vehicle implementations do not perform basic input validation, thereby allowing for remote spoofing of sensor messages. We validated this experimentally by triggering tire pressure warning messages in a moving vehicle from a customized software radio attack platform located in a nearby vehicle. Finally, the paper concludes with a set of recommendations for improving the privacy and security of tire pressure monitoring systems and other forthcoming in-car wireless sensor networks.
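The privacy half of the result follows directly from the static 32-bit identifier: anyone with a cheap receiver can log the IDs they overhear and correlate sightings across locations. Here is a toy Python sketch of that tracking logic; the 7-byte packet layout is hypothetical, since real TPMS formats are vendor-specific and had to be reverse-engineered by the authors.

    # Toy illustration of tracking vehicles by static TPMS sensor IDs.
    # The packet layout is hypothetical; real formats vary by vendor.
    import struct
    import time
    from collections import defaultdict

    def parse_packet(raw: bytes):
        """Unpack an assumed frame: 32-bit sensor ID, pressure (kPa),
        temperature (degrees C)."""
        sensor_id, pressure, temp = struct.unpack(">IHb", raw[:7])
        return sensor_id, pressure, temp

    sightings = defaultdict(list)   # sensor_id -> [(timestamp, receiver)]

    def log_sighting(raw: bytes, receiver: str):
        """Record which roadside receiver overheard which sensor ID."""
        sensor_id, _, _ = parse_packet(raw)
        sightings[sensor_id].append((time.time(), receiver))

    def tracked_vehicles(min_receivers=2):
        """Sensor IDs seen at multiple receiver locations; with static
        IDs, each such ID pins down one vehicle's movements."""
        return {sid: hits for sid, hits in sightings.items()
                if len({r for _, r in hits}) >= min_receivers}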

Posted on September 16, 2016 at 8:59 AM

Leaked Product Demo from RCS Labs

We have a leak from yet another cyberweapons arms manufacturer: the Italian company RCS Labs. Vice Motherboard reports on a surveillance video demo:

The video shows an RCS Lab employee performing a live demo of the company’s spyware to an unidentified man, including a tutorial on how to use the spyware’s control software to perform a man-in-the-middle attack and infect a target computer whose user visits a specific website.

RCS Lab’s spyware, called Mito3, allows agents to easily set up these kinds of attacks just by applying a rule in the software settings. An agent can choose whatever site he or she wants to use as a vector, click on a dropdown menu and select “inject HTML” to force the malicious popup to appear, according to the video.

Mito3 allows customers to listen in on the target and intercept voice calls, text messages, video calls, social media activity, and chats, apparently on both computer and mobile platforms. It also allows police to track and geolocate the target via GPS. It even offers automatic transcription of the recordings, according to a confidential brochure obtained by Motherboard.
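Stripped of the product polish, the injection rule described in the video is a simple response-rewriting step at the man-in-the-middle position. A deliberately minimal Python sketch of the concept follows; the rule table and popup markup are invented, and nothing here is RCS Lab's code.

    # Conceptual sketch of rule-based HTML injection at a MITM point.
    # Hypothetical rule table; not RCS Lab's software.
    INJECT_RULES = {
        "example.com": '<div id="fake-update">Update required</div>',
    }

    def apply_rules(host: str, body: str) -> str:
        """If the requested host matches a configured rule, splice the
        rule's HTML into the intercepted response before forwarding."""
        snippet = INJECT_RULES.get(host)
        if snippet and "</body>" in body:
            return body.replace("</body>", snippet + "</body>", 1)
        return body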

Slashdot thread.

Posted on September 9, 2016 at 2:18 PM

Eavesdropping by the Foscam Security Camera

Brian Krebs has a really weird story about the built-in eavesdropping by the Chinese-made Foscam security camera:

Imagine buying an internet-enabled surveillance camera, network attached storage device, or home automation gizmo, only to find that it secretly and constantly phones home to a vast peer-to-peer (P2P) network run by the Chinese manufacturer of the hardware. Now imagine that the geek gear you bought doesn’t actually let you block this P2P communication without some serious networking expertise or hardware surgery that few users would attempt.

Posted on February 24, 2016 at 12:05 PM

Security vs. Surveillance

Both the “going dark” metaphor of FBI Director James Comey and the contrasting “golden age of surveillance” metaphor of privacy law professor Peter Swire focus on the value of data to law enforcement. As framed in the media, encryption debates are about whether law enforcement should have surreptitious access to data, or whether companies should be allowed to provide strong encryption to their customers.

It’s a myopic framing that focuses only on one threat — criminals, including domestic terrorists — and the demands of law enforcement and national intelligence. This obscures the most important aspects of the encryption issue: the security it provides against a much wider variety of threats.

Encryption secures our data and communications against eavesdroppers like criminals, foreign governments, and terrorists. We use it every day to hide our cell phone conversations from eavesdroppers, and to hide our Internet purchasing from credit card thieves. Dissidents in China and many other countries use it to avoid arrest. It’s a vital tool for journalists to communicate with their sources, for NGOs to protect their work in repressive countries, and for attorneys to communicate with their clients.

Many technological security failures of today can be traced to failures of encryption. In 2014 and 2015, unnamed hackers — probably the Chinese government — stole 21.5 million personal files of U.S. government employees and others. They wouldn’t have obtained this data if it had been encrypted. Many large-scale criminal data thefts were made either easier or more damaging because data wasn’t encrypted: Target, TJ Maxx, Heartland Payment Systems, and so on. Many countries are eavesdropping on the unencrypted communications of their own citizens, looking for dissidents and other voices they want to silence.

Adding backdoors will only exacerbate the risks. As technologists, we can’t build an access system that only works for people of a certain citizenship, or with a particular morality, or only in the presence of a specified legal document. If the FBI can eavesdrop on your text messages or get at your computer’s hard drive, so can other governments. So can criminals. So can terrorists. This is not theoretical; again and again, backdoor accesses built for one purpose have been surreptitiously used for another. Vodafone built backdoor access into Greece’s cell phone network for the Greek government; it was used against the Greek government in 2004-2005. Google kept a database of backdoor accesses provided to the U.S. government under CALEA; the Chinese breached that database in 2009.

We’re not being asked to choose between security and privacy. We’re being asked to choose between less security and more security.

This trade-off isn’t new. In the mid-1990s, cryptographers argued that escrowing encryption keys with central authorities would weaken security. In 2011, cybersecurity researcher Susan Landau published her excellent book Surveillance or Security?, which deftly parsed the details of this trade-off and concluded that security is far more important.

Ubiquitous encryption protects us much more from bulk surveillance than from targeted surveillance. For a variety of technical reasons, computer security is extraordinarily weak. If a sufficiently skilled, funded, and motivated attacker wants in to your computer, they’re in. If they’re not, it’s because you’re not high enough on their priority list to bother with. Widespread encryption forces the listener — whether a foreign government, criminal, or terrorist — to target. And this hurts repressive governments much more than it hurts terrorists and criminals.

Of course, criminals and terrorists have used, are using, and will use encryption to hide their planning from the authorities, just as they will use many aspects of society’s capabilities and infrastructure: cars, restaurants, telecommunications. In general, we recognize that such things can be used by both honest and dishonest people. Society thrives nonetheless because the honest so outnumber the dishonest. Compare this with the tactic of secretly poisoning all the food at a restaurant. Yes, we might get lucky and poison a terrorist before he strikes, but we’ll harm all the innocent customers in the process. Weakening encryption for everyone is harmful in exactly the same way.

This essay previously appeared as part of the paper “Don’t Panic: Making Progress on the ‘Going Dark’ Debate.” It was reprinted on Lawfare. A modified version was reprinted by the MIT Technology Review.

Posted on February 3, 2016 at 6:09 AM

Integrity and Availability Threats

Cyberthreats are changing. We’re worried about hackers crashing airplanes by hacking into computer networks. We’re worried about hackers remotely disabling cars. We’re worried about manipulated counts from electronic voting booths, remote murder through hacked medical devices and someone hacking an Internet thermostat to turn off the heat and freeze the pipes.

The traditional academic way of thinking about information security is as a triad: confidentiality, integrity, and availability. For years, the security industry has been trying to prevent data theft. Stolen data is used for identity theft and other frauds. It can be embarrassing, as in the Ashley Madison breach. It can be damaging, as in the Sony data theft. It can even be a national security threat, as in the case of the Office of Personnel Management data breach. These are all breaches of privacy and confidentiality.

As bad as these threats are, they seem abstract. It’s been hard to craft public policy around them. But this is all changing. Threats to integrity and availability are much more visceral and much more devastating. And they will spur legislative action in a way that privacy risks never have.

Take one example: driverless cars and smart roads.

We’re heading toward a world where driverless cars will automatically communicate with each other and the roads, automatically taking us where we need to go safely and efficiently. The confidentiality threats are real: Someone who can eavesdrop on those communications can learn where the cars are going and maybe who is inside them. But the integrity threats are much worse.

Someone who can feed the cars false information can potentially cause them to crash into each other or nearby walls. Someone could also disable your car so it can’t start. Or worse, disable the entire system so that no one’s car can start.

This new rise in integrity and availability threats is a result of the Internet of Things. The objects we own and interact with will all become computerized and connected to the Internet. But it’s actually more complicated than that.

What I’m calling the “World-Sized Web” is a combination of these Internet-enabled things, cloud computing, mobile computing, and the pervasiveness that comes from these systems being always on. Together this means that computers and networks will be much more embedded in our daily lives. Yes, there will be more need for confidentiality, but there is a newfound need to ensure that these systems can’t be subverted to do real damage.

It’s one thing if your smart door lock can be eavesdropped to know who is home. It’s another thing entirely if it can be hacked to prevent you from opening your door or allow a burglar to open the door.

In separate testimonies before different House and Senate committees last year, both Director of National Intelligence James Clapper and NSA Director Mike Rogers warned of these threats. They both consider them far larger and more important than the confidentiality threat, and believe that we are vulnerable to attack.

And once the attacks start doing real damage — once someone dies from a hacked car or medical device, or an entire city’s 911 services go down for a day — there will be a real outcry to do something.

Congress will be forced to act. They might authorize more surveillance. They might authorize more government involvement in private-sector cybersecurity. They might try to ban certain technologies or certain uses. The results won’t be well-thought-out, and they probably won’t mitigate the actual risks. If we’re lucky, they won’t cause even more problems.

I worry that we’re rushing headlong into the World-Sized Web, and not paying enough attention to the new threats that it brings with it. Again and again, we’ve tried to retrofit security in after the fact.

It would be nice if we could do it right from the beginning this time. That’s going to take foresight and planning. The Obama administration just proposed spending $4 billion to advance the engineering of driverless cars.

How about focusing some of that money on the integrity and availability threats from that and similar technologies?

This essay previously appeared on CNN.com.

Posted on January 29, 2016 at 7:29 AM

Smart Watch that Monitors Typing

Here’s a watch that monitors the movements of your hand and can guess what you’re typing.

Using the watch’s built-in motion sensors, more specifically data from the accelerometer and gyroscope, researchers were able to create a 3D map of the user’s hand movements while typing on a keyboard.

The researchers then created two algorithms, one for detecting what keys were being pressed, and one for guessing what word was typed.

The first algorithm recorded the points where the smartwatch’s sensors detected a dip in movement, treating each as a keystroke, and then created a heatmap of the common spots where the user pressed down.

Based on known keyboard layouts, these spots were attributed to letters on the left side of the keyboard (the side typed by the hand wearing the watch).

The second algorithm took this data and, by analyzing the pauses between smartwatch (left-hand) keystrokes, detected how many letters were pressed with the right hand, based on the user’s regular keystroke frequency.

Using a simple dictionary lookup, the algorithm then managed to reliably reproduce the words typed on the keyboard.
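Put together, the two algorithms amount to recovering a left/right-hand pattern and filtering a dictionary with it. A rough Python sketch of that reconstruction follows (assumed details, not the researchers' code; it also glosses over words that begin with a right-hand letter, which would need the surrounding space-key timing):

    # Toy reconstruction: the watch senses only left-hand presses, and
    # each gap between them, measured in average key intervals, hides
    # some number of right-hand letters. Assumed details throughout.
    LEFT = set("qwertasdfgzxcvb")     # QWERTY keys struck by the left hand

    def hand_pattern(word: str) -> str:
        """e.g. 'spam' -> 'LRLR': which hand presses each letter."""
        return "".join("L" if c in LEFT else "R" for c in word.lower())

    def observed_pattern(left_times, avg_interval):
        """L/R pattern recoverable from left-hand press times alone."""
        pattern = "L"
        for prev, cur in zip(left_times, left_times[1:]):
            hidden = max(0, round((cur - prev) / avg_interval) - 1)
            pattern += "R" * hidden + "L"
        return pattern

    def candidates(left_times, avg_interval, dictionary):
        """Dictionary words consistent with the observation; trailing
        right-hand letters leave no left-hand signal, so a match may
        extend past the observed pattern with 'R's only."""
        obs = observed_pattern(left_times, avg_interval)
        return [w for w in dictionary
                if hand_pattern(w).startswith(obs)
                and set(hand_pattern(w)[len(obs):]) <= {"R"}]

    # 's' at 0.0 s and 'a' at 0.4 s with a 0.2 s average key interval
    # give 'LRL...', consistent with 'spam' and 'slam' but not 'data'.
    print(candidates([0.0, 0.4], 0.2, ["spam", "slam", "data", "mail"]))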

Posted on September 18, 2015 at 5:20 AM

Drone Self-Defense and the Law

Last month, a Kentucky man shot down a drone that was hovering near his backyard.

WDRB News reported that the camera drone’s owners soon showed up at the home of the shooter, William H. Merideth: “Four guys came over to confront me about it, and I happened to be armed, so that changed their minds,” Merideth said. “They asked me, ‘Are you the S-O-B that shot my drone?’ and I said, ‘Yes I am,'” he said. “I had my 40 mm Glock on me and they started toward me and I told them, ‘If you cross my sidewalk, there’s gonna be another shooting.'” Police charged Merideth with criminal mischief and wanton endangerment.

This is a trend. People have shot down drones in southern New Jersey and rural California as well. It’s illegal, and they get arrested for it.

Technology changes everything. Specifically, it upends long-standing societal balances around issues like security and privacy. When a capability becomes possible, or cheaper, or more common, the changes can be far-reaching. Rebalancing security and privacy after technology changes capabilities can be very difficult, and take years. And we’re not very good at it.

The security threats from drones are real, and the government is taking them seriously. In January, a man lost control of his drone, which crashed on the White House lawn. In May, another man was arrested for trying to fly his drone over the White House fence, and another last week for flying a drone into the stadium where the U.S. Open was taking place.

Drones have been used to try to deliver drugs to prisons in Maryland, Ohio, and South Carolina, so far.

There have been many near-misses between drones and airplanes. Many people have written about the possible terrorist uses of drones.

Defenses are being developed. Both Lockheed Martin and Boeing sell anti-drone laser weapons. One company sells shotgun shells specifically designed to shoot down drones.

Other companies are working on technologies to detect and disable them safely. Some of those technologies were used to provide security at this year’s Boston Marathon.

Law enforcement can deploy these technologies, but under current law it’s illegal to shoot down a drone, even if it’s hovering above your own property. In our society, you’re generally not allowed to take the law into your own hands. You’re expected to call the police and let them deal with it.

There’s an alternate theory, though, from law professor Michael Froomkin. He argues that self-defense should be permissible against drones simply because you don’t know their capabilities. We know, for example, that people have mounted guns on drones, which means they could pose a threat to life. Note that this legal theory has not been tested in court.

Increasingly, government is regulating drones and drone flights, both at the state level and through the FAA. There are proposals to require that drones carry an identifiable transponder, or that no-fly zones be programmed into drone software.

Still, a large number of security issues remain unresolved. How do we feel about drones with long-range listening devices, for example? Or drones hovering outside our property and photographing us through our windows?

What’s going on is that drones have changed how we think about security and privacy within our homes, by removing the protections we used to get from fences and walls. Of course, being spied on and shot at from above is nothing new, but access to those technologies was expensive and largely the purview of governments and some corporations. Drones put these capabilities into the hands of hobbyists, and we don’t know what to do about it.

The issues around drones will get worse as we move from remotely piloted aircraft to true drones: aircraft that operate autonomously from a computer program. For the first time, autonomous robots, with ever-increasing intelligence and capabilities at an ever-decreasing cost, will have access to public spaces. This will create serious problems for society, because our legal system is largely based on deterring human miscreants rather than their proxies.

Our desire to shoot down a drone hovering nearby is understandable, given its potential threat. Society’s need for people not to take the law into their own hands, and especially not to fire guns into the air, is also understandable. These two positions are increasingly coming into conflict, and will require increasing government regulation to sort out. But more importantly, we need to rethink our assumptions of security and privacy in a world of autonomous drones, long-range cameras, face recognition, and the myriad other technologies that are increasingly in the hands of everyone.

This essay previously appeared on CNN.com.

Posted on September 11, 2015 at 6:45 AM
