Entries Tagged "keys"


Man-in-the-Middle Attack against Electronic Car-Door Openers

This is an interesting tactic, and there’s a video of it being used:

The theft took just one minute and the Mercedes car, stolen from the Elmdon area of Solihull on 24 September, has not been recovered.

In the footage, one of the men can be seen waving a box in front of the victim’s house.

The device receives a signal from the key inside and transmits it to the second box next to the car.

The car's systems are then tricked into thinking the key is present: it unlocks, and the ignition can be started.
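
The trick works because nothing in the challenge-response exchange binds the key's reply to physical distance. Here is a minimal sketch of that failure mode, with an invented HMAC-based protocol standing in for whatever the manufacturer actually uses:

```python
# Toy model of a keyless-entry relay attack (hypothetical protocol, not any
# manufacturer's actual implementation). The car assumes that a correct
# response implies the key is nearby; a relay simply forwards the messages.
import hashlib
import hmac
import os

SHARED_KEY = os.urandom(16)  # secret shared by the car and the key fob

def car_challenge() -> bytes:
    """Car broadcasts a random challenge near the door handle."""
    return os.urandom(8)

def fob_response(challenge: bytes) -> bytes:
    """The fob (inside the house) answers any challenge it hears."""
    return hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()

def car_verify(challenge: bytes, response: bytes) -> bool:
    expected = hmac.new(SHARED_KEY, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

# The relay: box A near the house forwards the challenge to the fob,
# box B next to the car forwards the response back. The bytes are unchanged,
# so the car cannot tell that the fob is 30 meters away rather than 3.
challenge = car_challenge()
relayed_challenge = challenge             # box B -> box A -> fob
response = fob_response(relayed_challenge)
relayed_response = response               # fob -> box A -> box B -> car
print("Car unlocks:", car_verify(challenge, relayed_response))  # True
```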

Posted on November 28, 2017 at 6:03 AM

Vulnerability in Amazon Key

Amazon Key is an IoT door lock that can enable one-time access codes for delivery people. To further secure that system, Amazon sells Cloud Cam, a camera that watches the door to ensure that delivery people don’t abuse their one-time access privilege.

Cloud Cam has been hacked:

But now security researchers have demonstrated that with a simple program run from any computer in Wi-Fi range, that camera can be not only disabled but frozen. A viewer watching its live or recorded stream sees only a closed door, even as their actual door is opened and someone slips inside. That attack would potentially enable rogue delivery people to stealthily steal from Amazon customers, or otherwise invade their inner sanctum.

And while the threat of a camera-hacking courier seems an unlikely way for your house to be burgled, the researchers argue it potentially strips away a key safeguard in Amazon’s security system.

Amazon is patching the system.

Posted on November 20, 2017 at 6:19 AM

New KRACK Attack Against Wi-Fi Encryption

Mathy Vanhoef has just published a devastating attack against WPA2, the 14-year-old encryption protocol used by pretty much all Wi-Fi systems. It’s an interesting attack, where the attacker forces the protocol to reuse a key. The authors call this attack KRACK, for Key Reinstallation Attacks.

This is yet another in a series of marketed attacks, with a cool name, a website, and a logo. The Q&A on the website answers a lot of questions about the attack and its implications. And there's lots of good information in this Ars Technica article.

There is an academic paper, too:

“Key Reinstallation Attacks: Forcing Nonce Reuse in WPA2,” by Mathy Vanhoef and Frank Piessens.

Abstract: We introduce the key reinstallation attack. This attack abuses design or implementation flaws in cryptographic protocols to reinstall an already-in-use key. This resets the key’s associated parameters such as transmit nonces and receive replay counters. Several types of cryptographic Wi-Fi handshakes are affected by the attack. All protected Wi-Fi networks use the 4-way handshake to generate a fresh session key. So far, this 14-year-old handshake has remained free from attacks, and is even proven secure. However, we show that the 4-way handshake is vulnerable to a key reinstallation attack. Here, the adversary tricks a victim into reinstalling an already-in-use key. This is achieved by manipulating and replaying handshake messages. When reinstalling the key, associated parameters such as the incremental transmit packet number (nonce) and receive packet number (replay counter) are reset to their initial value. Our key reinstallation attack also breaks the PeerKey, group key, and Fast BSS Transition (FT) handshake. The impact depends on the handshake being attacked, and the data-confidentiality protocol in use. Simplified, against AES-CCMP an adversary can replay and decrypt (but not forge) packets. This makes it possible to hijack TCP streams and inject malicious data into them. Against WPA-TKIP and GCMP the impact is catastrophic: packets can be replayed, decrypted, and forged. Because GCMP uses the same authentication key in both communication directions, it is especially affected.

Finally, we confirmed our findings in practice, and found that every Wi-Fi device is vulnerable to some variant of our attacks. Notably, our attack is exceptionally devastating against Android 6.0: it forces the client into using a predictable all-zero encryption key.
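
To see why resetting the nonce is so damaging, here is a minimal sketch in Python (using the third-party "cryptography" package). It uses AES in CTR mode, the keystream construction at the core of CCMP, and shows the consequence of keystream reuse rather than the handshake manipulation itself: two messages encrypted under the same key and nonce leak their XOR, so one known plaintext reveals the other.

```python
# Illustration of nonce reuse only -- not the KRACK handshake manipulation.
# Requires the third-party 'cryptography' package.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
nonce = os.urandom(16)          # after a key reinstallation, this repeats

def ctr_encrypt(k: bytes, n: bytes, plaintext: bytes) -> bytes:
    enc = Cipher(algorithms.AES(k), modes.CTR(n)).encryptor()
    return enc.update(plaintext) + enc.finalize()

p1 = b"attack at dawn!!"        # known to the eavesdropper
p2 = b"meet me at noon."        # the secret
c1 = ctr_encrypt(key, nonce, p1)
c2 = ctr_encrypt(key, nonce, p2)  # same key AND nonce: keystream repeats

# XOR of the two ciphertexts equals XOR of the two plaintexts; with p1
# known, p2 falls out without ever learning the key.
recovered = bytes(a ^ b ^ c for a, b, c in zip(c1, c2, p1))
print(recovered)                # b'meet me at noon.'
```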

I’m just reading about this now, and will post more information as I learn it.

EDITED TO ADD: More news.

EDITED TO ADD: This meets my definition of brilliant. The attack is blindingly obvious once it’s pointed out, but for over a decade no one noticed it.

EDITED TO ADD: Matthew Green has a blog post on what went wrong. The vulnerability is in the interaction between two protocols. At a meta level, he blames the opaque IEEE standards process:

One of the problems with IEEE is that the standards are highly complex and get made via a closed-door process of private meetings. More importantly, even after the fact, they’re hard for ordinary security researchers to access. Go ahead and google for the IETF TLS or IPSec specifications—you’ll find detailed protocol documentation at the top of your Google results. Now go try to Google for the 802.11i standards. I wish you luck.

The IEEE has been making a few small steps to ease this problem, but they’re hyper-timid incrementalist bullshit. There’s an IEEE program called GET that allows researchers to access certain standards (including 802.11) for free, but only after they’ve been public for six months—coincidentally, about the same time it takes for vendors to bake them irrevocably into their hardware and software.

This whole process is dumb and—in this specific case—probably just cost industry tens of millions of dollars. It should stop.

Nicholas Weaver explains why most people shouldn’t worry about this:

So unless your Wi-Fi password looks something like a cat’s hairball (e.g. “:SNEIufeli7rc”—which is not guessable with a few million tries by a computer), a local attacker had the capability to determine the password, decrypt all the traffic, and join the network before KRACK.

KRACK is, however, relevant for enterprise Wi-Fi networks: networks where you needed to accept a cryptographic certificate to join initially and have to provide both a username and password. KRACK represents a new vulnerability for these networks. Depending on some esoteric details, the attacker can decrypt encrypted traffic and, in some cases, inject traffic onto the network.

But in none of these cases can the attacker join the network completely. And the most significant of these attacks affect Linux devices and Android phones; they don't affect Macs, iPhones, or Windows systems. Even when feasible, these attacks require physical proximity: an attacker on the other side of the planet can't exploit KRACK; only an attacker in the parking lot can.
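
Weaver's point about weak passphrases follows directly from how WPA2-PSK derives its master key: PBKDF2 over the passphrase and SSID, which anyone holding a captured handshake can grind through offline. A rough sketch follows; the SSID and passphrases are made up, and comparing against the PMK here stands in for the real check against the captured handshake's MIC.

```python
# WPA2-PSK derives the pairwise master key with PBKDF2 (4096 iterations of
# HMAC-SHA1, SSID as salt, per 802.11i), so candidate passphrases can be
# tested offline against a captured handshake. Simplified sketch.
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(),
                               4096, dklen=32)

ssid = "HomeNetwork"                       # hypothetical network name
target = wpa2_pmk("sunshine1", ssid)       # the victim's weak passphrase

# A dictionary attack: a few million guesses like these are cheap.
for guess in ["password", "letmein", "sunshine1", ":SNEIufeli7rc"]:
    if wpa2_pmk(guess, ssid) == target:
        print("cracked:", guess)
        break
```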

EDITED TO ADD (11/13): The official link to the paper blocks anonymous users. Here’s an alternate.

Posted on October 16, 2017 at 8:39 AM

NSA Brute-Force Keysearch Machine

The Intercept published a story about a dedicated NSA brute-force keysearch machine being built with the help of New York University and IBM. It’s based on a document that was accidentally shared on the Internet by NYU.

The article is frustratingly short on details:

The WindsorGreen documents are mostly inscrutable to anyone without a Ph.D. in a related field, but they make clear that the computer is the successor to WindsorBlue, a next generation of specialized IBM hardware that would excel at cracking encryption, whose known customers are the U.S. government and its partners.

Experts who reviewed the IBM documents said WindsorGreen possesses substantially greater computing power than WindsorBlue, making it particularly adept at compromising encryption and passwords. In an overview of WindsorGreen, the computer is described as a “redesign” centered around an improved version of its processor, known as an “application specific integrated circuit,” or ASIC, a type of chip built to do one task, like mining bitcoin, extremely well, as opposed to being relatively good at accomplishing the wide range of tasks that, say, a typical MacBook would handle. One of the upgrades was to switch the processor to smaller transistors, allowing more circuitry to be crammed into the same area, a change quantified by measuring the reduction in nanometers (nm) between certain chip features.

Unfortunately, the Intercept decided not to publish most of the document, so all of those people with “a Ph.D. in a related field” can’t read and understand WindsorGreen’s capabilities. What sorts of key lengths can the machine brute force? Is it optimized for symmetric or asymmetric cryptanalysis? Random brute force or dictionary attacks? We have no idea.
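
For a sense of scale, here's a back-of-the-envelope calculation; the search rate is an assumption for illustration, not anything from the documents.

```python
# Back-of-the-envelope arithmetic on exhaustive key search. The keys-per-
# second figure is assumed for illustration only.
keys_per_second = 1e15          # assumed: a very large dedicated ASIC farm
seconds_per_year = 3600 * 24 * 365

for bits in (56, 80, 128, 256):
    # Time to sweep the whole keyspace; the expected cost is half of this.
    years = (2 ** bits) / keys_per_second / seconds_per_year
    print(f"{bits:3d}-bit key: ~{years:.1e} years to exhaust the keyspace")
```

Even at that rate, a 56-bit keyspace falls in about a minute while 128-bit keys remain far out of reach, which is why the interesting questions are about what key lengths and what kinds of search the machine targets.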

Whatever the details, this is exactly the sort of thing the NSA should be spending their money on. Breaking the cryptography used by other nations is squarely in the NSA’s mission.

EDITED TO ADD (6/13): Some of the documents are online.

Posted on May 16, 2017 at 6:40 AM

Shadow Brokers Releases the Rest of Their NSA Hacking Tools

Last August, an unknown group called the Shadow Brokers released a bunch of NSA tools to the public. The common guesses were that the tools were discovered on an external staging server, and that the hack and release was the work of the Russians (back then, that wasn’t controversial). This was me:

Okay, so let’s think about the game theory here. Some group stole all of this data in 2013 and kept it secret for three years. Now they want the world to know it was stolen. Which governments might behave this way? The obvious list is short: China and Russia. Were I betting, I would bet Russia, and that it’s a signal to the Obama Administration: “Before you even think of sanctioning us for the DNC hack, know where we’ve been and what we can do to you.”

They published a second, encrypted, file. My speculation:

They claim to be auctioning off the rest of the data to the highest bidder. I think that’s PR nonsense. More likely, that second file is random nonsense, and this is all we’re going to get. It’s a lot, though.

I was wrong. On November 1, the Shadow Brokers released some more documents, and two days ago they released the key to that original encrypted archive:

EQGRP-Auction-Files is CrDj”(;Va.*NdlnzB9M?@K2)#>deB7mN

I don't think their statement is worth reading for content. I still believe Russia is more likely to be the perpetrator than China.

There’s not much yet on the contents of this dump of Top Secret NSA hacking tools, but it can’t be a fun weekend at Ft. Meade. I’m sure that by now they have enough information to know exactly where and when the data got stolen, and maybe even detailed information on who did it. My guess is that we’ll never see that information, though.

EDITED TO ADD (4/11): Seems like there’s not a lot here.

Posted on April 10, 2017 at 5:51 AM

Using Intel's SGX to Attack Itself

Researchers have demonstrated using Intel’s Software Guard Extensions to hide malware and steal cryptographic keys from inside SGX’s protected enclave:

Malware Guard Extension: Using SGX to Conceal Cache Attacks

Abstract: In modern computer systems, user processes are isolated from each other by the operating system and the hardware. Additionally, in a cloud scenario it is crucial that the hypervisor isolates tenants from other tenants that are co-located on the same physical machine. However, the hypervisor does not protect tenants against the cloud provider and thus the supplied operating system and hardware. Intel SGX provides a mechanism that addresses this scenario. It aims at protecting user-level software from attacks from other processes, the operating system, and even physical attackers.

In this paper, we demonstrate fine-grained software-based side-channel attacks from a malicious SGX enclave targeting co-located enclaves. Our attack is the first malware running on real SGX hardware, abusing SGX protection features to conceal itself. Furthermore, we demonstrate our attack both in a native environment and across multiple Docker containers. We perform a Prime+Probe cache side-channel attack on a co-located SGX enclave running an up-to-date RSA implementation that uses a constant-time multiplication primitive. The attack works although in SGX enclaves there are no timers, no large pages, no physical addresses, and no shared memory. In a semi-synchronous attack, we extract 96% of an RSA private key from a single trace. We extract the full RSA private key in an automated attack from 11 traces within 5 minutes.
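
Here's a toy model of the Prime+Probe primitive the paper builds on. It simulates the idea only; the real attack times its own memory accesses against actual cache hardware from inside an enclave, with none of the bookkeeping made visible here.

```python
# Toy simulation of Prime+Probe (a model of the idea, not real SGX or cache
# code): the attacker fills every cache set with its own lines, lets the
# victim run, then checks which of its lines were evicted. The victim's
# secret-dependent accesses show up as evictions in specific sets.
import random

NUM_SETS, WAYS = 16, 4
cache = [[] for _ in range(NUM_SETS)]   # each set: list of owner tags, FIFO

def access(owner: str, addr: int) -> None:
    s = cache[addr % NUM_SETS]
    if len(s) >= WAYS:
        s.pop(0)                         # evict the oldest line
    s.append(owner)

# Prime: the attacker fills every set with its own lines.
for set_idx in range(NUM_SETS):
    for _ in range(WAYS):
        access("attacker", set_idx)

# Victim runs; which sets it touches depends on its secret (e.g. RSA key bits).
secret_sets = sorted(random.sample(range(NUM_SETS), 3))
for set_idx in secret_sets:
    access("victim", set_idx)

# Probe: in real hardware the attacker re-times its own accesses, and a slow
# (evicted) line means the victim used that set. Here eviction is visible.
observed = [i for i, s in enumerate(cache) if s.count("attacker") < WAYS]
print("secret-dependent sets:", secret_sets)
print("observed via probe:   ", observed)   # identical
```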

News article.

Posted on March 16, 2017 at 5:54 AM

"Proof Mode" for your Smartphone Camera

ProofMode is an app for your smartphone that adds data to the photos you take to prove that they are real and unaltered:

On the technical front, what the app is doing is automatically generating an OpenPGP key for this installed instance of the app itself, and using that to automatically sign all photos and videos at time of capture. A sha256 hash is also generated, and combined with a snapshot of all available device sensor data, such as GPS location, wifi and mobile networks, altitude, device language, hardware type, and more. This is also signed, and stored with the media. All of this happens with no noticeable impact on battery life or performance, every time the user takes a photo or video.
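
Roughly, the scheme amounts to hashing the capture, bundling the hash with the sensor snapshot, and signing the bundle with a device-local key. A minimal sketch follows; Ed25519 stands in for the app's OpenPGP key, and the field names are invented rather than ProofMode's actual format. It requires the third-party "cryptography" package.

```python
# Sketch of the ProofMode idea: hash the photo, bundle the hash with sensor
# metadata, and sign the bundle with a key generated on the device, so later
# tampering is detectable. Illustrative only; not ProofMode's real format.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

signing_key = Ed25519PrivateKey.generate()    # created once, at app install

def notarize(photo_bytes: bytes, sensors: dict) -> dict:
    proof = {
        "sha256": hashlib.sha256(photo_bytes).hexdigest(),
        "sensors": sensors,                   # GPS, networks, altitude, ...
    }
    blob = json.dumps(proof, sort_keys=True).encode()
    return {"proof": proof, "signature": signing_key.sign(blob).hex()}

def verify(record: dict, photo_bytes: bytes) -> bool:
    proof = record["proof"]
    if hashlib.sha256(photo_bytes).hexdigest() != proof["sha256"]:
        return False                          # photo was altered
    blob = json.dumps(proof, sort_keys=True).encode()
    try:
        signing_key.public_key().verify(bytes.fromhex(record["signature"]), blob)
        return True
    except Exception:
        return False                          # metadata or signature altered

record = notarize(b"...jpeg bytes...", {"gps": [52.37, 4.90], "net": "wifi"})
print(verify(record, b"...jpeg bytes..."))    # True
print(verify(record, b"tampered bytes"))      # False
```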

This doesn’t solve all the problems with fake photos, but it’s a good step in the right direction.

Posted on March 1, 2017 at 6:02 AM

Apple's Cloud Key Vault

Ever since Ivan Krstić, Apple's Head of Security Engineering and Architecture, presented the company's key backup technology at Black Hat 2016, people have been pointing to it as evidence that the company can create a secure backdoor for law enforcement.

It’s not. Matthew Green and Steve Bellovin have both explained why not. And the same group of us that wrote the “Keys Under Doormats” paper on why backdoors are a bad idea have also explained why Apple’s technology does not enable it to build secure backdoors for law enforcement. Michael Specter did the bulk of the writing.

The problem with Tait's argument becomes clearer when you actually try to turn Apple's Cloud Key Vault into an exceptional access mechanism. In that case, Apple would have to replace the HSM with one that accepts an additional message from Apple or the FBI—or an agency from any of the 100+ countries where Apple sells iPhones—saying "OK, decrypt," as well as the user's password. In order to do this securely, these messages would have to be cryptographically signed with a second set of keys, which would then have to be used as often as law enforcement access is required. Any exceptional access scheme made from this system would have to have an additional set of keys to ensure authorized use of the law enforcement access credentials.

Managing access by a hundred-plus countries is impractical due to mutual mistrust, so Apple would be stuck with keeping a second signing key (or database of second signing keys) for signing these messages that must be accessed for each and every law enforcement agency. This puts us back at the situation where Apple needs to protect another repeatedly-used, high-value public key infrastructure: an equivalent situation to what has already resulted in the theft of Bitcoin wallets, RealTek’s code signing keys, and Certificate Authority failures, among many other disasters.

Repeated access of private keys drastically increases their probability of theft, loss, or inappropriate use. Apple’s Cloud Key Vault does not have any Apple-owned private key, and therefore does not indicate that a secure solution to this problem actually exists.

It is worth noting that the exceptional access schemes one can create from Apple's CKV (like the one outlined above) inherently entail the precise issues we warned about in our previous essay on the danger signs for recognizing flawed exceptional access systems. Additionally, the Risks of Key Escrow and Keys Under Doormats papers describe further technical and nontechnical issues with exceptional access schemes that must be addressed. Among the nontechnical hurdles would be the requirement, for example, that Apple run a large legal office to confirm that requests for access from the government of Uzbekistan actually involved a device that was located in that country, and that the request was consistent with both US law and Uzbek law.

My colleagues and I do not argue that the technical community doesn't know how to store high-value encryption keys—to the contrary, that's the whole point of an HSM. Rather, we assert that holding on to keys in a safe way such that any other party (i.e. law enforcement or Apple itself) can also access them repeatedly without high potential for catastrophic loss is impossible with today's technology, and that any scheme running into fundamental sociotechnical challenges such as jurisdiction must be evaluated honestly before any technical implementation is considered.

Posted on September 8, 2016 at 8:00 AM

Collision Attacks Against 64-Bit Block Ciphers

We’ve long known that 64 bits is too small for a block cipher these days. That’s why new block ciphers like AES have 128-bit, or larger, block sizes. The insecurity of the smaller block is nicely illustrated by a new attack called “Sweet32.” It exploits the ability to find block collisions in Internet protocols to decrypt some traffic, even though the attackers never learn the key.
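
The math behind it is just the birthday bound: with an n-bit block you expect ciphertext-block collisions after roughly 2^(n/2) blocks, and in CBC mode each collision leaks the XOR of two plaintext blocks. A small-scale simulation with a toy 24-bit "block" makes the scaling visible:

```python
# Birthday-bound illustration behind Sweet32. A toy 24-bit block keeps the
# run fast; for a real 64-bit cipher (3DES, Blowfish) the same math puts the
# first collision around 2**32 blocks, i.e. tens of gigabytes of traffic.
import secrets

def blocks_until_collision(block_bits: int) -> int:
    seen = set()
    count = 0
    while True:
        # stand-in for "the ciphertext block of the next plaintext block"
        block = secrets.randbits(block_bits)
        count += 1
        if block in seen:
            return count
        seen.add(block)

trials = [blocks_until_collision(24) for _ in range(20)]
print("average blocks to first collision:", sum(trials) / len(trials))
print("2**(24/2) =", 2 ** 12)   # the average lands within a small factor of this
```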

Paper here. Matthew Green has a nice explanation of the attack. And some news articles. Hacker News thread.

Posted on August 26, 2016 at 2:19 PM

