Entries Tagged "keys"


CONIKS

CONIKS is a new, easy-to-use, transparent key-management system:

CONIKS is a key management system for end users capable of integration in end-to-end secure communication services. The main idea is that users should not have to worry about managing encryption keys when they want to communicate securely, but they also should not have to trust their secure communication service providers to act in their interest.

Here’s the academic paper. And here’s a good discussion of the protocol and how it works. This is the problem they’re trying to solve:

One of the main challenges to building usable end-to-end encrypted communication tools is key management. Services such as Apple’s iMessage have made encrypted communication available to the masses with an excellent user experience because Apple manages a directory of public keys in a centralized server on behalf of their users. But this also means users have to trust that Apple’s key server won’t be compromised or compelled by hackers or nation-state actors to insert spurious keys to intercept and manipulate users’ encrypted messages. The alternative, and more secure, approach is to have the service provider delegate key management to the users so they aren’t vulnerable to a compromised centralized key server. This is how Google’s End-To-End works right now. But decentralized key management means users must “manually” verify each other’s keys to be sure that the keys they see for one another are valid, a process that several studies have shown to be cumbersome and error-prone for the vast majority of users. So users must make the choice between strong security and great usability.

And this is CONIKS:

In CONIKS, communication service providers (e.g. Google, Apple) run centralized key servers so that users don’t have to worry about encryption keys, but the main difference is CONIKS key servers store the public keys in a tamper-evident directory that is publicly auditable yet privacy-preserving. On a regular basis, CONIKS key servers publish directory summaries, which allow users in the system to verify they are seeing consistent information. To achieve this transparent key management, CONIKS uses various cryptographic mechanisms that leave undeniable evidence if any malicious outsider or insider were to tamper with any key in the directory and present different parties different views of the directory. These consistency checks can be automated and built into the communication apps to minimize user involvement.
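The core mechanism is easy to sketch: a Merkle hash tree over the (name, key) pairs, with the root hash serving as the published directory summary. Here is a minimal illustration in Python. It shows the general technique, not the actual CONIKS design, which adds signed tree roots, a privacy-preserving index, and summaries chained across epochs:

```python
import hashlib

def h(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()

def build_tree(leaves):
    """Merkle tree as a list of levels, leaves first, root last."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        if len(prev) % 2:                      # duplicate last node if odd
            prev = prev + [prev[-1]]
        levels.append([h(prev[i], prev[i + 1])
                       for i in range(0, len(prev), 2)])
    return levels

def inclusion_proof(levels, index):
    """Sibling hashes needed to recompute the root from one leaf."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        proof.append(level[index ^ 1])         # the sibling at this level
        index //= 2
    return proof

def verify(leaf, index, proof, root):
    node = leaf
    for sibling in proof:
        node = h(node, sibling) if index % 2 == 0 else h(sibling, node)
        index //= 2
    return node == root

# The directory; the root is what the server publishes (and would sign).
directory = [(b"alice", b"pkA"), (b"bob", b"pkB"), (b"carol", b"pkC")]
leaves = [h(name, key) for name, key in directory]
levels = build_tree(leaves)
root = levels[-1][0]

# Bob's client checks that the key the server shows is in the summary
# everyone else sees. Any substituted key changes the root.
proof = inclusion_proof(levels, 1)
assert verify(h(b"bob", b"pkB"), 1, proof, root)
assert not verify(h(b"bob", b"pk-evil"), 1, proof, root)
```

A server that wants to show Bob one key and everyone else another has to publish two different roots, and comparing roots is cheap. That is the tamper evidence the paper describes.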

Posted on April 6, 2016 at 10:27 AM

Companies Handing Source Code Over to Governments

ZDNet has an article on US government pressure on software companies to hand over copies of their source code. There are no details, because no one is talking on the record, but I also believe that this is happening.

When asked, a spokesperson for the Justice Dept. acknowledged that the department has demanded source code and private encryption keys before.

These orders would probably come from the FISA Court:

These orders are so highly classified that simply acknowledging an order’s existence is illegal; even a company’s chief executive or members of the board may not be told. Only those who are necessary to execute the order would know, and would be subject to the same secrecy provisions.

Given that [Apple software chief Craig] Federighi heads the division, it would be almost impossible to keep from him the existence of a FISA order demanding the company’s source code.

It would not be the first time that the US government has reportedly used proprietary code and technology from American companies to further its surveillance efforts.

Top secret NSA documents leaked by whistleblower Edward Snowden, reported in German magazine Der Spiegel in late-2013, have suggested some hardware and software makers were compelled to hand over source code to assist in government surveillance.

The NSA’s catalog of implants and software backdoors suggest that some companies, including Dell, Huawei, and Juniper — which was publicly linked to an “unauthorized” backdoor — had their servers and firewall products targeted and attacked through various exploits. Other exploits were able to infiltrate firmware of hard drives manufactured by Western Digital, Seagate, Maxtor, and Samsung.

Last year, antivirus maker and security firm Kaspersky found evidence that the NSA had obtained source code from a number of prominent hard drive makers — a claim the NSA denied — to quietly install software used to eavesdrop on the majority of the world’s computers.

“There is zero chance that someone could rewrite the [hard drive] operating system using public information,” said one of the researchers.

The problem, of course, is that any company forced by the US to hand over its source code would also be forbidden from talking about it.

It’s the sort of thing China does:

For most computing and networking equipment, the chart says, source code must be turned over to Chinese officials. But many foreign companies would be unwilling to disclose code because of concerns about intellectual property, security and, in some cases, United States export law.

The chart also calls for companies that want to sell to banks to set up research and development centers in China, obtain permits for workers servicing technology equipment and build “ports” to allow Chinese officials to manage and monitor data processed by their hardware.

The draft antiterrorism law pushes even further, calling for companies to store all data related to Chinese users on servers in China, create methods for monitoring content for terror threats and provide keys to encryption to public security authorities.

Slashdot thread.

Posted on March 18, 2016 at 11:27 AM

Practical TEMPEST Attack

Four researchers have demonstrated a TEMPEST attack against a laptop, recovering its keys by listening to its electromagnetic emanations. The cost of the attack hardware was about $3,000.

News article:

To test the hack, the researchers first sent the target a specific ciphertext — in other words, an encrypted message.

“During the decryption of the chosen ciphertext, we measure the EM leakage of the target laptop, focusing on a narrow frequency band,” the paper reads. The signal is then processed, and “a clean trace is produced which reveals information about the operands used in the elliptic curve cryptography,” it continues, which in turn “is used in order to reveal the secret key.”

The equipment used included an antenna, amplifiers, a software-defined radio, and a laptop. This process was being carried out through a 15cm-thick wall, reinforced with metal studs, according to the paper.

The researchers obtained the secret key after observing 66 decryption processes, each lasting around 0.05 seconds. “This yields a total measurement time of about 3.3 sec,” the paper reads. It’s important to note that when the researchers say that the secret key was obtained in “seconds,” that’s the total measurement time, and not necessarily how long it would take for the attack to actually be carried out. A real world attacker would still need to factor in other things, such as the target reliably decrypting the sent ciphertext, because observing that process is naturally required for the attack to be successful.
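The principle behind the key extraction is that the sequence of elliptic-curve operations performed during decryption depends on the secret key, and the EM trace distinguishes those operations. Here is a toy illustration in Python that skips all the hard parts: the analog signal processing is replaced by a perfectly clean trace of doublings and additions, and plain double-and-add stands in for the windowed multiplication routines real libraries use:

```python
# Toy: why a key-dependent operation sequence leaks the key. Assumes we
# can already tell point doublings ("D") and additions ("A") apart in
# the measured trace, which is what the paper's signal processing does.

def double_and_add_trace(secret: int) -> str:
    """Left-to-right double-and-add: one 'D' per bit, plus 'A' on 1-bits."""
    trace = []
    for bit in bin(secret)[2:]:
        trace.append("D")
        if bit == "1":
            trace.append("A")
    return "".join(trace)

def recover_key(trace: str) -> int:
    """Read the key bits straight back out of the operation sequence."""
    bits, i = "", 0
    while i < len(trace):
        assert trace[i] == "D"
        if trace[i + 1:i + 2] == "A":
            bits, i = bits + "1", i + 2
        else:
            bits, i = bits + "0", i + 1
    return int(bits, 2)

secret = 0xC0FFEE
assert recover_key(double_and_add_trace(secret)) == secret
```

The chosen ciphertext matters because it lets the attacker steer the decryption toward operand patterns that show up cleanly in the narrow frequency band being monitored.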

For half a century this has been a nation-state-level espionage technique. The cost is continually falling.

Posted on February 23, 2016 at 5:49 AM

Judge Demands that Apple Backdoor an iPhone

A judge has ordered that Apple bypass iPhone security in order for the FBI to attempt a brute-force password attack on an iPhone 5c used by one of the San Bernardino killers. Apple is refusing.

The order is pretty specific technically. This implies to me that what the FBI is asking for is technically possible, and even that Apple assisted in the wording so that the case could be about the legal issues and not the technical ones.
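Some rough arithmetic shows why brute force is all the FBI needs once the retry limits and auto-erase feature are out of the way: the passcode, not the cryptography, is the weak point. The 80 ms figure below is Apple’s published per-guess key-derivation time, used here as an assumption:

```python
# Back-of-the-envelope: brute-forcing a passcode once software retry
# limits and auto-erase are disabled. Assumes ~80 ms per guess, the
# hardware-bound key-derivation time Apple documents for iOS devices.
SECONDS_PER_GUESS = 0.08

for digits in (4, 6):
    keyspace = 10 ** digits
    hours = keyspace * SECONDS_PER_GUESS / 3600
    print(f"{digits}-digit passcode: {keyspace:,} guesses, "
          f"worst case {hours:.1f} hours")

# 4-digit passcode: 10,000 guesses, worst case 0.2 hours
# 6-digit passcode: 1,000,000 guesses, worst case 22.2 hours
```

That is why the order centers on disabling the guess-limiting features rather than on breaking the encryption itself.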

From Apple’s statement about its refusal:

Some would argue that building a backdoor for just one iPhone is a simple, clean-cut solution. But it ignores both the basics of digital security and the significance of what the government is demanding in this case.

In today’s digital world, the “key” to an encrypted system is a piece of information that unlocks the data, and it is only as secure as the protections around it. Once the information is known, or a way to bypass the code is revealed, the encryption can be defeated by anyone with that knowledge.

The government suggests this tool could only be used once, on one phone. But that’s simply not true. Once created, the technique could be used over and over again, on any number of devices. In the physical world, it would be the equivalent of a master key, capable of opening hundreds of millions of locks — from restaurants and banks to stores and homes. No reasonable person would find that acceptable.

The government is asking Apple to hack our own users and undermine decades of security advancements that protect our customers — including tens of millions of American citizens — from sophisticated hackers and cybercriminals. The same engineers who built strong encryption into the iPhone to protect our users would, ironically, be ordered to weaken those protections and make our users less safe.

We can find no precedent for an American company being forced to expose its customers to a greater risk of attack. For years, cryptologists and national security experts have been warning against weakening encryption. Doing so would hurt only the well-meaning and law-abiding citizens who rely on companies like Apple to protect their data. Criminals and bad actors will still encrypt, using tools that are readily available to them.

Congressman Ted Lieu comments.

Here’s an interesting essay about why Tim Cook and Apple are such champions for encryption and privacy.

Today I walked by a television showing CNN. The sound was off, but I saw an aerial scene which I presume was from San Bernardino, and the words “Apple privacy vs. national security.” If that’s the framing, we lose. I would have preferred to see “National security vs. FBI access.”

Slashdot thread.

EDITED TO ADD (2/18): Good analysis of Apple’s case. Interesting debate. Nicholas Weaver’s comments. And commentary from some other planet.

EDITED TO ADD (2/19): Ben Adida comments:

What’s probably happening is that the FBI is using this as a test case for the general principle that they should be able to compel tech companies to assist in police investigations. And that’s pretty smart, because it’s a pretty good test case: Apple obviously wants to help prevent terrorist attacks, so they’re left to argue the slippery slope argument in the face of an FBI investigation of a known terrorist. Well done, FBI, well done.

And Julian Sanchez’s comments. His conclusion:

These, then, are the high stakes of Apple’s resistance to the FBI’s order: not whether the federal government can read one dead terrorism suspect’s phone, but whether technology companies can be conscripted to undermine global trust in our computing devices. That’s a staggeringly high price to pay for any investigation.

A New York Times editorial.

Also, two questions: One, what do we know about Apple’s assistance in the past, and why is this one different? Two, has anyone speculated on how much this will cost Apple? The FBI is demanding that Apple give them free engineering work. What’s the value of that work?

EDITED TO ADD (2/20): Jonathan Zdziarski writes on the differences between the FBI compelling someone to provide a service versus build a tool, and why the latter will 1) be difficult and expensive, 2) get out into the wild, and 3) set a dangerous precedent.

This answers my first question, above:

For years, the government could come to Apple with a subpoena and a phone, and have the manufacturer provide a disk image of the device. This largely worked because Apple didn’t have to hack into their phones to do this. Up until iOS 8, the encryption Apple chose to use in their design was easily reversible when you had code execution on the phone (which Apple does). So all through iOS 7, Apple only needed to insert the key into the safe and provide the FBI with a copy of the data.

EFF wrote a good technical explainer on the case. My only complaint is with the last section. I have heard directly from Apple that this technique still works on current model phones using the current iOS version.

I am still stunned by how good a case the FBI chose for pushing this. They have all the sympathy in the media that they could hope for.

EDITED TO ADD (2/20): Tim Cook as privacy advocate. How the back door works on modern iPhones. Why the average American should care. The grugq on what this all means.

EDITED TO ADD (2/22): I wrote an op-ed for the Washington Post.

Posted on February 17, 2016 at 2:15 PM

UK Government Promoting Backdoor-Enabled Voice Encryption Protocol

The UK government is pushing something called the MIKEY-SAKKE protocol to secure voice communications. Basically, it’s an identity-based system that necessarily requires a trusted key-distribution center. So key escrow is inherently built in, and there’s no perfect forward secrecy. The only reasonable explanation for designing a protocol with these properties is third-party eavesdropping.

Steven Murdoch has explained the details. The upshot:

The design of MIKEY-SAKKE is motivated by the desire to allow undetectable and unauditable mass surveillance, which may be a requirement in exceptional scenarios such as within government departments processing classified information. However, in the vast majority of cases the properties that MIKEY-SAKKE offers are actively harmful for security. It creates a vulnerable single point of failure, which would require huge effort, skill and cost to secure — requiring resource beyond the capability of most companies. Better options for voice encryption exist today, though they are not perfect either. In particular, more work is needed on providing scalable and usable protection against man-in-the-middle attacks, and protection of metadata for contact discovery and calls. More broadly, designers of protocols and systems need to appreciate the ethical consequences of their actions in terms of the political and power structures which naturally follow from their use. MIKEY-SAKKE is the latest example to raise questions over the policy of many governments, including the UK, to put intelligence agencies in charge of protecting companies and individuals from spying, given the conflict of interest it creates.
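The escrow point is easy to see in miniature. Below is a toy model of an identity-based key server in Python. This is not the pairing-based SAKKE construction, just the trust structure it shares: every user’s key is a deterministic function of the center’s master secret, so whoever holds (or is compelled to reveal) that secret can re-derive any key, for any user, after the fact:

```python
import hashlib
import hmac

# Toy identity-based key distribution center. NOT the real SAKKE math;
# the master secret and derivation here are stand-ins for illustration.
MASTER_SECRET = b"held-by-the-key-distribution-center"

def user_key(identity: str) -> bytes:
    return hmac.new(MASTER_SECRET, identity.encode(), hashlib.sha256).digest()

alice = user_key("alice@example.gov")

# Years later, the master secret still re-derives the same key, so every
# recorded call can be decrypted retroactively: escrow is built in, and
# there is no forward secrecy, because no per-call secret ever existed
# outside this derivation.
assert user_key("alice@example.gov") == alice
```

Contrast this with a Diffie-Hellman-based design, where each call’s keys are ephemeral and no central party can reconstruct them later.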

And GCHQ previously rejected a more secure standard, MIKEY-IBAKE, because it didn’t allow undetectable spying.

Both the NSA and GCHQ repeatedly choose surveillance over security. We need to reject that decision.

Posted on January 22, 2016 at 2:23 PM

Breaking Diffie-Hellman with Massive Precomputation (Again)

The Internet is abuzz with this blog post and paper, speculating that the NSA is breaking the Diffie-Hellman key-exchange protocol in the wild through massive precomputation.

I wrote about this at length in May when this paper was first made public. (The reason it’s news again is that the paper was just presented at the ACM Computer and Communications Security conference.)

What’s newly being talked about is how this works inside the NSA surveillance architecture. Nicholas Weaver explains:

To decrypt IPsec, a large number of wiretaps monitor for IKE (Internet Key Exchange) handshakes, the protocol that sets up a new IPsec encrypted connection. The handshakes are forwarded to a decryption oracle, a black box system that performs the magic. While this happens, the wiretaps also record all traffic in the associated IPsec connections.

After a period of time, this oracle either returns the private keys or says “I give up”. If the oracle provides the keys, the wiretap decrypts all the stored traffic and continues to decrypt the connection going forward.

[…]

This would also better match the security implications: just the fact that the NSA can decrypt a particular flow is a critical secret. Forwarding a small number of potentially-crackable flows to a central point better matches what is needed to maintain such secrecy.

Thus by performing the decryption in bulk at the wiretaps, complete with hardware acceleration to keep up with the number of encrypted streams, this architecture directly implies that the NSA can break a massive amount of IPsec traffic, a degree of success which implies a cryptanalysis breakthrough.

That last paragraph is Weaver explaining how this attack matches the NSA rhetoric about capabilities in some of their secret documents.
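Here is that pipeline as a Python schematic. Every function body is a placeholder and all the names are mine; the point is the division of labor Weaver describes, many cheap wiretaps and one expensive central oracle:

```python
# Schematic of the architecture Weaver describes; bodies are placeholders.

def forward_to_oracle(handshake, callback):
    ...  # ship the short IKE handshake to the central black box

def decrypt(keys, packet):
    ...  # ordinary IPsec decryption once the session keys are known

class Wiretap:
    """One collection point. It records traffic; it never does cryptanalysis."""

    def __init__(self):
        self.keys = None
        self.recorded = []

    def on_ike_handshake(self, handshake):
        forward_to_oracle(handshake, callback=self.on_oracle_reply)

    def on_packet(self, packet):
        if self.keys is None:
            self.recorded.append(packet)   # buffer while the oracle works
        else:
            decrypt(self.keys, packet)     # decrypt the connection onward

    def on_oracle_reply(self, keys):
        if keys is None:                   # "I give up"
            self.recorded.clear()
            return
        self.keys = keys
        for packet in self.recorded:       # decrypt everything buffered
            decrypt(keys, packet)
```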

Now that this is out, I’m sure there are a lot of really upset people inside the NSA.

EDITED TO ADD (11/15): How to protect yourself.

Posted on October 16, 2015 at 6:19 AM

TSA Master Keys

Someone recently noticed a Washington Post story on the TSA that originally contained a detailed photograph of all the TSA master keys. It’s now blurred out of the Washington Post story, but the image is still floating around the Internet. The whole thing neatly illustrates one of the main problems with backdoors, whether in cryptographic systems or physical systems: they’re fragile.

Nicholas Weaver wrote:

TSA “Travel Sentry” luggage locks contain a disclosed backdoor which is similar in spirit to what Director Comey desires for encrypted phones. In theory, only the Transportation Security Administration or other screeners should be able to open a TSA lock using one of their master keys. All others, notably baggage handlers and hotel staff, should be unable to surreptitiously open these locks.

Unfortunately for everyone, a TSA agent and the Washington Post revealed the secret. All it takes to duplicate a physical key is a photograph, since it is the pattern of the teeth, not the key itself, that tells you how to open the lock. So by simply including a pretty picture of the complete spread of TSA keys in the Washington Post’s paean to the TSA, the Washington Post enabled anyone to make their own TSA keys.

So the TSA backdoor has failed: we must assume any adversary can open any TSA “lock”. If you want to at least know your luggage has been tampered with, forget the TSA lock and use a zip-tie or tamper-evident seal instead, or attach a real lock and force the TSA to use their bolt cutters.

It’s the third photo on this page, reproduced here. There’s also this set of photos. Get your copy now, in case they disappear.
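Decoding a key from a photograph is mechanical: measure each cut depth, snap it to the nearest value in the manufacturer’s depth specification, and you have the bitting code to grind into a blank. A toy decoder in Python; the depth table is entirely made up, not any real Travel Sentry specification:

```python
# Hypothetical depth spec: bitting digit -> cut depth in millimeters.
# Real specs differ; the point is that a key is just a short code.
DEPTH_SPEC_MM = {0: 8.0, 1: 7.4, 2: 6.8, 3: 6.2, 4: 5.6, 5: 5.0}

def decode_bitting(measured_depths_mm):
    """Map depths scaled off a photo to the nearest standard cut."""
    return [min(DEPTH_SPEC_MM, key=lambda c: abs(DEPTH_SPEC_MM[c] - d))
            for d in measured_depths_mm]

# Made-up measurements, as if scaled from the published photograph:
print(decode_bitting([7.9, 6.3, 5.1, 6.9, 7.3]))   # -> [0, 3, 5, 2, 1]
```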

Reddit thread. BoingBoing post. Engadget article.

EDITED TO ADD (9/10): Someone has published a set of CAD files so you can make your own master keys.

Posted on September 8, 2015 at 6:02 AM

The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.
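The shape of the downgrade, and of the compatibility problem the fix creates, fits in a few lines. A toy negotiation in Python; all the actual cryptography is elided, and the suite names are just labels:

```python
# Toy version-downgrade. The man in the middle edits the client's offer;
# neither endpoint notices because (pre-fix) the negotiation isn't
# authenticated until the attacker can already forge the handshake.

def server_choose(offered, supported):
    for suite in offered:              # server takes the first match
        if suite in supported:
            return suite
    raise ConnectionError("no common ciphersuite")

client_offer = ["DHE_2048", "DHE_EXPORT_512"]
server_suites = {"DHE_2048", "DHE_EXPORT_512"}

# Honest connection: the strong suite wins.
assert server_choose(client_offer, server_suites) == "DHE_2048"

# MITM strips the strong option; both ends still "agree", at 512 bits,
# which the attacker can break given per-prime precomputation.
mitm_offer = [s for s in client_offer if "EXPORT" in s]
assert server_choose(mitm_offer, server_suites) == "DHE_EXPORT_512"

# The fix: clients refuse weak groups outright, which is exactly why
# patched browsers stopped reaching servers that offer nothing better.
def patched_client_accepts(suite):
    return "EXPORT" not in suite

assert not patched_client_accepts("DHE_EXPORT_512")
```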

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve — the most efficient algorithm for breaking a Diffie-Hellman connection — is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
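That amortization is easy to demonstrate at toy scale. In the sketch below, baby-step/giant-step stands in for the number field sieve; it shares the relevant property that the expensive table depends only on the shared prime and generator, while each individual connection then falls to a cheap lookup loop:

```python
# Toy of the precomputation trade-off. The real attack uses the number
# field sieve; baby-step/giant-step has the same shape: one big table
# per (p, g), then a fast per-target step.
from math import isqrt

p, g = 1000003, 2                    # toy "standardized" prime and generator

m = isqrt(p - 1) + 1
baby = {pow(g, j, p): j for j in range(m)}   # EXPENSIVE, but done once
giant = pow(g, p - 1 - m, p)                 # g^(-m) mod p

def dlog(y: int) -> int:
    """Cheap per-connection step: solve g^x = y (mod p) with the table."""
    for i in range(m):
        if y in baby:
            return i * m + baby[y]
        y = y * giant % p
    raise ValueError("no discrete log found")

# Every connection that reuses (p, g) falls to the same precomputation:
for secret in (12345, 67890, 314159):
    assert dlog(pow(g, secret, p)) == secret
```

At real scale, the table for a single 1024-bit prime is a nation-state-sized computation, but it only has to be done once per prime, which is exactly why everyone standardizing on the same few primes is so damaging.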

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget”:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation — just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applied to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know which other national intelligence agencies have independently discovered and used this attack.

The good news is that, now that we know reusing prime numbers is a bad idea, we can stop doing it.

EDITED TO ADD: The DH precomputation easily lends itself to custom ASIC design, and is something that pipelines easily. Using Bitcoin mining hardware as a rough comparison, this means a couple of orders of magnitude speedup.

EDITED TO ADD (5/23): Good analysis of the cryptography.

EDITED TO ADD (5/24): Good explanation by Matthew Green.

Posted on May 21, 2015 at 6:30 AM

Subconscious Keys

I missed this paper when it was first published in 2012:

“Neuroscience Meets Cryptography: Designing Crypto Primitives Secure Against Rubber Hose Attacks”

Abstract: Cryptographic systems often rely on the secrecy of cryptographic keys given to users. Many schemes, however, cannot resist coercion attacks where the user is forcibly asked by an attacker to reveal the key. These attacks, known as rubber hose cryptanalysis, are often the easiest way to defeat cryptography. We present a defense against coercion attacks using the concept of implicit learning from cognitive psychology. Implicit learning refers to learning of patterns without any conscious knowledge of the learned pattern. We use a carefully crafted computer game to plant a secret password in the participant’s brain without the participant having any conscious knowledge of the trained password. While the planted secret can be used for authentication, the participant cannot be coerced into revealing it since he or she has no conscious knowledge of it. We performed a number of user studies using Amazon’s Mechanical Turk to verify that participants can successfully re-authenticate over time and that they are unable to reconstruct or even recognize short fragments of the planted secret.
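The authentication side of the scheme is easy to caricature in code. In the sketch below, the implicit training is simulated by a hit-rate advantage on the planted sequence; every rate, length, and threshold is invented for illustration and is not from the paper:

```python
import random

random.seed(1)   # deterministic demo

def play(sequence, hit_rate, rounds=5):
    """Simulate playing a sequence several times; return fraction hit."""
    trials = rounds * len(sequence)
    return sum(random.random() < hit_rate for _ in range(trials)) / trials

def authenticate(planted, foils, is_real_user):
    # Implicit learning shows up only on the sequence the user trained on;
    # an impostor performs at baseline on both.
    on_planted = play(planted, 0.80 if is_real_user else 0.60)
    on_foils = sum(play(f, 0.60) for f in foils) / len(foils)
    return on_planted - on_foils > 0.10      # measurable advantage = pass

planted = [random.randrange(4) for _ in range(30)]    # the planted secret
foils = [[random.randrange(4) for _ in range(30)] for _ in range(2)]

print(authenticate(planted, foils, is_real_user=True))    # expected: True
print(authenticate(planted, foils, is_real_user=False))   # expected: False
```

The user never has to state the sequence; the system only measures that the user performs better on it, which is exactly what cannot be extracted by coercion.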

Posted on January 28, 2015 at 6:39 AM

