Entries Tagged "https"

Oblivious DNS-over-HTTPS

This new protocol, called Oblivious DNS-over-HTTPS (ODoH), hides the websites you visit from your ISP.

Here’s how it works: ODoH wraps a layer of encryption around the DNS query and passes it through a proxy server, which acts as a go-between for the internet user and the website they want to visit. Because the DNS query is encrypted, the proxy can’t see what’s inside, but acts as a shield to prevent the DNS resolver from seeing who sent the query to begin with.
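To make the split of knowledge concrete, here is a minimal sketch of the ODoH flow. Real ODoH uses HPKE (RFC 9180) for the encryption and carries messages over HTTP through the proxy; in this sketch a simple X25519 + HKDF + AES-GCM construction stands in for HPKE, and the query format is a placeholder:

```python
# Sketch of the ODoH flow. Real ODoH encrypts with HPKE (RFC 9180) and
# carries messages as application/oblivious-dns-message over HTTP; here a
# simple X25519 + HKDF + AES-GCM construction stands in for HPKE.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

def seal(recipient_pub, plaintext):
    """Encrypt to the target resolver's public key (HPKE stand-in)."""
    eph = X25519PrivateKey.generate()
    key = HKDF(hashes.SHA256(), 32, None, b"odoh-sketch").derive(
        eph.exchange(recipient_pub))
    nonce = os.urandom(12)
    eph_bytes = eph.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    return eph_bytes, nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def unseal(priv, eph_bytes, blob):
    key = HKDF(hashes.SHA256(), 32, None, b"odoh-sketch").derive(
        priv.exchange(X25519PublicKey.from_public_bytes(eph_bytes)))
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

# The target resolver publishes a public key; the client seals its query to it.
target_key = X25519PrivateKey.generate()
query = b"example.com. IN A"  # stand-in for a wire-format DNS query
sealed = seal(target_key.public_key(), query)

# The proxy sees the client's IP address but only an opaque blob.
# The target sees the query but only the proxy's IP address.
assert unseal(target_key, *sealed) == query
```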

IETF memo.

The paper:

Abstract: The Domain Name System (DNS) is the foundation of a human-usable Internet, responding to client queries for hostnames with corresponding IP addresses and records. Traditional DNS is also unencrypted, and leaks user information to network operators. Recent efforts to secure DNS using DNS over TLS (DoT) and DNS over HTTPS (DoH) have been gaining traction, ostensibly protecting traffic and hiding content from onlookers. However, one of the criticisms of DoT and DoH is brought to bear by the small number of large-scale deployments (e.g., Comcast, Google, Cloudflare): DNS resolvers can associate query contents with client identities in the form of IP addresses. Oblivious DNS over HTTPS (ODoH) safeguards against this problem. In this paper we ask what it would take to make ODoH practical? We describe ODoH, a practical DNS protocol aimed at resolving this issue by both protecting the client’s content and identity. We implement and deploy the protocol, and perform measurements to show that ODoH has comparable performance to protocols like DoH and DoT which are gaining widespread adoption, while improving client privacy, making ODoH a practical privacy enhancing replacement for the usage of DNS.

Slashdot thread.

Posted on December 8, 2020 at 3:02 PM

Firefox Enables DNS over HTTPS

This is good news:

Whenever you visit a website—even if it’s HTTPS enabled—the DNS query that converts the web address into an IP address that computers can read is usually unencrypted. DNS-over-HTTPS, or DoH, encrypts the request so that it can’t be intercepted or hijacked in order to send a user to a malicious site.

[…]

But the move is not without controversy. Last year, an internet industry group branded Mozilla an “internet villain” for pressing ahead with the security feature. The trade group claimed it would make it harder to spot terrorist materials and child abuse imagery. But even some in the security community are split, amid warnings that it could make incident response and malware detection more difficult.

The move to enable DoH by default will no doubt face resistance, but it’s not a technology that browser makers have shied away from. Firefox became the first browser to implement DoH, with others, like Chrome, Edge, and Opera, quickly following suit.

I think DoH is a great idea, and long overdue.
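For a concrete sense of what a DoH query looks like on the wire, here is a minimal sketch against Cloudflare’s public resolver, using its JSON API for readability (browsers use the binary application/dns-message encoding, but the principle is identical):

```python
# A DNS query carried over HTTPS: to the network it looks like any other
# TLS connection to cloudflare-dns.com, not like DNS.
import requests

resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```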

Slashdot thread. Tech details here. And here’s a good summary of the criticisms.

Posted on February 25, 2020 at 9:15 AM

E-Mailing Private HTTPS Keys

I don’t know what to make of this story:

The email was sent on Tuesday by the CEO of Trustico, a UK-based reseller of TLS certificates issued by the browser-trusted certificate authorities Comodo and, until recently, Symantec. It was sent to Jeremy Rowley, an executive vice president at DigiCert, a certificate authority that acquired Symantec’s certificate issuance business after Symantec was caught flouting binding industry rules, prompting Google to distrust Symantec certificates in its Chrome browser. In communications earlier this month, Trustico notified DigiCert that 50,000 Symantec-issued certificates Trustico had resold should be mass revoked because of security concerns.

When Rowley asked for proof the certificates were compromised, the Trustico CEO emailed the private keys of 23,000 certificates, according to an account posted to a Mozilla security policy forum. The report produced a collective gasp among many security practitioners who said it demonstrated a shockingly cavalier treatment of the digital certificates that form one of the most basic foundations of website security.

Generally speaking, private keys for TLS certificates should never be archived by resellers, and, even in the rare cases where such storage is permissible, they should be tightly safeguarded. A CEO being able to attach the keys for 23,000 certificates to an email raises troubling concerns that those types of best practices weren’t followed.

I am croggled by the multiple layers of insecurity here.
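For contrast, here is a sketch of the flow that best practice prescribes, using Python’s cryptography library: the private key is generated and stored by the site owner, and only the certificate signing request (CSR), which contains just the public key, is ever sent to the reseller or CA. The domain and file names are placeholders:

```python
# The key pair is generated locally and the private half is written only
# to local disk; the CSR is the only artifact that needs to reach the CA.
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, "example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# This PEM blob is all a CA needs to issue a certificate...
print(csr.public_bytes(serialization.Encoding.PEM).decode())

# ...while the private key never leaves the machine it was generated on.
with open("example.com.key", "wb") as f:
    f.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
```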

BoingBoing post.

Posted on March 13, 2018 at 6:31 AM

Hacking Password-Protected Computers via the USB Port

PoisonTap is an impressive hacking tool that can compromise computers via the USB port, even when they are password-protected. What’s interesting is the chain of vulnerabilities the tool exploits. No individual vulnerability is a problem, but together they create a big problem.

Kamkar’s trick works by chaining together a long, complex series of seemingly innocuous software security oversights that only together add up to a full-blown threat. When PoisonTap—a tiny $5 Raspberry Pi microcomputer loaded with Kamkar’s code and attached to a USB adapter—is plugged into a computer’s USB port, it starts impersonating a new ethernet connection. Even if the computer is already connected to Wifi, PoisonTap is programmed to tell the victim’s computer that any IP address accessed through that connection is actually on the computer’s local network rather than the internet, fooling the machine into prioritizing its network connection to PoisonTap over that of the Wifi network.

With that interception point established, the malicious USB device waits for any request from the user’s browser for new web content; if you leave your browser open when you walk away from your machine, chances are there’s at least one tab in your browser that’s still periodically loading new bits of HTTP data like ads or news updates. When PoisonTap sees that request, it spoofs a response and feeds your browser its own payload: a page that contains a collection of iframes—a technique for invisibly loading content from one website inside another—that consist of carefully crafted versions of virtually every popular website address on the internet. (Kamkar pulled his list from web-popularity ranking service Alexa’s top one million sites.)

As it loads that long list of site addresses, PoisonTap tricks your browser into sharing any cookies it’s stored from visiting them, and writes all of that cookie data to a text file on the USB stick. Sites use cookies to check if a visitor has recently logged into the page, allowing visitors to avoid doing so repeatedly. So that list of cookies allows any hacker who walks away with the PoisonTap and its stored text file to access the user’s accounts on those sites.
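The route-priority trick rests on ordinary longest-prefix matching. As an illustration (not Kamkar’s actual code), here is the classic way to capture all traffic: advertise two /1 routes that together cover the entire IPv4 space, each of which is more specific than the real /0 default route and therefore wins every lookup:

```python
# Why claiming "the whole internet is local" wins: routing tables pick the
# matching route with the longest prefix, and a /1 always beats the /0
# default route that points at the Wi-Fi gateway.
import ipaddress

wifi_default = ipaddress.ip_network("0.0.0.0/0")         # via the Wi-Fi gateway
poisontap_routes = [ipaddress.ip_network("0.0.0.0/1"),   # via the USB "LAN"
                    ipaddress.ip_network("128.0.0.0/1")] # via the USB "LAN"

def chosen_route(dest):
    candidates = [wifi_default] + poisontap_routes
    matches = [net for net in candidates if ipaddress.ip_address(dest) in net]
    return max(matches, key=lambda net: net.prefixlen)   # longest prefix wins

print(chosen_route("93.184.216.34"))  # 0.0.0.0/1 -> goes to PoisonTap
print(chosen_route("203.0.113.7"))    # 128.0.0.0/1 -> goes to PoisonTap
```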

There’s more. Here’s another article with more details. Also note that HTTPS is a protection.

Yesterday, I testified about this at a joint hearing of the Subcommittee on Communications and Technology, and the Subcommittee on Commerce, Manufacturing, and Trade—both part of the Committee on Energy and Commerce of the US House of Representatives. Here’s the video; my testimony starts around 1:10:10.

The topic was the Dyn attacks and the Internet of Things. I talked about different market failures that will affect security on the Internet of Things. One of them was this problem of emergent vulnerabilities. I worry that as we continue to connect things to the Internet, we’re going to be seeing a lot of these sorts of attacks: chains of tiny vulnerabilities that combine into a massive security risk. It’ll be hard to defend against these types of attacks. If no one product or process is to blame, no one has responsibility to fix the problem. So I gave a mostly Republican audience a pro-regulation message. They were surprisingly polite and receptive.

Posted on November 17, 2016 at 8:22 AM

Breaking Diffie-Hellman with Massive Precomputation (Again)

The Internet is abuzz with this blog post and paper, speculating that the NSA is breaking the Diffie-Hellman key-exchange protocol in the wild through massive precomputation.

I wrote about this at length in May when this paper was first made public. (The reason it’s news again is that the paper was just presented at the ACM Computer and Communications Security conference.)

What’s newly being talked about is how this works inside the NSA surveillance architecture. Nicholas Weaver explains:

To decrypt IPsec, a large number of wiretaps monitor for IKE (Internet Key Exchange) handshakes, the protocol that sets up a new IPsec encrypted connection. The handshakes are forwarded to a decryption oracle, a black box system that performs the magic. While this happens, the wiretaps also record all traffic in the associated IPsec connections.

After a period of time, this oracle either returns the private keys or says “i give up”. If the oracle provides the keys, the wiretap decrypts all the stored traffic and continues to decrypt the connection going forward.

[…]

This would also better match the security implications: just the fact that the NSA can decrypt a particular flow is a critical secret. Forwarding a small number of potentially-crackable flows to a central point better matches what is needed to maintain such secrecy.

Thus by performing the decryption in bulk at the wiretaps, complete with hardware acceleration to keep up with the number of encrypted streams, this architecture directly implies that the NSA can break a massive amount of IPsec traffic, a degree of success which implies a cryptanalysis breakthrough.

That last paragraph is Weaver explaining how this attack matches the NSA rhetoric about capabilities in some of their secret documents.
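In code form, the architecture Weaver describes might look something like the following sketch. The class names and the callback-based flow are my own illustration, not anything taken from the documents:

```python
# Schematic of the split Weaver describes: handshakes go to a central
# decryption oracle, bulk ciphertext is buffered at the wiretap, and
# decryption happens locally once (and if) the oracle returns a key.
from collections import defaultdict

def decrypt(packet, key):
    pass  # placeholder for actual IPsec ESP decryption

class Wiretap:
    def __init__(self, oracle):
        self.oracle = oracle
        self.buffered = defaultdict(list)  # flow_id -> recorded ciphertext

    def on_ike_handshake(self, flow_id, handshake):
        # Only the small handshake is forwarded to the central black box.
        self.oracle.submit(flow_id, handshake)

    def on_ipsec_packet(self, flow_id, packet):
        # Meanwhile, record all traffic on the associated connection.
        self.buffered[flow_id].append(packet)

    def on_oracle_reply(self, flow_id, key):
        if key is None:
            self.buffered.pop(flow_id, None)  # the oracle "gives up"
            return
        for packet in self.buffered.pop(flow_id):
            decrypt(packet, key)  # decrypt the stored traffic in bulk...
        # ...and keep decrypting the flow in real time from here on.
```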

Now that this is out, I’m sure there are a lot of really upset people inside the NSA.

EDITED TO ADD (11/15): How to protect yourself.

Posted on October 16, 2015 at 6:19 AM

The Logjam (and Another) Vulnerability against Diffie-Hellman Key Exchange

Logjam is a new attack against the Diffie-Hellman key-exchange protocol used in TLS. Basically:

The Logjam attack allows a man-in-the-middle attacker to downgrade vulnerable TLS connections to 512-bit export-grade cryptography. This allows the attacker to read and modify any data passed over the connection. The attack is reminiscent of the FREAK attack, but is due to a flaw in the TLS protocol rather than an implementation vulnerability, and attacks a Diffie-Hellman key exchange rather than an RSA key exchange. The attack affects any server that supports DHE_EXPORT ciphers, and affects all modern web browsers. 8.4% of the Top 1 Million domains were initially vulnerable.

Here’s the academic paper.
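One way to test whether a server is in the affected population is to attempt a handshake while offering only a DHE_EXPORT suite. A rough sketch follows; it assumes an OpenSSL build that still includes export-grade ciphers, which modern builds compile out (in which case set_ciphers() itself fails):

```python
import socket
import ssl

def accepts_dhe_export(host, port=443):
    """Return True if the server completes a 512-bit DHE_EXPORT handshake."""
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    ctx.minimum_version = ssl.TLSVersion.TLSv1  # export suites predate TLS 1.2
    ctx.set_ciphers("EXP-EDH-RSA-DES-CBC-SHA")  # raises on modern OpenSSL
    try:
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True
    except (ssl.SSLError, OSError):
        return False
```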

One of the problems with patching the vulnerability is that it breaks things:

On the plus side, the vulnerability has largely been patched thanks to consultation with tech companies like Google, and updates are available now or coming soon for Chrome, Firefox and other browsers. The bad news is that the fix rendered many sites unreachable, including the main website at the University of Michigan, which is home to many of the researchers that found the security hole.

This is a common problem with version downgrade attacks; patching them makes you incompatible with anyone who hasn’t patched. And it’s the vulnerability the media is focusing on.

Much more interesting is the other vulnerability that the researchers found:

Millions of HTTPS, SSH, and VPN servers all use the same prime numbers for Diffie-Hellman key exchange. Practitioners believed this was safe as long as new key exchange messages were generated for every connection. However, the first step in the number field sieve—the most efficient algorithm for breaking a Diffie-Hellman connection—is dependent only on this prime. After this first step, an attacker can quickly break individual connections.
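A toy version of the protocol makes the economics clear: the secret exponents are fresh per connection, but the group is fixed by the prime, and the expensive stages of the number field sieve depend only on that prime, so one precomputation amortizes over every connection that uses it. The numbers below are toy-sized; the vulnerable groups are 512 to 1024 bits:

```python
import random

p = 65537  # the shared prime (toy-sized; real groups are 512 bits and up)
g = 3      # a generator mod p

def dh_keypair():
    x = random.randrange(2, p - 1)  # fresh secret exponent per connection
    return x, pow(g, x, p)

a, A = dh_keypair()  # client
b, B = dh_keypair()  # server
assert pow(B, a, p) == pow(A, b, p)  # both sides derive the same secret

# The number field sieve spends almost all of its effort on steps that
# depend only on p. With that table precomputed, recovering any single
# connection's secret exponent from its public value is comparatively
# fast -- which is why millions of servers sharing one prime is the flaw.
```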

The researchers believe the NSA has been using this attack:

We carried out this computation against the most common 512-bit prime used for TLS and demonstrate that the Logjam attack can be used to downgrade connections to 80% of TLS servers supporting DHE_EXPORT. We further estimate that an academic team can break a 768-bit prime and that a nation-state can break a 1024-bit prime. Breaking the single, most common 1024-bit prime used by web servers would allow passive eavesdropping on connections to 18% of the Top 1 Million HTTPS domains. A second prime would allow passive decryption of connections to 66% of VPN servers and 26% of SSH servers. A close reading of published NSA leaks shows that the agency’s attacks on VPNs are consistent with having achieved such a break.

Remember James Bamford’s 2012 comment about the NSA’s cryptanalytic capabilities:

According to another top official also involved with the program, the NSA made an enormous breakthrough several years ago in its ability to cryptanalyze, or break, unfathomably complex encryption systems employed by not only governments around the world but also many average computer users in the US. The upshot, according to this official: “Everybody’s a target; everybody with communication is a target.”

[…]

The breakthrough was enormous, says the former official, and soon afterward the agency pulled the shade down tight on the project, even within the intelligence community and Congress. “Only the chairman and vice chairman and the two staff directors of each intelligence committee were told about it,” he says. The reason? “They were thinking that this computing breakthrough was going to give them the ability to crack current public encryption.”

And remember Director of National Intelligence James Clapper’s introduction to the 2013 “Black Budget”:

Also, we are investing in groundbreaking cryptanalytic capabilities to defeat adversarial cryptography and exploit internet traffic.

It’s a reasonable guess that this is what both Bamford’s source and Clapper are talking about. It’s an attack that requires a lot of precomputation—just the sort of thing a national intelligence agency would go for.

But that requirement also speaks to its limitations. The NSA isn’t going to put this capability at collection points like Room 641A at AT&T’s San Francisco office: the precomputation table is too big, and the sensitivity of the capability is too high. More likely, an analyst identifies a target through some other means, and then looks for data by that target in databases like XKEYSCORE. Then he sends whatever ciphertext he finds to the Cryptanalysis and Exploitation Services (CES) group, which decrypts it if it can using this and other techniques.

Ross Anderson wrote about this earlier this month, almost certainly quoting Snowden:

As for crypto capabilities, a lot of stuff is decrypted automatically on ingest (e.g. using a “stolen cert”, presumably a private key obtained through hacking). Else the analyst sends the ciphertext to CES and they either decrypt it or say they can’t.

The analysts are instructed not to think about how this all works. This quote also applies to NSA employees:

Strict guidelines were laid down at the GCHQ complex in Cheltenham, Gloucestershire, on how to discuss projects relating to decryption. Analysts were instructed: “Do not ask about or speculate on sources or methods underpinning Bullrun.”

I remember the same instructions in documents I saw about the NSA’s CES.

Again, the NSA has put surveillance ahead of security. It never bothered to tell us that many of the “secure” encryption systems we were using were not secure. And we don’t know which other national intelligence agencies independently discovered and used this attack.

The good news: now that we know reusing prime numbers is a bad idea, we can stop doing it.

EDITED TO ADD: The DH precomputation easily lends itself to custom ASIC design, and is something that pipelines easily. Using Bitcoin mining hardware as a rough comparison, this means a couple of orders of magnitude speedup.

EDITED TO ADD (5/23): Good analysis of the cryptography.

EDITED TO ADD (5/24): Good explanation by Matthew Green.

Posted on May 21, 2015 at 6:30 AM

Economic Failures of HTTPS Encryption

Interesting paper: “Security Collapse of the HTTPS Market.” From the conclusion:

Recent breaches at CAs have exposed several systemic vulnerabilities and market failures inherent in the current HTTPS authentication model: the security of the entire ecosystem suffers if any of the hundreds of CAs is compromised (weakest link); browsers are unable to revoke trust in major CAs (“too big to fail”); CAs manage to conceal security incidents (information asymmetry); and ultimately customers and end users bear the liability and damages of security incidents (negative externalities).

Understanding the market and value chain for HTTPS is essential to address these systemic vulnerabilities. The market is highly concentrated, with very large price differences among suppliers and limited price competition. Paradoxically, the current vulnerabilities benefit rather than hurt the dominant CAs, because among others, they are too big to fail.

Posted on November 28, 2014 at 6:26 AM

Man-in-the-Middle Attacks Against Browser Encryption

Last week, a story broke about how Nokia mounts man-in-the-middle attacks against secure browser sessions.

The Finnish phone giant has since admitted that it decrypts secure data that passes through HTTPS connections—including social networking accounts, online banking, email and other secure sessions—in order to compress the data and speed up the loading of Web pages.

The basic problem is that https sessions are opaque as they travel through the network. That’s the point—it’s more secure—but it also means that the network can’t do anything about them. They can’t be compressed, cached, or otherwise optimized. They can’t be rendered remotely. They can’t be inspected for security vulnerabilities. All the network can do is transmit the data back and forth.

But in our cloud-centric world, it makes more and more sense to process web data in the cloud. Nokia isn’t alone here. Opera’s mobile browser performs all sorts of optimizations on web pages before they are sent over the air to your smartphone. Amazon does the same thing with browsing on the Kindle. MobileScope, a really good smartphone security application, performs the same sort of man-in-the-middle attack against https sessions to detect and prevent data leakage. I think Umbrella does as well. Nokia’s mistake was that they did it without telling anyone. With appropriate consent, it’s perfectly reasonable for most people and organizations to give both performance and security companies the ability to decrypt and re-encrypt https sessions—at least most of the time.
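Structurally, all of these services are the same machine in the middle. Here is a skeletal sketch of such a TLS-terminating proxy: single connection, no threading, no error handling; proxy.crt and proxy.key are placeholder files, and the client must already trust that certificate, which is precisely the consent step at issue:

```python
# Terminates the client's TLS with the proxy's own certificate, then opens
# a separate, properly validated TLS session to the real site and relays.
import socket
import ssl

LISTEN = ("0.0.0.0", 8443)
UPSTREAM = ("example.com", 443)

server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
server_ctx.load_cert_chain("proxy.crt", "proxy.key")  # client must trust this

client_ctx = ssl.create_default_context()  # normal validation toward the site

with socket.create_server(LISTEN) as srv:
    conn, _addr = srv.accept()
    with server_ctx.wrap_socket(conn, server_side=True) as downstream, \
         socket.create_connection(UPSTREAM) as raw, \
         client_ctx.wrap_socket(raw, server_hostname=UPSTREAM[0]) as upstream:
        request = downstream.recv(65536)
        # The proxy sees plaintext here: this is where compression, caching,
        # or security scanning happens -- and why consent matters so much.
        upstream.sendall(request)
        downstream.sendall(upstream.recv(65536))
```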

This is an area where security concerns are butting up against other issues. Nokia’s answer, which is basically “trust us, we’re not looking at your data,” is going to increasingly be the norm.

Posted on January 17, 2013 at 9:50 AM

New SSL Vulnerability

It’s hard for me to get too worked up about this vulnerability:

Many popular applications, HTTP(S) and WebSocket transport libraries, and SOAP and REST Web-services middleware use SSL/TLS libraries incorrectly, breaking or disabling certificate validation. Their SSL and TLS connections are not authenticated, thus they—and any software using them—are completely insecure against a man-in-the-middle attacker.

Great research, and—yes—the vulnerability should be fixed, but it doesn’t feel like a crisis issue.
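In Python terms, the class of mistake the paper documents looks like the snippet below (the paper surveyed many languages and libraries; this is an illustration, not its code). Both halves “work,” which is exactly why the bug ships:

```python
import ssl
import requests

# Broken: certificate validation disabled. The request still succeeds,
# which is why this mistake survives testing and ships.
requests.get("https://example.com", verify=False)  # accepts any certificate
insecure_ctx = ssl._create_unverified_context()    # ditto for raw sockets

# Correct: the library defaults already validate the chain and hostname.
requests.get("https://example.com")                # verify=True by default
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.VerifyMode.CERT_REQUIRED
assert ctx.check_hostname
```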

Another article.

Posted on November 7, 2012 at 1:39 PM
