Entries Tagged "web"


Heartbleed

Heartbleed is a catastrophic bug in OpenSSL:

“The Heartbleed bug allows anyone on the Internet to read the memory of the systems protected by the vulnerable versions of the OpenSSL software. This compromises the secret keys used to identify the service providers and to encrypt the traffic, the names and passwords of the users and the actual content. This allows attackers to eavesdrop communications, steal data directly from the services and users and to impersonate services and users.”

Basically, an attacker can grab 64K of memory from a server. The attack leaves no trace, and can be done multiple times to grab a different random 64K of memory. This means that anything in memory—SSL private keys, user keys, anything—is vulnerable. And you have to assume that it is all compromised. All of it.
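The mechanics are simple: a TLS heartbeat request carries a payload plus a length field, and vulnerable OpenSSL versions trusted the attacker-supplied length when building the echo reply. Here is a minimal sketch of the message layout per RFC 6520; the helper function and values are illustrative, not OpenSSL's actual code:

```python
import struct

def build_heartbeat_request(payload: bytes, claimed_length: int) -> bytes:
    # RFC 6520 heartbeat message layout:
    #   type (1 byte): 1 = heartbeat_request
    #   payload_length (2 bytes, big-endian): the sender's *claimed* length
    #   payload: the bytes actually sent
    return struct.pack("!BH", 1, claimed_length) + payload

# A well-formed request declares its true payload length.
benign = build_heartbeat_request(b"bird", 4)

# A Heartbleed probe claims far more than it sends (up to 64K); a
# vulnerable server copies claimed_length bytes from its memory into
# the reply, leaking whatever happens to be adjacent.
malicious = build_heartbeat_request(b"bird", 0xFFFF)

# Both messages are the same size on the wire; only the length field
# lies, which is part of why the attack leaves no trace.
assert len(benign) == len(malicious) == 7
```

The fix in patched OpenSSL is equally simple: discard any heartbeat message whose claimed length exceeds the bytes actually received.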

“Catastrophic” is the right word. On the scale of 1 to 10, this is an 11.

Half a million sites are vulnerable, including my own. Test your vulnerability here.

The bug has been patched. After you patch your systems, you have to generate a new public/private key pair, get your SSL certificate reissued, revoke the old one, and then change every password that could potentially be affected.
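One coarse way to check whether a site rotated its keys is to look at when its certificate was issued: a notBefore date from before the disclosure suggests the key pair was never replaced. A minimal sketch using Python's standard-library ssl helper, with fabricated certificate dates for illustration:

```python
import ssl
from datetime import datetime, timezone

# Heartbleed was publicly disclosed on April 7, 2014.
DISCLOSURE = datetime(2014, 4, 8, tzinfo=timezone.utc)

def reissued_after_disclosure(cert: dict) -> bool:
    """Check whether a certificate (in the dict format returned by
    ssl.SSLSocket.getpeercert()) was issued after the disclosure.
    A pre-disclosure notBefore suggests the keys were never rotated."""
    issued = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notBefore"]), tz=timezone.utc
    )
    return issued >= DISCLOSURE

# Illustrative metadata only, not real certificates:
stale = {"notBefore": "Mar  1 00:00:00 2014 GMT"}
fresh = {"notBefore": "Apr 10 12:00:00 2014 GMT"}

assert not reissued_after_disclosure(stale)
assert reissued_after_disclosure(fresh)
```

Note this is only a heuristic: a post-disclosure certificate can still wrap the old, potentially compromised key if the site got it reissued without generating a new pair.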

At this point, the probability is close to one that every target has had its private keys extracted by multiple intelligence agencies. The real question is whether or not someone deliberately inserted this bug into OpenSSL, and has had two years of unfettered access to everything. My guess is accident, but I have no proof.

This article is worth reading. Hacker News thread is filled with commentary. XKCD cartoon.

EDITED TO ADD (4/9): Has anyone looked at all the low-margin non-upgradable embedded systems that use OpenSSL? An upgrade path that involves the trash, a visit to Best Buy, and a credit card isn’t going to be fun for anyone.

EDITED TO ADD (4/10): I’m hearing that the CAs are completely clogged, trying to reissue so many new certificates. And I’m not sure we have anything close to the infrastructure necessary to revoke half a million certificates.

Possible evidence that Heartbleed was exploited last year.

EDITED TO ADD (4/10): I wonder if there is going to be some backlash from the mainstream press and the public. If nothing really bad happens—if this turns out to be something like the Y2K bug—then we are going to face criticisms of crying wolf.

EDITED TO ADD (4/11): Brian Krebs and Ed Felten on how to protect yourself from Heartbleed.

Posted on April 9, 2014 at 5:03 AM

Changes to the Blog

I have made a few changes to my blog that I’d like to talk about.

The first is the various buttons associated with each post: a Facebook Like button, a Retweet button, and so on. These buttons are ubiquitous on the Internet now. We publishers like them because they make it easier for our readers to share our content. I especially like them because I can obsessively watch the totals and see how my writings are spreading out across the Internet.

The problem is that these buttons use images, scripts, and/or iframes hosted on the social media site’s own servers. This is partly for webmasters’ convenience; it makes adoption as easy as copy-and-pasting a few lines of code. But it also gives Facebook, Twitter, Google, and so on a way to track you—even if you don’t click on the button. Remember that: if you see sharing buttons on a webpage, that page is almost certainly being tracked by social media sites or a service like AddThis. Or both.

What I’m using instead is SocialSharePrivacy, which was created by the German website Heise Online and adapted by Mathias Panzenböck. The page shows a grayed-out mockup of a sharing button. You click once to activate it, then a second time to share the page. If you don’t click, nothing is loaded from the social media site, so it can’t track your visit. If you don’t care about the privacy issues, you can click on the Settings icon and enable the sharing buttons permanently.

It’s not a perfect solution—two clicks instead of one—but it’s much more privacy-friendly.

(If you’re thinking of doing something similar on your own site, another option to consider is shareNice. ShareNice can be copied to your own webserver; but if you prefer, you can use their hosted version, which makes it as easy to install as AddThis. The difference is that shareNice doesn’t set cookies or even log IP addresses—though you’ll have to trust them on the logging part. The problem is that it can’t display the aggregate totals.)

The second change is the search function. I changed the site’s search engine from Google to DuckDuckGo, which doesn’t even store IP addresses. Again, you have to trust them on that, but I’m inclined to.

The third change is to the feed. Starting now, if you click the feed icon in the right-hand column of my blog, you’ll be subscribing to a feed that’s hosted locally on schneier.com, instead of one produced by Google’s Feedburner service. Again, this reduces the amount of data Google collects about you. Over the next couple of days, I will transition existing subscribers off of Feedburner, but since some of you are subscribed directly to a Feedburner URL, I recommend resubscribing to the new link to be sure. And if by chance you have trouble with the new feed, this legacy link will always point to the Feedburner version.

Fighting against the massive amount of surveillance data collected about us as we surf the Internet is hard, and possibly even fruitless. But I think it’s important to try.

Posted on March 22, 2013 at 3:46 PM

Security Theater on the Wells Fargo Website

Click on the “Establishing secure connection” link at the top of this page. It’s a Wells Fargo page that displays a progress bar with a bunch of security phrases—“Establishing Secure Connection,” “Sending credentials,” “Building Secure Environment,” and so on—and closes after a few seconds. It’s complete security theater; it doesn’t actually do anything but make account holders feel better.

Posted on March 13, 2013 at 1:30 PM

Man-in-the-Middle Attacks Against Browser Encryption

Last week, a story broke about how Nokia mounts man-in-the-middle attacks against secure browser sessions.

The Finnish phone giant has since admitted that it decrypts secure data that passes through HTTPS connections—including social networking accounts, online banking, email and other secure sessions—in order to compress the data and speed up the loading of Web pages.

The basic problem is that https sessions are opaque as they travel through the network. That’s the point—it’s more secure—but it also means that the network can’t do anything about them. They can’t be compressed, cached, or otherwise optimized. They can’t be rendered remotely. They can’t be inspected for security vulnerabilities. All the network can do is transmit the data back and forth.

But in our cloud-centric world, it makes more and more sense to process web data in the cloud. Nokia isn’t alone here. Opera’s mobile browser performs all sorts of optimizations on web pages before they are sent over the air to your smart phone. Amazon does the same thing with browsing on the Kindle. MobileScope, a really good smart-phone security application, performs the same sort of man-in-the-middle attack against https sessions to detect and prevent data leakage. I think Umbrella does as well. Nokia’s mistake was that they did it without telling anyone. With appropriate consent, it’s perfectly reasonable for most people and organizations to give both performance and security companies the ability to decrypt and re-encrypt https sessions—at least most of the time.

This is an area where security concerns are butting up against other issues. Nokia’s answer, which is basically “trust us, we’re not looking at your data,” is going to increasingly be the norm.

Posted on January 17, 2013 at 9:50 AM

Man-in-the-Middle Bank Fraud Attack

This sort of attack will become more common as banks require two-factor authentication:

Tatanga checks the user account details including the number of accounts, supported currency, balance/limit details. It then chooses the account from which it could steal the highest amount.

Next, it initiates a transfer.

At this point Tatanga uses a Web Inject to trick the user into believing that the bank is performing a chipTAN test. The fake instructions request that the user generate a TAN for the purpose of this “test” and enter the TAN.

Note that the attack relies on tricking the user, which isn’t very hard.

Posted on September 14, 2012 at 11:23 AM

Cryptocat

I’m late writing about this one. Cryptocat is a web-based encrypted chat application. After Wired published a pretty fluffy profile on the program and its author, security researcher Chris Soghoian wrote an essay criticizing the unskeptical coverage. Ryan Singel, the editor (not the writer) of the Wired piece, responded by defending the original article and attacking Soghoian.

At this point, I would have considered writing a long essay explaining what’s wrong with the whole concept behind Cryptocat, and echoing my complaints about the dangers of uncritically accepting the security claims of people and companies that write security software, but Patrick Ball did a great job:

CryptoCat is one of a whole class of applications that rely on what’s called “host-based security”. The most famous tool in this group is Hushmail, an encrypted e-mail service that takes the same approach. Unfortunately, these tools are subject to a well-known attack. I’ll detail it below, but the short version is if you use one of these applications, your security depends entirely on the security of the host. This means that in practice, CryptoCat is no more secure than Yahoo chat, and Hushmail is no more secure than Gmail. More generally, your security in a host-based encryption system is no better than having no crypto at all.

Sometimes it’s nice to come in late.

EDITED TO ADD (8/14): As a result of this, CryptoCat is moving to a browser plug-in model.

Posted on August 14, 2012 at 6:00 AM

Lessons in Trust from Web Hoaxes

Interesting discussion of trust in this article on web hoaxes.

Kelly’s students, like all good con artists, built their stories out of small, compelling details to give them a veneer of veracity. Ultimately, though, they aimed to succeed less by assembling convincing stories than by exploiting the trust of their marks, inducing them to lower their guard. Most of us assess arguments, at least initially, by assessing those who make them. Kelly’s students built blogs with strong first-person voices, and hit back hard at skeptics. Those inclined to doubt the stories were forced to doubt their authors. They inserted articles into Wikipedia, trading on the credibility of that site. And they aimed at very specific communities: the “beer lovers of Baltimore” and Reddit.

That was where things went awry. If the beer lovers of Baltimore form a cohesive community, the class failed to reach it. And although most communities treat their members with gentle regard, Reddit prides itself on winnowing the wheat from the chaff. It relies on the collective judgment of its members, who click on arrows next to contributions, elevating insightful or interesting content, and demoting less worthy contributions. Even Mills says he was impressed by the way in which redditors “marshaled their collective bits of expert knowledge to arrive at a conclusion that was largely correct.” It’s tough to con Reddit.

[…]

If there’s a simple lesson in all of this, it’s that hoaxes tend to thrive in communities which exhibit high levels of trust. But on the Internet, where identities are malleable and uncertain, we all might be well advised to err on the side of skepticism.

Posted on May 23, 2012 at 12:32 PM
