Forging SSL Certificates

We already knew that MD5 is a broken hash function. Now researchers have successfully forged MD5-signed certificates:

Molnar, Appelbaum, and Sotirov joined forces with the European MD5 research team in mid-2008, along with Swiss cryptographer Dag Arne Osvik. They realized that the collision-construction technique could be used to simultaneously generate one normal SSL certificate and one forged CA certificate that could be used to sign and vouch for any other. They purchased a signature for the legitimate certificate from an established company that was still using MD5 for signing, and then applied that signature to the forged certificate. Because the legitimate and forged certificates had the same MD5 value, the legitimate signature also marked the forged one as acceptable.
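The reason the signature transfers is that certificate signatures cover only a hash of the certificate body, not the body itself. A minimal sketch of that "hash-then-sign" property, using a toy HMAC stand-in for the CA's real RSA signature (the key and certificate strings are illustrative, not real X.509):

```python
import hashlib
import hmac

SECRET_CA_KEY = b"toy-ca-key"  # hypothetical stand-in for the CA's private key

def sign(message: bytes) -> str:
    # The "signature" covers only the MD5 digest of the message,
    # mirroring how X.509 signatures cover a hash of the certificate body.
    digest = hashlib.md5(message).digest()
    return hmac.new(SECRET_CA_KEY, digest, hashlib.sha256).hexdigest()

def verify(message: bytes, signature: str) -> bool:
    return hmac.compare_digest(sign(message), signature)

# The CA signs a legitimate certificate body...
legit = b"CN=innocuous.example, CA:FALSE"
sig = sign(legit)
assert verify(legit, sig)

# ...but any message with the same MD5 digest verifies under the same
# signature. The researchers' chosen-prefix collision gave them exactly
# that: a rogue CA certificate whose MD5 digest matched the legitimate one.
forged = b"CN=rogue, CA:TRUE"  # in the real attack, md5(forged) == md5(legit)
if hashlib.md5(forged).digest() == hashlib.md5(legit).digest():
    assert verify(forged, sig)  # the legit signature vouches for the forgery
```

Since the toy `forged` bytes here don't actually collide, the final branch is skipped; the point is that nothing in verification distinguishes two messages once their digests match.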

Lots and lots more articles, and the research.

This isn’t a big deal. The research is great; it’s good work, and I always like to see cryptanalytic attacks used to break real-world security systems. Making that jump is often much harder than cryptographers think.

But SSL doesn’t provide much in the way of security, so breaking it doesn’t harm security very much. Pretty much no one ever verifies SSL certificates, so there’s not much attack value in being able to forge them. And even more generally, the major risks to data on the Internet are at the endpoints—Trojans and rootkits on users’ computers, attacks against databases and servers, etc.—and not in the network.

I’m not losing a whole lot of sleep because of these attacks. But—come on, people—no one should be using MD5 anymore.

EDITED TO ADD (12/31): While it is true that browsers do some SSL certificate verification, when they find an invalid certificate they display a warning dialog box which everyone—me included—ignores. There are simply too many valid sites out there with bad certificates for that warning to mean anything. This is far too true:

If you’re like me and every other user on the planet, you don’t give a shit when an SSL certificate doesn’t validate. Unfortunately, commons-httpclient was written by some pedantic fucknozzles who have never tried to fetch real-world webpages.

Posted on December 31, 2008 at 1:39 PM • 66 Comments


Chris Finch December 31, 2008 2:02 PM

One of the other “major risks to data on the Internet” is the process of uploading your entire life and photo albums to myspace, facebook and bebo, and then trying to get as many friends as possible.

Anurag December 31, 2008 2:54 PM

Sadly, it seems like this is the sort of thing it takes for people to stop using broken hash functions.

I first read about this in Ars Technica, which talked about how the attacks before were ‘theoretical’ but have now yielded a ‘practical’ attack, so we should think about shifting away from MD5.

Glen December 31, 2008 3:00 PM

Surely you mean “no one should be using MD5 anymore [for cryptographic purposes]”. It’s a perfectly fine hashing function; just don’t count on it for security.

Mark R December 31, 2008 3:14 PM

I’m a bit surprised by the statement, “Pretty much no one ever verifies SSL certificates.” Most modern browsers at least verify the signatures, the validity of the chain up to the root CA, and the fact that the server name in the address bar matches the name on the cert. It’s something like 5 clicks now in Firefox to go to a site where any of these checks fail.

Most MITM attacks rely on the user ignoring this kind of certificate warning. Being able to pull off that kind of attack without a peep from the browser seems significant to me. Or am I missing something?

Steve J December 31, 2008 3:27 PM

“But SSL doesn’t provide much in the way of security,”

This statement caught me off guard. Is there a suggestion on what we should be using in place of it? Or is the problem really one of educating users?

Andrew December 31, 2008 3:32 PM

By “Pretty much no one ever verifies SSL certificates,” do you mean that CAs don’t verify who they’re issuing certificates to (as seen here), or that software doesn’t verify their peer’s certificate? Web browsers certainly do the latter (Firefox 3 quite tyrannically so).

But, the comparison to rootkits and trojans puts this in good perspective, so point taken.

Jimbo Jones December 31, 2008 3:48 PM

I think that “Pretty much no one ever verifies SSL certificates” might mean that users just click “go there anyway” when they get a security warning dialog. People ignore warnings and just click through anything.

“Are you sure you want to reformat your disk?” Yeah, whatever. Just take me to the website I wanted. “This EULA gives company X power of attorney over your financial affairs.” Sure, just install the program for me.

aikimark December 31, 2008 3:51 PM

This is one of many real dangers to Internet users. I usually pay attention to such warnings, and feel VERY vulnerable to this spoof. How are we (IT professionals) able to advise our clients?

I wouldn’t know where to begin to look for the hash method used for an SSL certificate. How are the great unwashed supposed to cope?

While the threat of malware (rootkits, trojans, etc.) is more important right now, I think this spoof might be riskier down the road. It undermines the credibility of the secure transactions that protect payments and guard against identity theft.

silence December 31, 2008 4:51 PM

aikimark: In Firefox:

Tools -> Page Info -> Security -> View Certificate -> Details -> Certificate Fields -> Certificate Signature Algorithm
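For the command-line inclined, the same field can be read with OpenSSL. A sketch that generates a throwaway self-signed certificate and prints its signature algorithm (the paths and the subject name are illustrative):

```shell
# Generate a throwaway self-signed certificate...
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=test.example" \
    -keyout /tmp/toy-key.pem -out /tmp/toy-cert.pem -days 1 2>/dev/null

# ...then read its signature algorithm (modern OpenSSL defaults to SHA-256,
# so this prints something like "Signature Algorithm: sha256WithRSAEncryption")
openssl x509 -in /tmp/toy-cert.pem -noout -text | grep -m1 'Signature Algorithm'

# For a live server, pipe s_client into x509 instead, e.g.:
#   echo | openssl s_client -connect example.com:443 2>/dev/null \
#     | openssl x509 -noout -text | grep -m1 'Signature Algorithm'
```

The same `x509 -noout -text` dump also shows the serial number and the issuer chain, which is everything needed to spot an MD5-signed cert.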

Abe December 31, 2008 4:54 PM

Mozilla Firefox made it quite a bit more difficult to go past a certificate error. It is unknown how big a difference this has made in security, but at least one layperson finally ended up filing a bug at Mozilla about continuously getting errors, only to be informed that she was the subject of a man-in-the-middle attack. Supposedly that person has learned their lesson now.

Randall December 31, 2008 5:32 PM

Whether it’s a big deal depends on how you look at it. It’s true that the most practical way to break bank security is by fooling users, not hacking SSL. (A downside of fooling users is that it doesn’t involve 200 PlayStations, unless that’s what you buy with your illicit gains.)

But SSL’s security does put an upper bound on how secure the Web can be even when browser authors make the right user-interface decisions and users know what to trust.

It’s also a reminder that algorithms matter, even if OS security and informed users generally matter more. Perhaps it will speed adoption of SHA-2 and eventually -3, though I’m not holding my breath there.

The potential for user error, the earlier problems with bad “Microsoft” certs, and this crack all make me wonder if we should be using a different model for Web crypto — maybe something more like SSH that only aspires to warn users when the bank they’re submitting data to this week isn’t the same one they used last week, and does it with a clear UI.

PeregrineBF December 31, 2008 5:46 PM

Most users don’t care, won’t notice, or don’t know about the lock icon or whatever your browser uses. To these users it’s not a change at all. To anyone who does try to verify the cert it IS a bad thing, since you can’t do so anymore (well, without manually verifying the whole cert chain.)

B-Con December 31, 2008 6:05 PM

I usually click through invalid SSL certificates too. But my rule is that I only do so if I wouldn’t mind not using SSL whatsoever. Most invalid SSL certificates that I encounter are for web pages that I don’t even care to have SSL for at all.

If Amazon, eBay, or Wells Fargo give me an invalid certificate error, I won’t be transacting anything with them, but if I’m browsing and some inconsequential page gives me a certificate error, I won’t care.

Sam Thomas December 31, 2008 7:18 PM

I wouldn’t go so far as to say it’s not a big deal. Certainly, it’s no big deal to anyone that’s already realized the folly of using MD5 and moved on. We’re also very unlikely to see any zero-day exploits of this (it’s pretty time-consuming to find a collision). The problem is obviously not even widespread.

What we could see is this kind of falsification being used against any decent-sized company still using MD5-signed certs, coupled with a DNS hijack and a professional phishing site. It would be a nasty form of phishing, particularly because on the face of it the cert will check out. I don’t think very many people will conduct a transaction with a company whose web server throws blatant certificate warnings (though I could be wrong; I’m a computer security amateur).

I know that I don’t bother to check whether the IP has changed for a company I do business with on the Internet, but I do check their certs. It’s a long shot, but if there’s a lot of money to be made, it could attract those capable of making the meager investment in the couple of hundred PS3s necessary to pull it off. Even a short-term disruption for a big enough company could result in some pretty serious fraud involving thousands of people.

It’s the stuff of movies, sure. Now all we have to do is wait and see if life imitates art.

The big message here for companies is that it’s time to get off of MD5-signed certs if they’ve been hanging on to them for any reason (mistrust of the government and the NSA is one I’ve seen in organizations in the past).

Gordon Messmer December 31, 2008 10:13 PM

I don’t think I could disagree with you (for once) more strongly, Bruce.

This attack isn’t about a browser popping up a warning that users ignore. This attack would allow a MITM attack where no warning is given, leaving even the careful and vigilant significantly more open to attack.

In other words, I couldn’t care less whether the majority of people pay attention to SSL validation. When I go to my online bank access, I pay attention to it. Whether or not it’s significant to the public, it is significant to me. Now I can’t trust that the site I’m viewing is really my bank’s site.

I guess I’ll have to remove the signers identified from my root certificates list and hope none of those remaining will validate a cert with an MD5 hash. I haven’t decided yet whether or not that’s sufficient. It seems to me that the most prudent course may be to drop all of the root certificates, expend extra effort to validate the few certs that I rely on to authenticate the remote server, and accept only those permanently. I already make a habit of never accepting certs permanently if they pop up warnings. This won’t be much of a change for me in practice.

Gordon Messmer December 31, 2008 10:41 PM


Unless I completely misunderstand this research, the problem isn’t limited to companies that use MD5-hashed certs. The problem is that your browser (and mine) trusts certificates signed by RSA Data Security and Thawte (among others). Those signers have, until now, been willing to sign certificates with MD5 hashes. Everyone who has received such a certificate, and everyone who receives one in the future, has a tool that they can potentially use to create new certificates which appear to be signed by an intermediate of the trusted root certificate.

It sounds like both Microsoft and Mozilla are working to ensure that the companies who own the root certificates trusted by IE and Firefox will refuse to issue certificates signed with MD5. That’s an important first step, but the potential for attack doesn’t actually go away until all of the existing certificates expire, possibly several years from now. Until then, we’re all at risk of trusting a certificate that was forged.

Personally, I’d like to see MS and Mozilla get a CRL from each of the roots that includes all of the active certs they’ve signed with MD5 hashes so that users can, at our option, avoid trusting them.

Owen Crow December 31, 2008 10:42 PM

[Gordon commented while I was typing so some of my comments overlap.]

I think many are missing the who’s and the what’s of this exploit. The research team abused RapidSSL’s CA cert, which is signed with MD5, AND a vulnerability in their SSL cert generation process (sequential serial numbers) to produce a false CA cert that passes all modern browser tests. I.e., once the rogue CA cert is generated, it can be used to generate an SSL cert for any domain that would be accepted without alert on all modern browsers by default. It has nothing to do with whether you or your company use MD5 certs, and it doesn’t matter where the bank site got their SSL cert. This is an attack on some of the root CA authorities. Your browser trusts over 100 by default.

Yes, you can un-trust RapidSSL in your browser, but then all those validly-signed certs from RapidSSL out there will get alerts. And do you want to un-trust all the other offending CAs such as Thawte?

I agree this is not a significant change to the average user who will happily click cert warnings all day. It does mean that I, as an Infosec professional, have to worry about accessing my own SSL-protected resources whenever I traverse 3rd party networks now.

OK, maybe I always worried anyway since there could be 50 other attacks on SSL certificates that governments know about. I tend to worry less about governments because I can’t change them and my defense against them is not in the OSI stack.

Lawrence D'Oliveiro January 1, 2009 1:56 AM

Bruce Schneier quotes Ted Dziuba! He’s done some entertaining articles for The Register, too.

“Fucknozzles” — did he just make that up?

Pat Cahalan January 1, 2009 3:08 AM

@ Steve J

But SSL doesn’t provide much in the way of security,
so breaking it doesn’t harm security very much.

There’s a couple of other threads on this blog about SSL.

I’ve said before that SSL is really only good for encrypting a session, not for providing authoritative identification. Some people have argued otherwise, but (in my current “I’ve had a bit to drink on New Year’s” state) I’ll go ahead and claim this supports my earlier statements 🙂

There is limited security value in trusting a certificate authority in today’s CA market. The fact that some CAs still use MD5 underscores that point.

Jeroen January 1, 2009 4:03 AM

While it is true that browsers do some
SSL certificate verification, when they
find an invalid certificate they display a
warning dialog box which everyone — me
included — ignores.

It’s not quite that simple anymore in the latest versions of Firefox. When accessing a site using an untrusted cert, Firefox shows a “connection failed” page, with a dinky little “add exception” link at the bottom.

Click that link and you’ll get another warning that you really shouldn’t be doing this. Click the button to do it anyway, and you enter a dialog (with more warnings) where you first have to “retrieve” the cert and then explicitly accept it as an exception.

This 4-step process must be initiated proactively, and is easy for someone who knows what they are doing, but will hopefully deter a sizable portion of Joe Average users from entering sites without properly signed certs.

There was quite a heated debate on Slashdot about how this unfairly forces smalltime websites to purchase an expensive cert from the money-grubbing CAs, yadda yadda, etc.

Muffin January 1, 2009 5:12 AM

SSL at least guards against passive listening attacks, even if it doesn’t protect against MITM: some jokester on my LAN won’t be able to see what I do on the web if I use https.

That being said, I actually do care about invalid SSL certs etc., depending on the site I’m accessing. If it’s my bank, for example, you can be sure I’d give them a call if something was amiss. And if I don’t care about an invalid (e.g. self-signed) cert, I at least KNOW that it doesn’t provide any assurance about the identity of the website I’m using.

Particular Random Guy January 1, 2009 11:10 AM

The author of the final quote recommends a library which is being secured by a…

drum roll

…MD5 checksum.

Anonymous January 1, 2009 11:36 AM

Re: Pretty much no one ever verifies SSL certificates, so there’s not much attack value in being able to forge them.

Maybe not for Web pages, but for WEB SERVICES (SOAP and its cousins) there are quite a lot of programmers who like to verify them.

This already causes a fair amount of hassle whenever a site certificate expires, and must be replaced by a new one for the same domain, but with a different signature. The problem is with making simultaneous out-of-hours updates across several organizations.

So that’s something to look forward to next week when I’m officially back at work. Will clients be grateful to have a security hole closed? What do you think!

Peter Pearson January 1, 2009 12:07 PM

I’m baffled, Bruce. How do you keep your mom from getting phished? Don’t you tell her to watch for the SSL padlock while banking, and tolerate no certificate nonsense? What does everybody else do? Tell her to drive down to the bank? That won’t work for mine.

Yossi January 1, 2009 12:09 PM

That bugzilla-finds-MITM story is very interesting – I’ve never heard of it before. Can you post the Bugzilla URL?

Andrew January 1, 2009 12:37 PM

There have been a number of comments along the lines of “I verify my certificates, so this concerns me very much.” I also verify my certificates, and refuse to conduct sensitive transactions with a site with bad certificates. However, being readers of Bruce Schneier’s blog, we are not typical Internet users. If a bad guy is going to pull off a MITM, he probably won’t even bother to forge a certificate, since so many people will just ignore the SSL warnings. And even then, since attacking the network infrastructure is so hard, bad guys will probably just go attack the endpoints instead of bothering with a MITM.

blueg3 January 1, 2009 1:26 PM

I think just as relevant as a CA agreeing to sign certs using MD5 (which they shouldn’t do) is that they sign certs automatically, with the purchaser able to control or predict the entire signed object. If the automatic signer introduced 128 bits of entropy into the cert before it was signed (nonsequential IDs, less-predictable validity times), this attack would fail.

QQ January 1, 2009 2:03 PM

You ignore the warning about invalid certificates? Because the only ones I get are real MITM attacks. Granted, it’s only “Smart”Filter trying to feed me a password form when I’m trying to use gmail, but I still don’t allow it…

tiny voice January 1, 2009 2:08 PM

“If the automatic signer introduced 128 bits of entropy into the cert before it was signed… ”


Considering only a 128-bit digest, a 64-bit random header should be sufficient entropy.

I’m glad you brought this up, though. This deep and enduring lesson shouldn’t be lost in the noise: NEVER SIGN A CHOSEN TEXT!

Fwiw, these researchers do bring this up in their recommendations for certificate authorities when they state:

“To prevent chosen prefix attacks, they can add a sufficient amount of randomness to the certificate fields, preferably as far to the start of the certificate as possible. The serial number is a good field to use for this, as it comes very close to the beginning of the certificate, and it allows for 20 bytes of randomness. Many Certification Authorities seem to use already random serial numbers, albeit probably for different reasons.”

But that observation isn’t general enough nor imperative enough.
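The recommended defense is small enough to sketch. A hypothetical CA-side helper that draws the serial number from a CSPRNG, assuming the RFC 5280 limit of 20 octets for serial numbers (the function name is mine, not from the paper):

```python
import secrets

def random_serial_number() -> int:
    # RFC 5280 caps serialNumber at 20 octets and requires it to be
    # positive; drawing 159 random bits keeps the DER encoding within
    # 20 octets without a leading sign octet.
    return secrets.randbits(159)

# Because the attacker cannot predict the serial, the to-be-signed
# certificate is no longer a fully chosen text: a chosen-prefix collision
# precomputed against a guessed serial won't match what the CA signs.
serial = random_serial_number()
assert 0 <= serial < 1 << 159
```

Since the serial sits near the start of the certificate, the unpredictable bits land before the attacker-controlled fields, which is exactly where the researchers say the randomness does the most good.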


Pat Cahalan January 1, 2009 3:28 PM

@ Peter Pearson

What does everybody else do? Tell her to drive down
to the bank? That won’t work for mine.

I tell mine that fundamentally, she should never really trust a computer that is hooked up to the Internet. You wouldn’t feel comfortable if you walked up to a bank and instead of seeing an ATM machine you saw a general access computer terminal running a consumer operating system; why in heavens do people continue to insist that they can trust their own PC for financial transactions?

You shouldn’t use a general-purpose PC for banking, period. Millions of home PCs are known to be compromised – the last number I can recall off the top of my head is over six million worldwide (of which over 10% are in the U.S.), and that’s a couple of years old; I’m dead certain it’s much higher now.

Would you ever use an ATM machine if you knew that even 1% of the bank-owned and operated ones were known to be hacked and serving out your money to other people? Some people might; the bank can at least be held responsible for transactions that are executed with their own equipment.

Not so much if you’re using your own gear.

Sam Thomas January 1, 2009 3:32 PM

Gordon Messmer,

I should have been a bit more specific. Obviously I’m aware that this is a root CA compromise – they falsified an intermediary cert, amusingly issued to “MD5 Collisions Inc.”

I did mention the potential for issuing false certs that can’t be distinguished from the real ones as a man-in-the-middle attack (though I didn’t identify it by name). With a good enough phishing site disguised to look like the real thing and DNS redirection hack, the false cert that verifies as if it were real is the final bit to make the site pass as genuine.

I guess I kind of expected that, for most of the bloggers that would follow Schneier’s blog, the fact that the threat is one derived from a significant hack further up the CA chain was obvious and didn’t need to be explicitly mentioned (obviously, you knew it and told me so).

Still, thanks for trying to set me straight. You and I may understand it, but the way I said it may give others the wrong idea about the scope of this attack. It can be a little difficult to collect one’s thoughts in a 10 row, 40 column box. Even after proofreading my entries sometimes seem scatterbrained. 😉

Admittedly, it would take a pretty significant amount of computing resources (and a little bit of luck) to duplicate this, so I agree with Bruce that it’s not something we should lose much sleep over. It’s not like there are an infinite number of collisions in the MD5 algorithm, but there are obviously some. It does make me wonder why they selected the Equifax eBusiness CA cert over others – did they try quite a few in an attempt to find one (a shotgun approach), pull one (un)lucky winner at random, or select this one specifically for some reason (analysis suggested it was particularly ripe for this attack)?

Despite the fact that this attack was carried out on 200 PS3s, it could obviously be coded to function on a large enough botnet, though they’d risk both detection and intentional meddling with the results.

I have to agree with the observation that Pat Cahalan made: of the two goals SSL set out to solve, providing authoritative identification is certainly the weaker of the two. I wouldn’t count it as completely worthless though, just of diminished value to a society that clicks through security warnings and never verifies authorities.

Out of curiosity, what percentage of people are willing to click through browser warnings and how effective are the various warnings? Anecdotally, I’ve witnessed it over-the-shoulder of users, but is there any scientific research on the subject? It sounds like the new Firefox warning is somewhat more stern, but IE7 uses a similar one these days too (big white page that looks like a browsing error rather than a popup). I suspect the original popup boxes, the ones that look like every error message box in Windows, were clicked-through a lot more often than the new one.

Sam Thomas January 1, 2009 3:50 PM

Pat Cahalan,

You said “Would you ever use an ATM machine if you knew that even 1% of the bank-owned and operated ones were known to be hacked and serving out your money to other people? Some people might; the bank can at least be held responsible for transactions that are executed with their own equipment.”

I think you’ve hit the nail on the head here. With credit cards, the issuer eats the losses associated with fraud (as long as you inform them of it within a reasonable amount of time). With banking, usually no such protections are extended. I do think that most of us are reasonably safe in that a majority of compromised machines seem to be used less for harvesting banking data than for inundating people with advertisements, spam or coordinating the occasional DDoS attack on someone a botnet herder doesn’t like.

It’s probably wise to make a distinction between legally protected financial transactions and those that aren’t. It’s not so much that most people are vulnerable to direct fraud, it seems, as to identity theft. That can severely harm an individual’s credit rating, even if they are exonerated of wrongdoing and don’t have to pay a dime of the fraudulent charges. Credit ratings are notoriously difficult to correct, and the big three are very resistant to doing so even when fraud is proven.

I commend you for your very astute observations.

Myoukochou January 1, 2009 5:12 PM

I do think this is a very nice piece of work, if only because it’ll poke the CAs still (incredibly) signing with MD5 into using at least SHA-1 instead, which isn’t exactly a comprehensive improvement, but – well, it’s a start.

Verisign’s RapidSSL have, in fact, finally stopped using MD5 in response to this.

I do wonder what would happen if SHA-1 acquired similarly effective chosen-prefix attacks. It could happen given the 2005/2006 work; in fact, I reckon it probably will happen in a couple of years, but I don’t think anyone’s managed it yet.

And would we see an actual movement to SHA-2 in TLS/X.509 given the sheer standards inertia involved? Probably not before SHA-3’s been picked. 🙂

And of course as Bruce astutely points out, endpoint attacks are easier in practice and 95%* of users who aren’t using Firefox 3 will click right through the MITM warnings. But that doesn’t necessarily mean we should give up. It’s just another shell in the great arms race.

* Statistic made up on the spot. I would like to see a study on this; I wouldn’t be surprised to see depressingly higher results.

Vin January 1, 2009 10:59 PM

Gordon Messmer wrote:
“The problem is that your browser (and mine) trusts certificates signed by RSA Data Security and Thawte (among others). Those signers have, until now, been willing to sign certificates with MD5 hashes.”

Actually, this is an all-VeriSign show. AFAIK, all six of the CAs Sotirov et al. listed as using MD5 are owned or controlled by VeriSign. The “RSA Data Security” CA that they listed as vulnerable was transferred from RSA to VeriSign in 1995, back when RSA spun VeriSign off as an independent entity.

This “RSA Secure Server Certification Authority,” serial number 02:41:00:00:01, was created in November 1994, but it was supposed to no longer be valid after Dec 31, 1999.

RSA itself, now a division of EMC, endorsed and promoted MD5 in 1991, but started urging developers to shift from MD5 to SHA1 in November 1996. Today, RSA still controls two root CAs. I don’t think either has ever used MD5 in their signing procedure; both rely upon SHA1. Unfortunately, a lot of journalists and bloggers just assumed that the RSADSI CA listed in the Sotirov paper referred to a CA today owned by RSA/EMC. OTOH, even if their grasp of industrial history was a little shaky, both the Web and print media offered all sorts of neat diagrams illustrating how to corrupt PKI.

Andrew January 2, 2009 3:12 AM

I am quite surprised that even Bruce wouldn’t check SSL certificate warnings. If he doesn’t check, what hope is there for the rest of us?

An SSL certificate provides vital (and the ONLY) security for your online transactions. If I were to visit my bank’s website or make an online purchase, I would definitely tread very carefully if I saw such a warning.

Calum January 2, 2009 3:32 AM

This is classic economic security incentives gone wrong. The website you are communicating with has no incentive to make sure their certificate comes from a reliable CA. Your browser has no incentive not to include untrustworthy CA roots. Browsers should be shipped without any CAs, and end users should buy or subscribe to a root that they feel they can trust.

If you haven’t looked at the list of root certificates in your browser recently, have a gander. The list of people that you “trust” is pretty scary.

Cassandra January 2, 2009 5:02 AM

The biggest issue I find with validating certificates is not that browsers complain about invalid ones, but that a vanishingly small number of legitimate organisations provide a means of independently verifying certificate signatures/fingerprints.

As an experiment, I ‘removed’ the root CA certificates from my browser (this is surprisingly difficult to do in Firefox), so all SSL access throws an exception. I learned that few organisations expect people to check certificate fingerprints; and that many organisations use certificates from multiple authorities, even during the same transaction.

My original reason for doing this was to check if a new root CA had been installed on one of my laptops that allowed an SSL proxy to do a MITM on my traffic – thankfully not, but it exposed the can of worms that SSL, root CAs, and certificate validation actually is.


Lennie January 2, 2009 6:47 AM

The problem with https is, it’s as strong as the weakest link. Every party you (or your browser vendor) trust(s) can sign anything. So if one does something stupid, it really doesn’t matter if the others are any good.

Lennie January 2, 2009 7:02 AM

@Sam Thomas wrote:

“Admittedly, it would take a pretty significant amount of computing resources (and a little bit of luck) to duplicate this”

Actually, I think they proved that no CA should give out these kinds of certificates anymore. And the CA’s acted accordingly. So there won’t be any duplication.

I do think CAs should shape up; the recent Comodo reseller incident and now RapidSSL show these folks don’t do their work well. The serial number, for example, was very predictable (incremented by one each time). I don’t know if the audit processes suck or not, but this is getting silly.

Jonadab the Unsightly One January 2, 2009 7:17 AM

users just click “go there anyway”
when they get a security warning dialog.

Of course they do. The major causes of these warning dialog boxes are all crying wolf. The user’s CMOS battery is kaput and his computer thinks it’s 2069 and the cert expired decades ago. The site experienced a change of personnel sometime in the last year, and nobody put the cert expiration date on the new guy’s calendar. (This is VERY common, though it usually only lasts a few days.) The site has multiple domain names, which all point to the same IP address, and for some reason you’re not using the canonical one that the certificate refers to. (Sometimes even a subdomain of the correct domain is considered a mismatch by overly-pedantic browser checks, but even if it’s really a mismatch it’s often just because the site followed standard practice of registering .com and .net along with their .org but only spent the money for one cert, and you’ve arrived via one of the non-canonical addresses.)

This is one of many real dangers to Internet
users. I usually pay attention to such warnings,
and feel VERY vulnerable to this spoof. How are
we (IT professionals) able to advise our clients?

Advise them not to give out dangerous information that needs to be kept private. If you don’t trust anyone, then it doesn’t matter if an SSL cert is forged, because one entity you don’t trust is impersonating another entity you don’t trust.

Now, everything I have said up to this point is in reference to https specifically, not SSL generally. The considerations are different for other uses of SSL. But as a general rule the people using things like ssh are better able to educate themselves on the relevant security issues than many of the people visiting https sites. As for https, I consider it to have, in practice, almost exactly the same level of security as unencrypted http. The supposed difference is mostly theoretical.

Jonadab the Unsightly One January 2, 2009 7:54 AM

the most practical way to break bank security

I suspect the most practical way to break bank security is probably to forget about the bank itself and get a database full of credit card numbers from another source. You know, get copies of a commercial site’s backup tapes or something.

Most invalid SSL certificates that I encounter are for web pages that I don’t even care to have SSL for at all.

I don’t do anything on the web that actually needs to be encrypted. (I do use SSL for things that legitimately need encryption, but now we’re talking about ssh, not https, so certificate authorities are irrelevant there.)

I don’t think very many people will conduct a transaction with a company whose web server throws blatant certificate warnings

Years of extraneous application-modal dialog boxes presenting non-critical information, which in many cases a normal user has absolutely no hope of ever understanding, have trained everyone to just click through everything. Now you’re visiting an HTTPS site. You’re now visiting an unencrypted site. Oh, no, do you really want to send these search terms to Google unencrypted? Do you really want to close Solitaire? DNS resolution failed for the address you mistyped. The server is not responding. You need the latest version of YetAnother Plugin to view this content, do you want to install it? The SSL cert for the website you are trying to view expires in 2012 (and your computer thinks it’s 2069 because the CMOS battery gave up the ghost three years ago), are you sure you want to trust it?

Phillip January 2, 2009 8:23 AM


True, when an SSL cert fails to validate on my twitter account, pffft, okay. When an SSL cert fails to validate on my Credit Union, I’m not going to log in; I’m calling the Credit Union.

Clive Robinson January 2, 2009 9:30 AM

@ Jonadab the Unsightly One,

“As for https, I consider it to have, in practice, almost exactly the same level of security as unencrypted http. The supposed difference is mostly theoretical.”

Err No and No…

The level of security is most definitely worse, and the reason is not theoretical at all.

The security is worse for a couple of reasons:

1. First, there is the fairly obvious user-security issue of the false premise “it’s secure, so I’ll send my personal information (etc.)”.

2. Second, as a business or other organisation with information you may not want released, HTTPS effectively blinds you to outgoing information sent by employees or MALWARE etc.

The first issue arises because “Jo Average” users don’t have the background to understand the somewhat complex issues involved (or, understandably, the interest to learn). And likewise, as noted by others, there is not enough time in the world to make them understand…

However, it is really the second issue that gets up my nose the most. (After all, do I really care if an uninformed user gives away their personal details? NO, it’s their choice. But business info? YES, it’s what we are paid to protect, both morally and legally.)

For a Security Admin, HTTPS or any other encrypted end-to-end traffic is a nightmare not just waiting to happen: you know without doubt it will come back to haunt you one way or another, any time real soon.

From a SecAdmin perspective the only sensible precaution (due to malware etc.) is not to allow HTTPS traffic at all (just say no…).

That is a sane and cost-effective solution that can not only be implemented relatively easily at a choke point such as a firewall, but, importantly, can be easily checked at any time.

However in the real world this almost always comes into conflict with the users who need HTTPS to access online services.

Not all of these services are non-business-related (think paid-for business information services etc.). But now that people are starting to use “social networking” sites as ways of doing “business networking”, these services are turning into another unexpected business resource, as e-mail and SMS services have (and will have to be catered for to maintain “business advantage”). So, unlike the drug advert, you cannot “just say no”: your users are hooked.

Inevitably there will be a person of influence (like an accountant or senior marketing droid) who will object to HTTPS being blocked or controlled.

And they or someone similar will come up with a spurious “cost saving” reason to have the port fully unrestricted, by denying the resources to make it more secure in one way or the other (the “not on my budget” syndrome).

If you try to adopt a restricted HTTPS solution at the firewall, it is expensive in resources to operate and often leaves users feeling like they are being spied on (sometimes justifiably so, in these leaner, meaner credit-crunch times).

And if you (are daft enough to) suggest the sensible solution of having no HTTPS or other encrypted traffic from internal sensitive areas, and put forward the idea of a “canteen system” of PCs that employees and others can use in their own time, that is usually seen as an unwarranted expense by the “person of influence”, or it comes out of your budget without a corresponding allocation…

You end up with either no real security, or the expense and hassle of tying down access from particular machines to particular external hosts/services.

Which also affects your internal flexibility, unless you have a system tied into access control (firewall and login service), which is invariably problematical.

Then of course there are all the issues of the external hosts and services changing something or letting it expire etc etc.

You end up wishing you had the Mil/Gov option of being able to “just say no” and back it up with the law and a well-armed Marine when a user gets “cute” or “influential”…

MrObvious January 2, 2009 1:17 PM

The reason this is so problematic is that the browsers have MD5 root certs “built-in”, hence creating the implicit trust.

So… why not just delete all the MD5 root certs from your browser? Problem solved!!

Or better yet, Microsoft, Firefox, etc. could simply issue an update for the browser to not allow/use MD5 root certs.

Any CAs that still use MD5 root certs (which should have been replaced by now anyway) simply need to work to get their non-MD5 root certs “installed” into the updated browsers. Microsoft has provided a “root cert update” in the past, so this should not be a big problem.
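
As a sketch of what such an update could check: the signature algorithm of an X.509 certificate is identified by an OID, and md5WithRSAEncryption is 1.2.840.113549.1.1.4. The toy check below just sniffs the DER bytes for that OID; a real implementation would decode the signatureAlgorithm field with a proper ASN.1 parser, and the “certificates” here are made-up blobs:

```python
# DER encodings of the signature-algorithm OIDs:
# md5WithRSAEncryption  = 1.2.840.113549.1.1.4
# sha1WithRSAEncryption = 1.2.840.113549.1.1.5
MD5_WITH_RSA_OID = bytes.fromhex("06092a864886f70d010104")
SHA1_WITH_RSA_OID = bytes.fromhex("06092a864886f70d010105")

def possibly_md5_signed(der_cert: bytes) -> bool:
    """Crude flag: does the blob contain the MD5-with-RSA OID anywhere?"""
    return MD5_WITH_RSA_OID in der_cert

# Invented stand-ins for DER-encoded certificates:
fake_md5_cert = b"\x30\x82\x01\x00" + MD5_WITH_RSA_OID + b"\x00" * 16
fake_sha1_cert = b"\x30\x82\x01\x00" + SHA1_WITH_RSA_OID + b"\x00" * 16

print(possibly_md5_signed(fake_md5_cert))   # True  -> warn or reject
print(possibly_md5_signed(fake_sha1_cert))  # False
```

A browser update along these lines would refuse to build a chain through any MD5-signed certificate, regardless of which root it hangs off.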

Milo January 2, 2009 4:28 PM

Well, if “you don’t give a shit” (whoever you are) it’s silly to write stuff to accept all SSLs. Without verification they are useless. Perhaps it’s better for you to reject HTTPS?

The Raven January 2, 2009 5:09 PM

If we want people to use certificates, we have to make high-quality certificates inexpensive enough so that people don’t need to go to unreliable authorities to get them. Sounds like a real job for government identity agencies. Krawk!

(And while we’re at it, we could probably close up the end-user computer problems with hardware solutions.)

Myoukochou January 3, 2009 8:04 AM

I propose that vendors providing TLS implementations (OpenSSL, GnuTLS, YaSSL, NSS, Microsoft) should now move MD5 into the so-called “LOW” set of ciphers that are no longer trusted and won’t be negotiated; in the same bucket with the old 40-bit “export” symmetric ciphers.

I think that’s the right solution; it won’t require removing individual root authorities one by one. *pokes Thawte*

Perhaps it’s time to re-examine whether we want 1024-bit RSA certification authorities to remain, too. The CAs who haven’t rolled out new keys should perhaps do so.

That’s the effective, general mitigation against this; MD5 signatures are effectively broken and cannot be trusted.
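
At the cipher-suite level, the proposed demotion looks something like the sketch below (Python’s ssl module used purely as an illustration; note that rejecting MD5 *certificate signatures*, which is what this attack exploits, additionally requires a change in the X.509 validation code, not just the cipher string):

```python
import ssl

# Build a client context that will never negotiate a cipher suite
# involving MD5, mirroring the "move MD5 into the untrusted bucket"
# proposal above.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("DEFAULT:!MD5")

# Inspect what the context would actually offer.
names = [c["name"] for c in ctx.get_ciphers()]
print(any("MD5" in n for n in names))  # False: no HMAC-MD5 suites remain
```

The same cipher-string syntax works in OpenSSL’s configuration and on the `openssl ciphers` command line.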

Clive Robinson: Please show how blocking https: provides any meaningful security benefit in a corporate environment.

If you actually have truly confidential data, and you don’t trust employees who have access to that data not to intentionally leak it, you have a serious security problem well beyond the realm of network administration.

If you’re really worried about leaks, you have bigger problems: disconnect from the internet entirely; hotglue all Firewire and USB ports; physical search in and out allowing no electronic devices/media; install monitoring software; brick up all windows in the secure area. Oh, and more fundamentally, hire people you trust. That last one might be a bit cheaper.

If you just read the BOFH and you absolutely insist you want to spy on the users, don’t block https: – that’s the dumb way round. They’re your computers; install (and disclose the installation of) monitoring software. Endpoint attack -> Game over.

Don’t mess around with blocking protocols. It’s a pointless waste of resources that makes enemies of your own users – enemies to which you will inevitably lose when they are forced to circumvent your security policies to get work done.

Clive Robinson January 3, 2009 12:22 PM

@ Myoukochou,

“Clive Robinson: Please show how blocking https: provides any meaningful security benefit in a corporate environment.”

First off it’s not just HTTPS it’s all encrypted traffic.

Secondly, it’s not a matter of trust in employees; it never was and it never will be (it is they who think it is an issue of trust).

However, as a passing note, about 0.1% of the population in the UK is currently under lock and key (that’s the maximum capacity we have, and it’s full all the time). Further, the “criminal population” (those with a conviction or arrest) is something like 30 times bigger than this. So, on the law of averages, 1 person in 30 has had their collar felt. When you then consider that this represents only a small fraction of those breaking the law (say 10%), you suddenly find that 1 in 3 people has done something illegal at some point.

So the opportunity for blackmail is there, as is possibly the inclination to commit a crime if the odds are favourable.

The scenario of the salesperson changing jobs and taking the customer list along is well known, and the customer list being made available to the competition only slightly less so.

Then there are user issues: just how many people’s identity details have been lost or compromised via data that has gone missing?

The known figures for last year in the UK suggest it’s around 10% of the population per year (I personally have had my medical records lost three times in two years).

So, on balance, there really is no sensible way you can trust the employees in an organisation, is there?

But to get back to the real issues.

I’m not sure which part of the world you are in, but in some places there is a minor problem of the law, be it civil or criminal, of which civil is by far the most hurtful to an organisation.

Most organisations “need” (or feel they need) to be connected to the internet these days, and malware is an everyday (every minute?) occurrence, as are insecure applications. A lot of malware uses non-plaintext to send out information, usually on a recognised port (HTTPS etc.), to reduce the probability of detection.

My vote is plaintext only across the chokepoint to the outside world, unless there is a clear business case accompanied by an appropriate way to lock it down.

Internal “sensitive” data is encrypted but also contains plaintext “canary data”.
Set your IDS to look for all unrecognised protocols and data formats, as well as the canary data, on outbound traffic, and record everything.

If the IDS trips, shut the data stream down and investigate.
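
As an illustration only (the marker and payloads are invented), the canary part of that policy reduces to a containment check on outbound data:

```python
# Toy version of the canary-data check: internal plaintext copies of
# sensitive documents carry a distinctive marker string, and the IDS
# flags any outbound stream containing it. A real deployment would
# scan at the network choke point and also alert on unrecognised
# protocols and data formats.
CANARY = b"CANARY-7F3A-INTERNAL"  # planted in sensitive plaintext copies

def outbound_allowed(payload: bytes) -> bool:
    """True if the payload may cross the choke point."""
    return CANARY not in payload

print(outbound_allowed(b"press release draft, final version"))      # True
print(outbound_allowed(b"customer list CANARY-7F3A-INTERNAL ..."))  # False
```

Naturally this only catches plaintext leaks, which is exactly why the policy pairs it with blocking unexplained encrypted traffic.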

Any other viewpoint is going to get an organisation into problems, such is the way judges are interpreting legislation.

My viewpoint may seem draconian, and I don’t always get my way. But hey, I don’t like having legal people crawling up my a**, nor do the company directors now that it’s happened to them.

Oh, and ask any IT Admin who has had to go through a full electronic “discovery process”, including cross-examination, what it’s like; then have a think about it before you suggest trust and openness as a security measure where unlimited liability is involved.

Myoukochou January 3, 2009 2:44 PM

@Clive: Security (in my opinion) is about appropriate, economical spending of resources to defend against probable threats; including the proper management of trust (or lack thereof).

If you disagree that “blocking encrypted traffic” is (as Bruce would say) “security theatre” which fails to effectively counteract a realistic challenge in your threat model, cite your sources.

Don’t forget to take into account the probability and consequences of false positives (compression) and false negatives (steganography/protocol masquerading).

Your threat model regarding data leaks is more likely USB keys, laptops, iPods and phones walking right out of your front door, as widely publicised last year; proper use of encryption could have mitigated much of that.

However, perhaps we’d better take this elsewhere (my blog, maybe), as this isn’t on-topic in the context of SSL certificate forging. 🙂

Clive Robinson January 3, 2009 5:35 PM

@ Myoukochou,

“Security (in my opinion) is about appropriate, economical spending of resources to defend against probable threats; including the proper management of trust (or lack thereof).”

How do you define “appropriate” or “economical”?

Especially when it comes to the legal profession second-guessing you?

Likewise the “proper management of trust”, when the two definitions of trust are effectively the opposites of each other (personal trust -v- trusted systems)?

Both “appropriate” and “economical” are expressions without a measurand, and are only relative to a perspective which changes constantly.

Take, for instance, the business measures relating to accounting and electronic data before and after Sarbanes-Oxley. What was appropriate and economical before was most certainly not afterwards; your definition of security does not take this change into account, except vaguely in passing (“probable threats”).

And as you should be aware there is a whole host of other legislation you need to take into account depending not only on your jurisdiction but also on who you do business with and their jurisdiction (the UK has amongst others the ECA and RIPA).

Your view of security appears, from what you have said, to be a “technical” one, not one of “liability minimisation”. And from the technical perspective a “plaintext only” policy or ideal does not make that much sense (and I could probably give you a run for your money on arguments from that angle 8)

However the changes in legislation and the fact that the legal profession in the US are now using discovery as a very effective way to enrich themselves makes a technical view point considerably less important than spiking the guns of a team of high paid legal talent using every trick they can to put your organisation on the wrong foot.

With regards your comment,

“cite your sources”

With regards to what and in what form?

On a technical issue you say in your prior post,

“They’re your computers; install (and disclose the installation of) monitoring software. Endpoint attack -> Game over.”

Unfortunately, as you will find, the “concept” of an “endpoint that can be monitored” is just that, a concept; on a modern PC, when it comes to malware, it is not an “actuality”.

Even Microsoft cannot tell you what is happening within their operating system (Unix is a lot better from this perspective, but still nowhere close), which is why rootkits are so effective.

Effectively, the monitoring software will only report what it “is told” at the “points it monitors”. For instance, if malware has infected the appropriate system DLLs, then your monitoring software will not only not see its activities, it will not even be looking for them.
You simply cannot monitor each and every point; the resource usage would be disproportionate (think of the effect anti-virus software has and multiply by the number of monitoring points). All you can do is “black box” the PC and monitor (some of) the inputs and outputs (VM software is a definite plus for this).

The “no encryption across the choke point” rule is a policy, not a port-blocking activity, as “effect” is not “cause”. Like all policies it has mandated exceptions that are known and allowed.

With regards your point,

“Your threat model regarding data leaks is more likely USB keys, laptops, iPods and phones walking right out of your front door, as widely publicised last year; proper use of encryption could have mitigated much of that.”

As you say, that’s the “threat model” of “last year”; this is in addition to that.

Data for internal organisational use is, as I indicated, encrypted. Plaintext versions have “canary data”. Neither should be crossing the choke point under ordinary circumstances, except where allowed by policy.

Unknown exceptions are an indicator that something is wrong and needs to be investigated. Causes include, among others:

1. Malware activity.
2. Inappropriate user activity.
3. Incorrect policy rules or implementation.

Detecting and shutting down exception data streams is a pragmatic response (damage limitation).

What you regard as “false positives” indicates that there is an activity that needs to be accounted for by a user or process. If it cannot be accounted for, then there is a problem. If it can legitimately be accounted for, then the policy rules need appropriate modification.

Your “false negatives” are in fact “covert channels”, which are a subclass of “side channels”, which are in turn a subclass of EmSec.

Side channels are either deliberate (covert) or a consequence of the system behaviour. Strange as it might seem, they are actually not an issue as such currently.

Either they become apparent from a side effect at a later stage or they do not. If they do not have side effects, then there is nothing to investigate; if they do, then the storage of all traffic gives an opportunity for such “side channels” to be identified. But you have to be aware you may embark on a “ghost hunt”.

Having a working knowledge of side channels, I’m more than aware that there is little or nothing you can do to stop them, as they exist as a consequence of the uncertainty in general-purpose computing.

Myoukochou January 3, 2009 7:15 PM

@Clive: A well-thought out comment, if not one I entirely agree with. My view of security is indeed technical, not legal; I’m neither a barrister nor a solicitor.

I’m largely concerned that this type of “compliance” argument will be used more generally to argue against the proliferation of encryption. I’d like to see a little more of it around, for the same reason we put mail in envelopes.

But I think we should probably take this elsewhere if we wish to discuss this further; this is drifting off-topic. By all means contact me if you’re interested.

Norwegian January 4, 2009 6:49 AM

Just to correct a mistake in the article: Dag Arne Osvik currently lives in Switzerland and is at EPFL in Lausanne, but he is not Swiss; he is Norwegian.

-ac- January 5, 2009 10:15 AM

My first thought: the perfect phish
Second thought: somebody impersonating Bruce??!?!
SSL: it’s what we’ve got. Our current user base doesn’t support the next layer of paranoia.

David Hasson January 5, 2009 2:05 PM

I disagree. The same users who furiously click through warnings also do the same for automatic upgrade notices and new fad software. Firefox 3 and Chrome both make it extremely difficult to browse to pages with broken certificates; it’s a 5-click process now.

That being said, I’d also say that everybody still uses MD5 for things other than SSL certs. I think if the crack becomes more accessible within the year, it will be the next “in” thing for kiddies to crack, and it will be more of a problem than you let on.

Davi Ottenheimer January 5, 2009 3:58 PM

“fucknozzles”? that’s a new one. thanks for sharing. goes up there with “asshat”, which it seems always used to come up in debates about SSL in the past.

Kalkin January 5, 2009 9:43 PM

I’m with Philip, above. If there’s a cert problem with some random Web 2.0 site, I just click through. If it’s with my bank, no way. I sometimes have a tougher decision to make when it’s some small web commerce site, but I usually click through… the credit card company is going to end up absorbing any losses, so whatever.

Paeniteo January 8, 2009 5:29 AM

@Kalkin: “the credit card company is going to end up absorbing any losses, so whatever”

Nice example of one of Bruce’s favorite topics: externalities. 😉

John Tomlinson January 21, 2009 9:52 AM

Bruce, in a recent newsletter you state that SSL does not provide much in the way of security. But it depends on how you use it. In inter-application communications, SSL does not have the human element that represents a vulnerability. When an interface is created between two automated applications, if the client finds an error in the certificate, say because it expired (or someone hacked the server end), the interface will fail completely. The data will not be compromised, an error message will be written, and team members will be notified. Hence, SSL is an important, solid cornerstone in the back-office communication of sensitive information.
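
That fail-closed behaviour is the default in most modern TLS client libraries; here it is sketched with Python’s ssl module (the wrapper function is hypothetical, not from the original comment):

```python
import ssl

# Sketch: a machine-to-machine client that fails closed on a bad
# certificate. create_default_context() enables certificate and
# hostname verification, so wrap_socket() raises
# ssl.SSLCertVerificationError on any invalid cert; the handshake
# aborts and no application data is ever sent. The exception can be
# logged and escalated to the team.
def open_backoffice_channel(sock, host: str):
    ctx = ssl.create_default_context()
    return ctx.wrap_socket(sock, server_hostname=host)  # raises on bad cert

# The defaults that make the interface fail closed:
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED, ctx.check_hostname)  # True True
```

With no human in the loop there is no one to click through the warning, which is exactly why strict verification works here.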

pfn June 10, 2009 8:09 AM

“pedantic fucknozzles” is truly a good one 🙂

In this context, the worst-ever pedantic fucknozzles are the ones who worked on Firefox 3.0, which turned the perfectly good “show anyway” alert into 5 clicks on 3 different screens, with idiotic presets.
The “someone who has never tried to fetch real-world webpages” line applies here too. Why on earth would I want to permanently save (this is the default setting!) a certificate that my browser warns me about and forces me to click through 3 confirmation dialogs?

On top of that, Firefox patronises you so much as to refuse to load certain sites entirely because of certificate issues (currently, for example, one site I visit regularly). The page is in the favourites, it pops up as the first thing in the useless pseudo-smart “superbar”, so obviously the browser figures that you have been visiting that site regularly. Yet it denies access to it and doesn’t even allow a “show anyway” option. What’s the end result of all that smartness?
You use Internet Explorer.

Anonymous June 10, 2009 2:18 PM

Why not just avoid SSL/TLS altogether, if OpenSSL etc. are just remote-control programs?

CryptoKing June 14, 2009 3:46 PM

This is absolutely a browser issue.

The only way to solve the problem is to remove support for MD5 from all browsers.

Even if everyone stops issuing these certs and all of them have expired, it would still be possible to exploit this by making a differential change to existing MD5 certs such that the signature is not altered by changes to the date field, in a bid to “resurrect” expired certs.

Signatures should be at least as secure as SSL’s PRF.
