UAE Man-in-the-Middle Attack Against SSL

Interesting:

Who are these certificate authorities? At the beginning of Web history, there were only a handful of companies, like Verisign, Equifax, and Thawte, that made near-monopoly profits from being the only providers trusted by Internet Explorer or Netscape Navigator. But over time, browsers have trusted more and more organizations to verify Web sites. Safari and Firefox now trust more than 60 separate certificate authorities by default. Microsoft’s software trusts more than 100 private and government institutions.

Disturbingly, some of these trusted certificate authorities have decided to delegate their powers to yet more organizations, which aren’t tracked or audited by browser companies. By scouring the Net for certificates, security researchers have uncovered more than 600 groups who, through such delegation, are now also automatically trusted by most browsers, including the Department of Homeland Security, Google, Ford Motors, and a UAE mobile phone company called Etisalat.

In 2005, a company called CyberTrust—which has since been purchased by Verizon—gave Etisalat, the government-connected mobile company in the UAE, the right to verify that a site is valid. Here’s why this is trouble: Since browsers now automatically trust Etisalat to confirm a site’s identity, the company has the potential ability to fake a secure connection to any site Etisalat subscribers might visit using a man-in-the-middle scheme.

EDITED TO ADD (9/14): EFF has gotten involved.

Posted on September 3, 2010 at 6:27 AM • 58 Comments

Comments

Paul Renault September 3, 2010 6:41 AM

…and if you decide to not trust a particular certificate authority that Microsoft uses (or more to the point, that IE uses) and delete that particular certificate, Windows will just redownload it for you…(to make things easier, y’know).

Firefox doesn’t use the Windows list. I believe that deleting a Firefox certificate leaves it deleted (…until the next upgrade?). Does anyone know the rules for these?

vwm September 3, 2010 7:18 AM

@Paul: Looks like you cannot delete the Etisalat certificate in question – at least I did not find it in my cert stores. So I’d need to delete the underlying CyberTrust certificate, which again might cut off a bunch of legitimate sites as well.

Btw, I’d rather disable or distrust such a certificate – so my computer can issue a serious warning – instead of deleting it and thereby causing yet another unknown-certificate error.

Mark R September 3, 2010 7:37 AM

If this article prompts you to check out your browser’s security settings, check whether your browser checks for certificate revocation. I was surprised to find that most don’t by default.
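
To see where those revocation checks would even go, here is a rough sketch that lists the CRL and OCSP endpoints advertised in a site’s certificate (Python, using the stdlib ssl module plus the third-party cryptography package; the hostname is just a placeholder):

```python
# Sketch: list the revocation endpoints (CRL / OCSP) advertised in a
# site's certificate. Requires the third-party "cryptography" package.
import ssl

from cryptography import x509
from cryptography.x509.oid import AuthorityInformationAccessOID

HOST = "www.example.com"  # placeholder

pem = ssl.get_server_certificate((HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

try:
    crl = cert.extensions.get_extension_for_class(x509.CRLDistributionPoints)
    for dp in crl.value:
        for name in dp.full_name or []:
            print("CRL: ", name.value)
except x509.ExtensionNotFound:
    print("no CRL distribution points listed")

try:
    aia = cert.extensions.get_extension_for_class(x509.AuthorityInformationAccess)
    for desc in aia.value:
        if desc.access_method == AuthorityInformationAccessOID.OCSP:
            print("OCSP:", desc.access_location.value)
except x509.ExtensionNotFound:
    print("no authority information access extension")
```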

Cornerstone September 3, 2010 7:39 AM

If you delete a CA from the built-in list in Firefox, it will re-appear in the list upon restart, but the default settings will be changed to “not accept”, which means you should get a prompt. I’ve now done this to a lot of the strange CAs in the list.

This whole CA thing has been a problem for a while and there’s no way to know who has Intermediate Certificates that can be used for MITM filtering. One should probably assume any organisation with significant resources can do this now if they can link into the routing.

There is dedicated hardware being sold that you just drop a certificate into and it handles the whole MITM process inline for arbitrary sites and stores the clear text data.

Firefox should be storing fingerprints of certificates for comparison with later visits but I haven’t heard of plans for that. Everyone wants to continue believing in the trust of CAs worldwide but it certainly isn’t very robust any more.
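
A rough sketch of what that kind of fingerprint pinning could look like (Python; the pin-store path and hostname are hypothetical, and a browser would of course keep such a store internally):

```python
# Sketch: SSH-style "trust on first use" pinning of server certificate
# fingerprints. Store path and hostname are hypothetical.
import hashlib
import json
import os
import ssl

STORE = os.path.expanduser("~/.cert_pins.json")

def fingerprint(host, port=443):
    """SHA-256 fingerprint of the certificate the server presents."""
    der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((host, port)))
    return hashlib.sha256(der).hexdigest()

def check(host):
    pins = {}
    if os.path.exists(STORE):
        with open(STORE) as f:
            pins = json.load(f)
    seen = fingerprint(host)
    if host not in pins:
        pins[host] = seen                     # first visit: remember the cert
    elif pins[host] != seen:
        print("WARNING: certificate for %s has changed!" % host)
    with open(STORE, "w") as f:
        json.dump(pins, f)

check("www.example.com")
```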

At least with SSH keys you have control over both ends and don’t hand out subkeys that would allow eavesdropping. I think this is the real story behind the Blackberry et al news lately. These governments aren’t pressuring SSL users because they know how to deal with that. They’re focusing on the ones that have no intermediate keys.

I. Care September 3, 2010 8:42 AM

This has irked me for some time.

However, I think about 99.999% of users have no idea how any of this works.

I work for a company with over 15,000 employees and maybe 5 of us know what’s going on under the hood, and only one of us cares.

Mr. Potato Head September 3, 2010 8:55 AM

The title of the article made me laugh.

Malvin: I can’t believe it, Jim. That girl’s standing over there listening and you’re telling him about our back doors?

Jim Sting: Mister Potato Head! Mister Potato Head! Back doors are not secrets!

Moritz Naumann September 3, 2010 9:24 AM

The Monkeysphere – http://web.monkeysphere.info/ – is a nice way to fix the centralized certification issue of X.509 by integrating peer-based trust relationships into the system. In simple terms: if your trusted friends know and trust the admins of some webserver, isn’t that a much better indicator than some central authority whose policy you do not really know?

Getting rid of centralized certification obviously doesn’t work for companies, but there is no need to abolish CAs if you use the Monkeysphere; it just adds to your existing PKI and will not break what you had before.

Cornerstone September 3, 2010 9:30 AM

@nine,
I use that plugin but IMO Firefox should have it built in. I expect that would alarm too many people though and many users would disable it. It’s astonishing how few users download it.

John Campbell September 3, 2010 9:47 AM

I recall, in James Burke’s “The Day the Universe Changed”, a section on how cultures establish an “authority” to handle – and specialize in – specific tasks that are rather more difficult to manage in a distributed fashion.

That such systems can be subject to corruption should come as no surprise since every human has their own “slant” on things even if it doesn’t rise to the level of qualifying as an agenda.

Also, consider that the various organizations that handle certificates ARE commercial entities rather than governmental (and governmental would not help here, since few of us trust the governments we live under), and so are trying to please their shareholders; to do that, they cut spending on customer service, and reducing costs (by off-shoring) helps them do so.

The hell of this is that I cannot see any way to form a certificate-managing firm that isn’t going to be distrusted by a significant majority of “competent” customers (and, face it, when it comes to certificates, would you consider 0.01% of the users of IE and other browsers to be “competent” within this purview?).

If there is enough money involved then “trust” will be abused. Barnum said it first.

Roger September 3, 2010 10:26 AM

I guess while we’re on the topic, could Bruce confirm that the fingerprint for his cert is:
6E:E4:60:2B:E6:B6:7F:F1:A9:2C:81:B0:30:F2:21:DF:FF:9A:2A:88
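
For anyone who wants to check a fingerprint like that themselves, a minimal sketch in Python (note that it only tells you about the certificate you are being served, so it proves little if your own connection is already being intercepted):

```python
# Sketch: compute the SHA-1 fingerprint of the certificate a server
# presents, in the colon-separated form quoted above.
import hashlib
import ssl

pem = ssl.get_server_certificate(("www.schneier.com", 443))
der = ssl.PEM_cert_to_DER_cert(pem)
sha1 = hashlib.sha1(der).hexdigest().upper()
print(":".join(sha1[i:i + 2] for i in range(0, len(sha1), 2)))
```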

Also,
a) why does https://www.schneier.com/blog redirect to http://www.schneier.com/blog, even though all subordinate links of https://www.schneier.com/blog work fine?
b) why are all the link refs explicitly set to http, so even if you view a page on SSL, every link sends you back to plain HTTP? (Unless you manually edit it, which is tedious …)

Ben September 3, 2010 11:06 AM

How to “UN TRUST” a certificate in Internet Explorer:

If you wish to “not trust” a certificate which would otherwise be trusted, you can do this by adding the certificate to the “Untrusted Certificates” store.

It will then NOT be trusted for any purpose.

You can do this from the certmgr.msc GUI.

Henrik Holst September 3, 2010 12:17 PM

@roger,

Regarding b): it’s possibly because very few web page designers know that hrefs can be stated as “//www.schneier.com/blog/xxx” when using absolute paths (using // instead of http(s):// lets the browser insert the transport protocol used on the current page), so they enter “http://www.schneier.com/blog/xxx” instead; then, when trying to design the https site, they don’t fix it because they have no idea how to convert all those hrefs.

Gert, South Africa September 3, 2010 12:53 PM

For temporary MITM attacks, it would help if browsers cached certificates for domains and alerted the user if the certificate changes without the old certificate (almost) expiring or being revoked. A service comparing certificates received by several users might also make sense (with the certificate for the checking service being hardcoded in the browser).

It should also handle privately signed certificates like SSH keys. Giving an error-like message on a dodgy certificate while opening unencrypted pages without notice doesn’t make sense… (It should, however, show a different indicator than a fully trusted site – a lock with a red question mark?)

Neil September 3, 2010 2:29 PM

Maybe an enterprising UAE resident could have a look at the https certificates that they are receiving for a few choice sites to see if they have CyberTrust on them. I’d start with Gmail and Hotmail, I guess…
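
A rough sketch of that check (Python; the two hostnames are just stand-ins, and it prints the organization that issued whatever certificate you are actually served):

```python
# Sketch: print who issued the certificate each site presents to you.
import socket
import ssl

def issuer_of(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a sequence of RDNs, e.g. ((('organizationName', '...'),), ...)
    return {k: v for rdn in cert["issuer"] for (k, v) in rdn}

for site in ("mail.google.com", "www.hotmail.com"):   # stand-in hostnames
    print(site, "issued by", issuer_of(site).get("organizationName"))
```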

Will September 3, 2010 3:32 PM

Whilst everyone might feel sorry for UAE residents…

The exact same attack is completely open on all the networks you use daily to anyone in a position of power between you and the site.

Do we really think that the NSA or whoever you think the boogie man is couldn’t get a signing certificate signed by, say, Verisign?

The problem with PKI is trusting the PKI in the first place.

And webs of trust are as strong as the weakest link, so please don’t go suggesting that solves anything http://xkcd.com/364/

Jarno September 3, 2010 4:30 PM

About the only way for SSL and other CA-based systems to work at least somewhat is to have one CA per customer entity.

For example, if Google were served only by Verisign, or whatever CA they decide to use, it would not matter what other CAs issue; their google.com certs would not be valid.

A similar model works pretty well for domain names: if I have my domain name at one registrar, no other registrar is going to be able to do anything with it unless I transfer the domain.

The same kind of model would prevent the kinds of attacks we have seen when some CA is strong-armed into issuing a false certificate.

antibozo September 3, 2010 4:39 PM

It’s hardly news that the X.509 infrastructure is fundamentally broken. It starts with the fact that there’s no way to constrain a CA to authenticating a subset of the DNS hierarchy.

This is why a DNSSEC-based PKI is a much better solution. With public keys stored and signed in the DNS, the scope of a PKI private key compromise is limited to the tree rooted at that DNS cut. And since people are then free to publish public keys for any email address and any hostname they like, we can have ubiquitous strong encryption and authentication, even as far as opportunistic IPsec.
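
As a very rough sketch of the lookup side of that idea (Python with the third-party dnspython package; the hostname, the use of a TLSA-style record, and its “full certificate, SHA-256” parameters are illustrative assumptions, not anything specified above):

```python
# Sketch: compare the certificate a server presents against association
# data published (and DNSSEC-signed) in the DNS. Assumes records whose
# last field is a SHA-256 digest of the full certificate.
import hashlib
import ssl

import dns.resolver  # third-party: dnspython

HOST = "www.example.com"  # illustrative

der = ssl.PEM_cert_to_DER_cert(ssl.get_server_certificate((HOST, 443)))
served = hashlib.sha256(der).hexdigest()

for rdata in dns.resolver.resolve("_443._tcp." + HOST, "TLSA"):
    published = rdata.to_text().split()[-1].lower()
    print("match" if published == served else "MISMATCH", published)
```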

Mark Wooding September 3, 2010 7:52 PM

@antibozo
DNSSEC. Umm, yes. That works — as long as you trust Verisign. Not that they’d ever sign bad certificates or screw over the DNS toplevel zones. Not often, anyway.

The problem with CAs is threefold.

  • The economics are wrong. The CA is paid by the applicant, but the service the CA provides is useful primarily to the reliers.
  • Any CA can sign a certificate for any entity; you end up with a huge list of CAs, and you have to trust them /all/ for /everything/. If any of them fails, you lose.
  • Certificates attach keys to DNS labels; if you know the DNS name you’re expecting, you can be sure you have the right key. But DNS names are hopeless in reality because you have no idea whether a DNS name, as shown in your browser’s address bar, is the ‘right’ one: redirects to external payment centres are all too common, for example.

DNSSEC helps a bit with the second problem. Specifically, it reduces the problem from trusting all the CAs to trusting a logarithmic-size subset of them (specifically, the owners of the zones along the path from the root to your target name). It does nothing whatever about the other two problems.

antibozo September 3, 2010 8:34 PM

Mark,

One thing I think you overlook is that the service the CA provides is an assertion about the DNS name, by a third party who has no practical or secure means of verifying that assertion. I’m sure you realize that most certificates are verified by sending an email to a domain contact address.

Certainly there is a subset of cases where domain names attach to names of “real” institutions, and an assertion of that link is needed. This is currently provided by the very expensive extended validation certificates. But many, if not most, certificates are needed for validating entities that exist only as domain names anyway, e.g. amazon.com, google.com, etc. For those like bankofamerica.com, there does still exist a need for third party assertion of something, but the current provision (EV certificates) is ineffectual at best. How many people actually verify that a banking institution is using an EV cert?

Trusting Verisign is not necessary in general in a DNSSEC world. Verisign does have the ability to forge DNSSEC assertions in the case of the subset of TLDs they manage. But in X.509, they have the ability to forge certificates for any entity. For financial institutions in a DNSSEC world, it might actually make sense to create a new TLD managed by some consortium drawn from that industry.

Your assertion about the economics’ being wrong is sort of true for server certificates. But in any case, the economics are practically moot in a DNSSEC world since there is no longer any per-certificate cost.

Cornerstone September 3, 2010 10:05 PM

Attaching keys to domain names is a step in the right direction along with getting rid of external CAs and per year costs involved. Anyone can generate a key, and any domain owner should be able to attach it to their domain to provide secure communications without third parties. At least in that case you only have to provide secure DNS lookups and not a multitude of other pathways for infiltration.

I don’t know how far DNSSEC goes towards this and how well it’s been accepted and in use. It doesn’t appear to be well known. In any case, I’m sure there are some forces that will do what they can to prevent a more secure model from evolving.

Paeniteo September 4, 2010 5:51 AM

@Cornerstone: “Anyone can generate a key, and any domain owner should be able to attach it to their domain to provide secure communications without third parties.”

Just use self-signed certificates. SSH works exactly like this.
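
A minimal sketch of generating such a certificate with the third-party cryptography package (the common name is a placeholder; you would then distribute and pin the certificate out of band, much as you would an SSH host key):

```python
# Sketch: generate a key pair and a self-signed certificate for it.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"www.example.com")])

cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                      # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)

print(cert.public_bytes(serialization.Encoding.PEM).decode())
```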

MemVandal September 4, 2010 8:10 AM

….Using a combination of classic Certificate Authorities and DNSSEC, it will be quite hard to perform man-in-the-middle attacks like this….

-Jakob Schlyter

And who signs the cert for DNSSEC again?! It would not make it difficult, if you already are able to do MITM on SSL.

antibozo September 4, 2010 2:41 PM

Paeniteo> Just use self-signed certificates. SSH works exactly like this.

SSH solves a completely different problem, and initial host key establishment is arguably the primary weakness of SSH. If we made self-signed certificates the norm with no other way of validating them (e.g. DNSSEC) we might as well not bother with SSL at all.

MemVandal> And who signs the cert for DNSSEC again?! It would not make it difficult, if you already are able to do MITM on SSL.

The domain operator does. What is your point? I.e., how can an organization such as Etisalat, which is both an ISP and a trusted CA, and therefore in a perfect position to conduct a MITM attack, forge a DNSSEC signature for bankofamerica.com as well?

Lynn Wheeler September 4, 2010 4:26 PM

For the most part, self-signed certificates are a side-effect of the software library; they are actually just an entity-id/organization-id paired with a public key. They can be used to populate a repository of trusted public keys (with their corresponding entity-id/organization-id); effectively what is found preloaded in browsers as well as what is used by SSH.

The difference in a Certification Authority paradigm is that the public keys (frequently encoded in self-signed certificates), from some (say browser) trusted public key repository, can be used to extend trust to other public keys (by validating digital certificates which contain other public keys). The methodology has been extended to form a trust chain … where cascading public keys are used to extend trust to additional public keys (paired with their entity-id/organization-id).

In the Certification Authority paradigm, all public keys from a user’s (possibly browser) trusted public key repository are accepted as being equivalent … reducing the integrity of the overall infrastructure to that of the Certification Authority with the weakest integrity (if a weak-integrity CA incorrectly issues a certificate for some well-known organization, it will be treated the same as the correctly issued certificate from the highest-integrity CA).

Another way of looking at it, digital certificates are messages with a defined structure and purpose, that are validated using trusted public keys, that the relying party has preloaded in their repository of trusted public keys (or has been preloaded for them, in the case of browsers).
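
A minimal sketch of that validation step, i.e. checking the signed message against a public key you already trust (third-party cryptography package; the file names are hypothetical and it assumes an RSA-signed certificate):

```python
# Sketch: validate a certificate as "a signed message", using a public key
# taken from your own repository of trusted keys. File names are hypothetical.
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import padding

with open("trusted_ca.pem", "rb") as f:      # key you already trust
    ca_cert = x509.load_pem_x509_certificate(f.read())
with open("presented_cert.pem", "rb") as f:  # certificate offered to you
    server_cert = x509.load_pem_x509_certificate(f.read())

# Raises InvalidSignature if the trusted key did not sign this certificate.
ca_cert.public_key().verify(
    server_cert.signature,
    server_cert.tbs_certificate_bytes,
    padding.PKCS1v15(),                      # assumes RSA with PKCS#1 v1.5
    server_cert.signature_hash_algorithm,
)
print("certificate was signed by the trusted key")
```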

Lisa September 4, 2010 5:06 PM

When validating any certificate, it is trivial to check the certificate depth. Why not have a browser option/plugin that allows one to automatically reject chained certificates and only allow those certificates directly signed by a registered CA in the root store?

Cornerstone September 4, 2010 8:16 PM

@Lisa,
There is already. That is what Certificate Patrol does. It makes this step more visible and pops up warnings when an unexpected change occurs. Each time a new cert is accepted, it shows a warning bar at the top to remind you to actually look at who signed the cert. Once you have populated it with accepted certs, it will monitor any changes and let you know “hey, your ebay cert has changed to one issued by the Republic of Transylvania”.

The problem isn’t that some people can mitigate the SSL failures. It’s that 99.99% of the public thinks it is fully trustworthy. Last I checked only 38,000 users downloaded Certificate Patrol.

Lynn Wheeler September 4, 2010 9:59 PM

@Cornerstone
If you are going to maintain your own trusted repository … what you are really interested in is the trusted server URL/public key pair (contained in the certificate) … the certificates themselves then become redundant and superfluous, and all you really care about is whether the (server’s) public key ever changes … in which case there may be a requirement to check with some authoritative agency as to the “real” public key for that server URL.

We had been called in to consult with a small client/server startup that wanted to do payment transactions on their server; the startup had also invented this technology called SSL that they wanted to use. Part of the effort was applying SSL technology to processes involving their server. There were some number of deployment and use requirements for “safe” SSL … which were almost immediately violated.

Cornerstone September 4, 2010 11:50 PM

Agreed. That is the hard part to verify. You need an alternate secure channel to verify the key fingerprint, and you have to trust that the same key hasn’t been distributed in bad faith. For small groups it’s easy to arrange this but in public use it’s dependent on trust at some level.

What’s unclear to me is whether DNSSEC really will mitigate this by storing fingerprints in the DNS records. You still depend on verifying the key used to sign the DNS records through an alternate secure channel. And almost no one will do that.

elegie September 5, 2010 12:25 AM

For the Firefox Web browser, it may be possible to use the following procedure to specify that an SSL certificate not be trusted.

  1. Visit the URL which uses the SSL certificate (for this example, the https://www.example.com URL.)
  2. After the page loads, double-click the padlock icon in the lower-right corner of the browser window.
  3. In the Page Info window that appears, click the Security heading near the top.
  4. In the Web Site Identity section of the window, click on View Certificate.
  5. In the window that appears, click the Details heading at the top of the window.
  6. There should be a Certificate Hierarchy section in the window. In this section, one of the listed certificates should correspond to the domain for the Web site (i.e. “www.example.com” or “*.example.com”)
  7. Select the certificate that corresponds to the Web site domain and choose the Export… option at the bottom of the window. The certificate should be saved as an X.509 certificate (PEM) file.
  8. Open the Preferences window for the Firefox browser.
  9. In the Preferences window, select the Advanced heading at the top.
  10. Among the subheadings (“General”, “Network”, “Update”, “Encryption”), the Encryption subheading should be chosen.
  11. Click the View Certificates button.
  12. In the Certificate Manager window, select the Servers heading at the top.
  13. Click the Import… button and import the certificate file that was saved in step 7. The imported certificate should be added to the list.
  14. Select the imported certificate in the list and click the Edit… button. In the dialog box that opens, the “Do not trust the authenticity of this certificate” option should be selected. After making sure that this is the case, click OK.
  15. Try connecting to https://www.example.com. The browser should give an “Untrusted connection” error.

daniel September 5, 2010 12:59 AM

QUESTION.

I’m not a real professional with SSL, however I know how users react. First and foremost, users see no difference between a browser-based lock icon and a GIF image of the same lock placed on the top, or bottom, of a page. But let’s put that aside for a minute.

Is it possible that a CA, or a sub in this case, can sign *.com? If so, we have a big problem. If not, I’d love to understand why that’s different from a “*.microsoft.com” cert (again, not an SSL pro).

thanks!

Cornerstone September 5, 2010 1:29 AM

@Daniel,
Yes. The common name can be anything. At least I tried this using my (private) CA and got a valid certificate. I don’t know if browsers will accept it – they shouldn’t, that would be silly – but I have never tested it. They will, however, accept *.microsoft.com if signed by my CA, as I do exactly that for my own domain name. Whether it’s *.com or not doesn’t matter, since anyone holding an accepted CA cert can generate and cache certs on the fly. First hit, a bit slow maybe, but after that it’s cached. I think such a machine would employ hardware acceleration anyway.

antibozo September 5, 2010 1:42 AM

Cornerstone> What’s unclear to me is whether DNSSEC really will mitigate this by storing fingerprints in the DNS records. You still depend on verifying the key used to sign the DNS records through an alternate secure channel. And almost no one will do that.

Um, everyone will do that eventually, and the “alternate secure channel” is the chain of signatures descending from the DNS root, which was finally signed this past July. As DNSSEC rolls out to the edges, every resolver library will be able to check validity of all (signed) information in the DNS based on a single trust anchor.

I think you have some details wrong: DNS won’t contain fingerprints. DNS will store either a public key or a certificate (which can be self-signed). Either way, the public key can then be verified by following the chain of signatures from the DNS root trust anchor. It’s still a PKI, but there’s now only one trust anchor–the root KSK–and any key compromise along the signature chain only potentially affects the subtree below it on the DNS. A compromised DNSSEC private key for example.com cannot be used to sign public keys for bankofamerica.com.

This is markedly distinct from the X.509 PKI, where any chained CA certificate can sign certificates for any entity in the world. Fewer keys to protect means better security.

Cornerstone September 5, 2010 2:15 AM

@antibozo,
That sounds good. I hope it works out that way and we don’t find out 10 years down the road there was always a way to subvert the chain of trust. Do we have to trust that the root KSK is not, in fact, signed by a key above it?

averros September 5, 2010 3:24 AM

The very idea that you can trust “authority” is fundamentally broken (and not only in cryptography). Somehow, it still is the fundamental principle for most security designs.

The whole concept is an illustration of the fact that political religions affect minds in various ways not directly related to politics. An anarchist would go with web-of-trust 🙂

antibozo September 5, 2010 4:29 AM

Cornerstone> Do we have to trust that the root KSK is not, in fact, signed by a key above it?

By definition, as a trust anchor, the root KSK is simply trusted. It is preconfigured in the DNS resolver, analogously to the way CA certificates are preconfigured into browsers.

averros> The whole concept is an illustration to the fact that political religions affect minds in various ways not directly related to politics. An anarchist would go with web-of-trust 🙂

A web of trust is fine among a group of people who know one another. When you have millions of machines talking to one another, a web of trust becomes worth infiltrating, and it doesn’t take much infiltration to subvert it, at least in the GPG sort of model. Look how fast people can become friends of friends of friends on Facebook, with no assurance that there’s even a person there at all.

Lynn Wheeler September 5, 2010 7:00 AM

DNSSEC is somewhat of a catch-22 for the Certification Authority industry. Originally, SSL was to offset various perceived weaknesses in the DNS infrastructure. Improving the integrity of the DNS infrastructure mitigates the justification for SSL.

Also, when we were doing this stuff that is now called electronic commerce (with SSL), we had to do various walkthroughs and audits of (SSL) CAs. Basically they require a lot of identification information from an SSL applicant, which they then have to match with what is on file at DNS as to the domain owner. As a result, DNS is the REAL trust root for SSL (with the CA process somewhat obfuscating the fact). If you can’t trust key fingerprints or other details obtained from DNS … then how can the CAs trust DNS for information about the domain owner (for issuing a certificate)? CAs need improved DNS integrity for the information they need, which would also improve DNS integrity for DNSSEC information.

Daniel September 5, 2010 8:17 PM

By the way, Internet Explorer 6 (say what you want, but it was still the most popular browser until maybe 2 years ago… perhaps it still is) defaults to NOT check CRLs… So basically the whole certificate revocation process is completely useless. I know the regular 80/20 argument, I know the “whoever uses IE is stupid” stuff… but the bottom line is we’re talking about users who don’t know much, and it took forever to get them used to “when you see the lock it’s ok”.

Nick P September 5, 2010 10:49 PM

@ Lynn Wheeler

SSL was intended to provide a secure tunnel for other protocols, primarily web browsing. It was in no way designed for DNS and probably should never have been used for that. I find it amazing that our infrastructure is so brittle that the IT community chose to keep insecure/obsolete apps like DNS, FTP and Telnet for so long. Our infrastructure should be more adaptive or at least designed so adaption can occur. Properly-designed interfaces and modules along with live update (think Erlang or LISP) would be a nice start.

averros September 6, 2010 2:59 AM

antibozo – Facebook is not a good example because there’s practically no cost to adding new friends – anybody publishing anything on-line assumes that the information becomes public (if he is not terminally stupid, of course). Thus adding another friend doesn’t appreciably compromise privacy.

There is a significant body of work dealing with attacks on web-of-trust kinds of security systems (such as Sybil attacks, etc); it turns out that it is possible to achieve a high degree of assurance if most people are reasonably trustworthy (they are… it’s human nature).

Unlike centralized systems, which are brittle – and have the drawback of elevating to the critical positions exactly the kind of people who should not be trusted (just look at the prevalence of sociopaths among politicians and CEOs) – the web-of-trust models are resilient and don’t fall apart because a few people are inept or malicious.

(And, yes, I agree, GPG model is broken as designed; but that doesn’t mean that web-of-trust couldn’t be done properly).

Ian Woollard September 6, 2010 10:20 AM

Really I think most people here are modelling this wrongly.

SSL doesn’t protect against all attacks, only man-in-the-middle attacks by non-trusted users.

If you think about it, you will realise that that still massively reduces the attack space.

And the key point is what is in the stream, and who can break into it, and what can they do to it?

It’s not like you’re likely to put high-risk information into a stream to a website in outer Mongolia anyway – banks won’t be using such a dubious certificate authority anyway; or if they are, they need to move, pronto.

There’s no panacea for information security.

Lynn Wheeler September 6, 2010 10:45 AM

The original (merchant server/electronic commerce) SSL deployment requirement/assumption was that the user supplied the URL and understood the relationship between the webserver (they thought they were talking to) and the URL. Then SSL would validate that the webserver (that they were really talking to) corresponded to the URL (a countermeasure to various kinds of DNS weaknesses). This was a necessary two-part process to guarantee that the webserver a user thought they were talking to was the webserver they were actually talking to.

That was almost immediately violated when merchants discovered that SSL cut their throughput by 90-95% and dropped back to only using SSL for checkout/paying. The current paradigm has the user clicking on a checkout/pay button (on an unvalidated webserver) … which supplies the URL (not the user).

Users now tend to have little or no understanding of the relationship between the webserver they think they are talking to and the corresponding URL. As a result, SSL is reduced to just validating that the webserver that a user is talking to is whatever webserver it claims to be. One attack is a fraudulent merchant server (that hasn’t been validated) that has been able to obtain an SSL certificate for some arbitrary URL (created through some front company) … which is then used for the “pay button”.

Another attack (that has happened periodically over the years) is domain name hijacking, where an attacker is able to update domain ownership information at some arbitrary DNS operator … and then applies for an SSL certificate (with their own public key) from any SSL CA. It is relatively trivial to register a front company (for fraudulent activity), and then they can guarantee that the information on the SSL certificate application matches the domain name ownership information (on file at DNS).

Part of the DNSSEC proposals has the user registering a public key at the same time they register a domain name. Then all future communication is digitally signed and validated with the on-file public key (as a countermeasure to domain name hijacking) … which also eliminates a vulnerability for the SSL CA institutions.

In fact, SSL CAs could then request that SSL certificate applications be digitally signed … which they can (also) validate by a real-time fetch of the on-file public key from the DNS infrastructure (changing an error-prone, time-consuming and expensive identification process into an inexpensive, reliable and efficient authentication process). A catch-22 might be the rest of the world also doing real-time fetches of on-file public keys … eliminating the requirement for SSL certificates.

antibozo September 6, 2010 11:58 AM

averros> There is a significant body of work dealing with attacks on web-of-trust kinds of security systems (such as Sybil attacks, etc); it turns out that it is possible to achieve a high degree of assurance if most people are reasonably trustworthy (they are… it’s human nature).

But you see, my point is that in a web of trust of a large enough scale to support encryption of IP traffic on the Internet as a whole, the obvious attack mode is to create a botnet that appears to be agents participating in a web of trust. In that case, there’s no assurance that most signers are even people, let alone reasonably trustworthy.

Yuliy Pisetsky September 7, 2010 9:51 AM

Honestly, this might start to become less of an issue for those that truly are security conscious. Browsers have begun to do a much better job of informing the user of who the CA is. For instance, in Firefox it takes one click to get the information about who the CA is. If you’re visiting a secure site, just click and you’ll get the name of the CA right there.

There’s still an issue, though, if an ISP can start by letting connections pass through and then later start hijacking them. Perhaps the browser could present some sort of warning when the CA has changed, which should be a fairly rare occurrence.

averros September 8, 2010 2:15 AM

antibozo – what you describe (a botnet pretending to be many legit users) is known as a Sybil attack; there’s a bunch of methods for mitigating it, some of which were shown to be effective as long as a significant portion of users remain legitimate.

An article in Wikipedia would be a good starting point – it has links to some of the literature, and then the usual method of following references will get you more. :)

antibozo September 8, 2010 4:17 PM

averros> antibozo – what you describe (a botnet pretending to be many legit users) is known as a Sybil attack; there’s a bunch of methods for mitigating it, some of which were shown to be effective as long as a significant portion of users remain legitimate.

That’s a huge “so long as”. :^)

Jed December 26, 2010 6:30 PM

I’ve been using the HTTPS Everywhere plugin and noticing something interesting. I have two different plugins that check certificates for potential problems. I have a habit of using the handy search bar to do my searches. It seems that the search bar doesn’t encrypt searches. Over time I’ve noticed that certain searches cause my SSL certificate to change. It triggers less now than it used to. I’m thinking this is MITM, but I’m not sure where it’s happening. GOSIP was the last search that triggered it.
