Man-in-the-Middle Attacks Against SSL

Says Matt Blaze:

A decade ago, I observed that commercial certificate authorities protect you from anyone from whom they are unwilling to take money. That turns out to be wrong; they don’t even do that much.

Scary research by Christopher Soghoian and Sid Stamm:

Abstract: This paper introduces a new attack, the compelled certificate creation attack, in which government agencies compel a certificate authority to issue false SSL certificates that are then used by intelligence agencies to covertly intercept and hijack individuals’ secure Web-based communications. We reveal alarming evidence that suggests that this attack is in active use. Finally, we introduce a lightweight browser add-on that detects and thwarts such attacks.

Even more scary, Soghoian and Stamm found that hardware to perform this attack is being produced and sold:

At a recent wiretapping convention, however, security researcher Chris Soghoian discovered that a small company was marketing internet spying boxes to the feds. The boxes were designed to intercept those communications—without breaking the encryption—by using forged security certificates, instead of the real ones that websites use to verify secure connections. To use the appliance, the government would need to acquire a forged certificate from any one of more than 100 trusted Certificate Authorities.

[…]

The company in question is known as Packet Forensics…. According to the flyer: “Users have the ability to import a copy of any legitimate key they obtain (potentially by court order) or they can generate ‘look-alike’ keys designed to give the subject a false sense of confidence in its authenticity.” The product is recommended to government investigators, saying “IP communication dictates the need to examine encrypted traffic at will.” And, “Your investigative staff will collect its best evidence while users are lulled into a false sense of security afforded by web, e-mail or VOIP encryption.”

Matt Blaze has the best analysis. Read his whole commentary; this is just the ending:

It’s worth pointing out that, from the perspective of a law enforcement or intelligence agency, this sort of surveillance is far from ideal. A central requirement for most government wiretapping (mandated, for example, in the CALEA standards for telephone interception) is that surveillance be undetectable. But issuing a bogus web certificate carries with it the risk of detection by the target, either in real-time or after the fact, especially if it’s for a web site already visited. Although current browsers don’t ordinarily detect unusual or suspiciously changed certificates, there’s no fundamental reason they couldn’t (and the Soghoian/Stamm paper proposes a Firefox plugin to do just that). In any case, there’s no reliable way for the wiretapper to know in advance whether the target will be alerted by a browser that scrutinizes new certificates.

Also, it’s not clear how web interception would be particularly useful for many of the most common law enforcement investigative scenarios. If a suspect is buying books or making hotel reservations online, it’s usually a simple (and legally relatively uncomplicated) matter to just ask the vendor about the transaction, no wiretapping required. This suggests that these products may be aimed less at law enforcement than at national intelligence agencies, who might be reluctant (or unable) to obtain overt cooperation from web site operators (who may be located abroad).

Posted on April 12, 2010 at 1:32 PM

Comments

jl April 12, 2010 2:03 PM

So if forged certificates are easy to detect, then won’t it be quite easy to (rightfully) trash the reputation of any “trusted” certificate authority that got in bed with national intelligence agencies?

Jonathan Lundell April 12, 2010 2:24 PM

It’s hard to see how this would be limited to law enforcement agencies. Selling copies of the coerced certificates sounds like a lucrative proposition for some underpaid tech in country Abc.

lazlo April 12, 2010 2:30 PM

One particularly insidious bit of this is that all it takes is for one of the “universally trusted” CAs to violate that trust for it to work against any service. Some among the more paranoid (and technically inclined) people I know have actually looked at the list of CAs that their browser trusts by default. I don’t know of anyone who has actually deleted anything from that list.

Just a Bulgarian April 12, 2010 2:35 PM

Hmmm, we have one of our (approved by the state) CAs’ root certs in IE’s trusted CA store. It is a private company. Since Bulgaria is probably the most corrupt country in the EU, that sounds somehow scary 🙂

Brian April 12, 2010 2:48 PM

When I first heard about this, I dug into my certificate store and started changing certs to “Always ask”. It’s hardly ideal, but it’s an acceptable interim solution for me.

The real question is, how do you mitigate this type of attack if you have never been to the site before?

Companies could theoretically add this to their firewalls without digging into the connection (by simply checking the server’s returned cert every time and caching it for the clients).

Scary in any case.

Brandioch Conner April 12, 2010 2:50 PM

There’s something that doesn’t read right there.

“This paper introduces a new attack, the compelled certificate creation attack, in which government agencies compel a certificate authority to issue false SSL certificates that are then used by intelligence agencies to covertly intercept and hijack individuals’ secure Web-based communications.”

So it is:
a certificate
issued by a legitimate CA.

But then …

“The boxes were designed to intercept those communications — without breaking the encryption — by using forged security certificates, instead of the real ones that websites use to verify secure connections.”

How so “forged” instead of “real”?

Or are we talking about a certificate for
http://www.bank0famer1ca.com?

Francis Litterio April 12, 2010 3:00 PM

Companies such as Blue Coat sell tools that other companies can install in their routing infrastructure that intercept (MITM-style) outbound SSL connections, substituting a self-signed cert for the cert received from the remote site (though it includes the public key of the remote site). This requires the company to install the Blue Coat CA cert into employees’ browsers, but IT typically has the access levels needed to do that (IE lets the domain administrator do it remotely).

To quote from a Blue Coat whitepaper: “Blue Coat can selectively allow some users to use Skype and disallow others by locking down ports and then allowing only legitimate SSL traffic on port 443. Blue Coat’s ability to proxy HTTPS makes this very secure approach possible.”

John F April 12, 2010 3:11 PM

Brandioch,

The appliance generates a signing certificate, which is signed by a real CA.

After that, the appliance generates (forges) per-site certs on the fly and signs them with the (signed) signing cert above.

Because the appliance’s signing cert has been signed by a trusted CA, the browser sees a valid certification path (trusted root -> signed signing cert -> signed host cert).
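
To make the mechanics concrete, here is a minimal sketch using Python’s cryptography package (the tooling and hostnames are my choices, not anything from the comment) that builds exactly the path John F describes:

```python
# Minimal sketch of the chain John F describes. A leaf cert for ANY
# hostname validates as long as its issuer chains up to a root the
# browser already trusts.
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID


def new_key():
    return rsa.generate_private_key(public_exponent=65537, key_size=2048)


def make_cert(subject_cn, issuer_cn, subject_key, signing_key, is_ca):
    now = datetime.datetime.utcnow()
    return (
        x509.CertificateBuilder()
        .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, subject_cn)]))
        .issuer_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, issuer_cn)]))
        .public_key(subject_key.public_key())
        .serial_number(x509.random_serial_number())
        .not_valid_before(now)
        .not_valid_after(now + datetime.timedelta(days=365))
        .add_extension(x509.BasicConstraints(ca=is_ca, path_length=None), critical=True)
        .sign(signing_key, hashes.SHA256())
    )


root_key, appliance_key, site_key = new_key(), new_key(), new_key()
# 1. The trusted root, already present in the victim's browser store.
root = make_cert("Trusted Root CA", "Trusted Root CA", root_key, root_key, True)
# 2. The appliance's signing cert, issued by the real CA (the compelled step).
appliance_ca = make_cert("Appliance CA", "Trusted Root CA", appliance_key, root_key, True)
# 3. A per-site cert forged on the fly for whatever host the victim visits.
leaf = make_cert("www.example.com", "Appliance CA", site_key, appliance_key, False)
# The browser sees: root -> appliance_ca -> leaf, a formally valid path.
```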

Tim April 12, 2010 3:14 PM

Brandioch: They’re real certificates, but ‘forged’ in the sense that they aren’t the same ones used by the actual bankofamerica.com.

Bring on DNSSEC!

kevinm April 12, 2010 4:47 PM

That is why we have our own PKI and do not trust external CAs. Our monitoring checks our web servers once a month so we know when the certs expire, using this method: http://www.eriugena.org/blog/?p=50
Perhaps I should ask them to check some external servers more often, and also verify that the serial numbers of the certs do not change.
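
As a rough illustration (the linked method is not reproduced here, so the details below are assumptions), such a check might look like this in Python:

```python
# Hypothetical sketch: pull the live cert, note its serial and expiry,
# and alarm if the serial ever changes between runs.
import socket
import ssl

known_serials = {}  # host -> serial seen on a previous run


def cert_serial_and_expiry(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return cert["serialNumber"], ssl.cert_time_to_seconds(cert["notAfter"])


def check(host):
    serial, expiry = cert_serial_and_expiry(host)
    if host in known_serials and known_serials[host] != serial:
        print(f"WARNING: serial for {host} changed -- possible MITM (or rotation)")
    known_serials[host] = serial
    return expiry  # seconds since epoch; compare against time.time()
```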

Security Novice April 12, 2010 4:57 PM

Don’t know about that Security Patrol Add-on for Mozilla – the date arithmetic is odd (cert issue dates of 101/11/2010 that expire in NaN days).

larry seltzer April 12, 2010 5:01 PM

I really don’t get why people are worked up about this. Obviously if the trusted CA can’t be trusted then there’s a trust issue. Isn’t this inherent in the system?

Another part of the article just deals with automation of techniques described last year by Moxie Marlinspike at BlackHat. These techniques all involved some ancillary weakness in the system, such as an ARP-poisoned router.

With respect to the state coercion, the use of terms like “forged” certificates is incorrect. What’s happening is that the CA is betraying the trust relationship with the customer by sharing private keys. The authors say flat-out that it’s questionable whether this would be legal in the US. If it’s legal for a (for example) Chinese government order to a Chinese CA, well who’s surprised at that?

So the only issue I’m left wondering about is how the important vendors, like Microsoft and Mozilla, determine who gets into which list of trusted root certificates. But even this was just as legitimate a question before.

There’s no news here at all.

James April 12, 2010 5:28 PM

I don’t believe the scenario put forward by the authors. The Chinese dissident already established a chain of trust with the other people; she got instructions from them. Why go around this trust to the CAs? That is, why not use some other method that depends on keys that cannot be forged by others, such as real encryption? This scenario seems useless to me once the trust has been established. Even more, if she met with the actual people then she should use some symmetric encryption with a key she was given.

The real scenario would be what we have now: you go to a website you’ve never visited and try to decide whether to trust it, or you contact a website where you have to go through the CA to authenticate and communicate. That’s hard (unless your bank has some real security where they’ve generated private keys and such and no one else outside the organization can know).

Peter April 12, 2010 5:33 PM

How much collaboration do they need from the CAs?
I don’t know exactly how things work, but I bet some intelligence agency could get what it needs to sign its own certs anytime it wants, or become a CA itself.

Daniel April 12, 2010 5:46 PM

Larry. That’s correct. I don’t understand why this is “scary” unless one accepts that the current environment is scary.

I posted a comment on the Financial Security blog a week or so ago noting that the idea of “trust but verify” is an oxymoron. Either you trust or you don’t; it’s a binary proposition.

Tim, could you explain how DNSSEC is going to stop this type of thing? IIRC it’s coming to the USA root in a few months, but I have the vague impression it’s window dressing. After all, isn’t a trust anchor just as easily violated as a certificate?

Chris Hills April 12, 2010 6:09 PM

If you are worried about this attack I highly recommend the Perspectives extension for Firefox. It checks the certificate presented against a database filled by sensors distributed through the internet. If the certificate does not match you are warned.

http://www.cs.cmu.edu/~perspectives/firefox.html

Personally I do not believe in the current hierarchical trust model, since it is so easy to obtain “trusted” certificates. I would sooner trust those certificates that I can verify directly with the site operator out of band.

antibozo April 12, 2010 6:12 PM

Daniel,

DNSSEC enables implementation of an SSL-like strategy (e.g. putting self-signed certs in the DNS where they can be authenticated by DNSSEC extensions) that doesn’t involve CAs. While DNSSEC keys might be compromised and used to sign forged certificates in the DNS, these forged certificates are naturally constrained to the hierarchy of the DNS beneath the compromised set of keys. This is in contrast to CA-based signing, where any public CA can sign a cert for any domain. Therefore anyone with either an intermediate CA cert from a public CA or simply a parallel “compelled” certificate (as in the Soghoian paper), and anyone who has acquired the private key for any public CA’s root or intermediate CA certificate, can generate new certs that are valid for any domain. This has always been broken in SSL–there’s no existing mechanism for constraining the scope of a CA certificate (root or intermediate) to a subzone of the DNS.
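
A toy sketch of the scoping contrast antibozo draws (the function names are illustrative only):

```python
# A DNSSEC key may only vouch for names at or below its zone, while
# nothing in classic SSL constrains which names a trusted CA may
# vouch for.
def dnssec_key_may_vouch(zone: str, name: str) -> bool:
    zone, name = zone.rstrip(".").lower(), name.rstrip(".").lower()
    return zone == "" or name == zone or name.endswith("." + zone)


def ca_may_vouch(ca_name: str, name: str) -> bool:
    return True  # any public CA can sign for any domain -- the broken part


assert dnssec_key_may_vouch("com", "www.example.com")
assert not dnssec_key_may_vouch("example.net", "www.example.com")
```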

If I understand the scenario described here correctly, an analogous scenario might be the DNS root maintainers’ signing an additional, government-provided, KSK for, say, .com, and including that KSK in the DNS. The eavesdropper with the private part of that KSK could then construct a parallel DNSSEC chain to validate a cert for http://www.example.com, though I believe this attack would be much more difficult if it is in fact possible.

David Schwartz April 12, 2010 7:26 PM

A certificate is kind of like a passport. My passport vouches that the guy in the picture has my name, my birth date, my citizenship, and so on. My certificate vouches that the legitimate owner of the common name in the certificate generated the key in the certificate.

These certificates are “forged” in the sense that they make a false claim but are real in the sense that they were issued by a real authority. It’s as if the US government issued a passport with your name, birthdate, and citizenship but my picture.

I could then use this passport to convince people that I was you.

David Schwartz April 12, 2010 7:30 PM

To respond to Chris Hills, I strongly recommend against the perspectives extension. This extension can hide browser warnings if the certificate is “known valid”, but I can find no documentation of their policies. You would have to completely trust their network of notaries.

Tom T. April 12, 2010 8:40 PM

@ Steve Schultze and Anyone Else Who Knows:

“@lazlo: and even if you do delete one of the root CA’s (actually, you have to change the trust bits because in Mozilla at least deleting doesn’t really make it go away), that entity can always get a secret subordinate CA under another one of the many other root CAs… which your browser will silently trust…”

After reading of this issue in Steve Gibson’s podcast of April 8, 2010, http://www.grc.com/sn/sn-243.txt , I started deleting CAs from my browser, and found exactly as reported: They reappeared. Mozilla Help says you can delete them, but doesn’t address the they-come-back issue. You mentioned “changing the bits”. Please be more specific: what bits must be changed, where, through what path and interface, etc?

If possible, please include instructions for both Fx 2.x and 3.x. Thanks in advance to you or to anyone else who can post this information.

(… and now I’m wondering why Mozilla makes them so persistent and undeletable…)

Tom T. April 12, 2010 9:25 PM

@ Security Novice: The “101” seems to be an ordinal date, often referred to as a Julian date. I.e., “101” is the 101st day of the year, or, for any non-leap year, April 11.

Haven’t figured out the “NaN” part yet. Might be a glitch in the sw.

Andrew Philips April 13, 2010 2:06 AM

The SSL standard could be tweaked to stop most MITM attacks – at least those that wish to hijack session creation with a user previously known to the server who is trying to create a new secure logon session. The problem is that the client-server connection is protected by the server cert only, while the password (hash) for the logon protocol is sent through the encrypted data channel but is not used to create the encrypted channel.

Authentication should result in a key used to protect the channel. By separating server authentication (the browser proving the cert is good) from client authentication (the user proving she knows the password), we’ve left the user open to this attack.

If the connection establishment protocol were modified to mix in the user’s password at the connection layer, a MITM attack would be hard — the owner of the “forged” cert would not have the user’s password/verifier to answer the client properly, and the client/browser would detect this as a failure to establish a secure session.

The DH-EKE patent has expired; it should be rolled into the SSL standard.
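
A toy illustration of the idea of mixing the password into the channel key follows. This is emphatically not DH-EKE or any real PAKE, and the parameters are deliberately insecure; it only shows the shape of the approach:

```python
# Mix a password verifier into the channel key so a MITM without it
# cannot complete the handshake. Toy parameters only.
import hashlib
import secrets

P, G = 0xFFFFFFFB, 5  # toy DH group; a real system uses a 2048-bit+ group


def dh_keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)


a_priv, a_pub = dh_keypair()    # client
b_priv, b_pub = dh_keypair()    # server
shared = pow(b_pub, a_priv, P)  # == pow(a_pub, b_priv, P) on the other side

verifier = hashlib.sha256(b"user password").digest()
channel_key = hashlib.sha256(shared.to_bytes(4, "big") + verifier).digest()
# A MITM who forged the cert but lacks the verifier derives a different
# channel_key, so the session fails to establish -- which is the point above.
```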

Clive Robinson April 13, 2010 3:53 AM

This SSL issue is but a small part of a number of larger issues.

For some reason as humans, when we come up against an issue of “how do we verify an entity”, we adopt a “bureaucratic hierarchical trust model” as opposed to a “social associative trust model”.

Which is odd, as humans generally use associative models in our everyday lives, even in the workplace. And the likes of banks etc. used to as well, until very recent times, in the form of “references”, and many employers still “take them up” before offering employment.

It is even odder when you consider that we know all hierarchical systems have significant “upstream” issues, yet we continue to abdicate responsibility to them.

And I suspect it is this “abdication of responsibility” which is the core issue, not just in the use of systems but in the very design of them.

Perhaps it is time we considered “stop using CAs” and started looking for an alternative. Say, using self-signed certs and establishing multiple points of signed references that form intermeshed chains, or a web of trust.

However, we know from social systems that “webs of trust” have their own issues to do with our inherent “wanting to trust others around us”.

Which again brings us back to the issue of abdicating responsibility.

From a legal perspective it is better as an entity to have a single line of trust; thus if it should go wrong you only have one other party to blame to try to get damages. The logic of which boils down to “never trust your friends”…

Time to mutter “pros and cons, pros and cons” and scratch the old wooden block 😉

Then there is another “elephant in the room” issue to do with trust and all our online communications, over and above issues of certificates.

It also is a hierarchical issue of trust, and it is that you have only one “upstream” point of reference through which everything goes.

This is again something quite alien to humans; as social creatures we so routinely use multiple points of contact to establish multiple viewpoints or channels that I’m surprised we blithely accept “keyhole” access to the information world. We don’t in almost any other area of human endeavor.

This is because we know from (bitter) experience that the dictators and other tyrants we fear generally, as a first step, try to seize control of all information sources. This is so there is effectively only one “viewpoint” being impressed upon the “people”.

For our own peace of mind we should take responsibility for establishing multiple channels of information, and also for establishing multiple associative views when trying to establish the identity and reliability of an online entity.

However, this in itself has issues, as has been seen with the likes of Google and social network sites “slurping up” personal information and making it available to others…

Sometimes our “need to participate” overrides any caution we might otherwise have, especially if those we know are doing it…

Mark R April 13, 2010 7:50 AM

The approach of alerting the user when the cert changes doesn’t really solve the problem. When you are alerted that the cert has changed, you’re basically back in the same situation you were in the first time you visited the site – you have to decide whether or not to trust a cert you’ve never seen before. Sure, you could “crowdsource” the problem and check whether others are seeing the same new cert, but if it’s government actors we’re talking about, that might not help, either.
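
To make that concrete, here is a minimal trust-on-first-use pin store sketched in Python (file name and tooling are assumptions); note how both the first visit and the moment right after a warning leave the user deciding blind:

```python
# Minimal TOFU pin store: a changed fingerprint warns, but on first
# contact -- and right after any warning -- there is no basis to decide.
import hashlib
import json
import socket
import ssl

PIN_FILE = "pins.json"  # hypothetical local store


def fingerprint(host, port=443):
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(pem.encode()).hexdigest()


def check(host):
    try:
        with open(PIN_FILE) as f:
            pins = json.load(f)
    except FileNotFoundError:
        pins = {}
    fp = fingerprint(host)
    if host not in pins:
        print(f"{host}: first visit -- no basis for a trust decision")
    elif pins[host] != fp:
        print(f"{host}: certificate CHANGED -- but is the new one legitimate?")
    pins[host] = fp
    with open(PIN_FILE, "w") as f:
        json.dump(pins, f)
```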

Algol April 13, 2010 8:03 AM

Certainly a scary scenario. However, I worry more about the current state of affairs, in which end users have to make the decisions. Most of them struggle to tell good certificates from bad ones. Now we have to add another kind of forgery, which to them will look just like the real thing.

enterprise MITM April 13, 2010 8:45 AM

It is even simpler in a corporate environment, where “trusted” root certificates can be installed by a domain administrator on the fly.

For a MITM, the administrator simply has to set up an intercepting proxy which exchanges the original server-side certificate for a forged certificate signed with the corporate root certificate.

Very few users will actually check the certificate tree as long as the browser shows a green padlock icon.

Asgeir S. Nilsen April 13, 2010 8:46 AM

This reinforces my impression that the TLS trust model using certificates from an ultimately trusted unknown is broken.

The two issues certificates try to solve, remote host authenticity and session key exchange, should be separated.

The former is best handed to DNS, with extensions like DNSSEC.

The latter could be solved either by reversing the initial key exchange (having the browser generate a key pair and present it to the server), or by using pre-shared secrets, which TLS already supports.

Francis Litterio April 13, 2010 9:27 AM

@Asgeir: Separating the authenticity check from the key exchange won’t help. The key exchange is protected from a MITM attack precisely because the certificate tells us the party at the other end of the connection is who we think they are. The weak link is the authentication, not the key exchange. If the authentication can be subverted, no key exchange algorithm can detect a MITM attack.

VV April 13, 2010 10:15 AM

@antibozo: Isn’t DNSSEC itself vulnerable to MITM attack?
As far as I understand, root authority’s key is obtained via a non-authenticated request, so a MITM could forge the whole authentication chain.

@Peter: It seems even collaboration from CAs is not strictly necessary. There are insecure CA certificates out there that could probably be used to create a fake intermediate CA.
For instance, when the MD5 collision was demonstrated, Verisign stopped issuing MD5 certificates but didn’t revoke existing ones. And in the “standard” set of trusted CAs there is a Verisign root certificate with an MD2 signature (valid until 2028).
I wonder if some time in the future MD2 support will be dropped completely, and MD2 certificates will get “trusted by default”…

Andrew Philips April 13, 2010 11:08 AM

One other twist: I recall several years ago when RSA revoked the Chaos Computer Club’s certificate. The German hacking organization had acquired one and was using it to sign malicious applets. At the time, I questioned RSA’s move. So what if they produce malicious code — don’t accept certificates from people you don’t trust! It effectively put RSA in the position of policing the certificate owners. But that’s not what a CA does! Its purpose is to create a system that securely identifies the holder of the cert. It’s immaterial whether that individual is a saint or a sinner.

Most people want magic security pixie dust sprinkled on the computer, they don’t care whether it works.

@Clive: well said. However, it’s not surprising that we defer to the hierarchical model. Pre-agrarian hunter/gatherer tribes work well up until around 150 people, then they split. The rise of civilization with farming required a new, different (and hierarchical) model pasted onto the older model. We modern humans respond to this precisely so we can enjoy the benefits of our culture. We give up “knowing” the truth about our leaders for improved quality of life.

Not anonymous April 13, 2010 11:16 AM

If we didn’t use a hierarchical model, we could handle certificate changes by signing new certificates with the old ones. If you regularly visited the same site, you would see the change while the first certificate was still valid, and that would give you good confidence that the new certificate was bona fide.
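
A sketch of that continuity check, using Python’s cryptography package and assuming RSA/PKCS#1 v1.5 signatures (other key types would need the matching verify call):

```python
# Accept a replacement certificate only if it is signed by the key of
# the certificate it replaces.
from cryptography import x509
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import padding


def is_legitimate_successor(old_pem: bytes, new_pem: bytes) -> bool:
    old = x509.load_pem_x509_certificate(old_pem)
    new = x509.load_pem_x509_certificate(new_pem)
    try:
        old.public_key().verify(
            new.signature,
            new.tbs_certificate_bytes,
            padding.PKCS1v15(),
            new.signature_hash_algorithm,
        )
        return True
    except InvalidSignature:
        return False
```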

@lazlo

I delete all of the root certificates in my browser. So there is at least one person doing that.

kangaroo April 13, 2010 12:45 PM

@Clive: Which is odd, as humans generaly use associative models in our every day lives even in the work place.

It’s not odd. You’re just misgeneralizing. “Humans” use associative, non-hierarchical models. Bureaucracies use centralized models, despite their deficiencies, precisely because they are centralized.

GPG keys are quite usable and quite natural. Why isn’t that model used? Because organizations love the central approach where they control the keys, can charge for them, etc, so they build infrastructure preferring them. That’s not “human” — that’s a particular political model.

So “humans” end up using the centralized approach not because it’s natural — but because it’s been engineered into the systems they use. You can choose gpg, say, but it shrinks your pool of partners merely because most software makes it painful — the big corps don’t put it into their email and such programs.

Andrew: We modern humans respond to this precisely so we can enjoy the benefits of our culture.

We never “chose” this system. If you look at the rise of agriculture & the city state, at every point along the way our “quality of life” went down — in health, longevity, freedom, variety of work… It’s just that 1000 starving farmers can kick the crap out of 100 healthy nomads, particularly when the former can develop more advanced technologies. Then you have the effect of propaganda — everybody hears how exciting and cool the city is, the young people head out there, and then they end up impoverished, and unhealthy. But it’s exciting — at least until their lives are chewed up. We’re not very good at comparing long vs. short term benefits.

Even today, “most” human beings lead shorter and more brutal lives than our ancestors 15kya, since most human beings aren’t Americans, Europeans or the upper crust of the rest of the world.

antibozo April 13, 2010 1:02 PM

VV> As far as I understand, root authority’s key is obtained via a non-authenticated request, so a MITM could forge the whole authentication chain.

No, ultimately the root key is preconfigured into DNS clients and recursive servers.

There is an interim period until the root is signed, during which trust anchors for various points in the DNS are similarly preconfigured. Once the root is signed, these can be eliminated, though I’m not sure what happens if, say, the root is preconfigured, .com is unsigned, and example.com is signed. In practice, I think example.com would have to exist as a preconfigured trust anchor until .com is signed.

There is also lookaside validation as another interim measure.

And there are last mile issues while client system resolvers are not DNSSEC-aware and rely on their recursive servers to do validation for them. But eventually every client will be able to validate DNSSEC answers from a single preconfigured root key.

Francis Litterio April 13, 2010 2:10 PM

@Steve Schultze: In Firefox 3.6.3, the CNNIC CA cert appears only under “CNNIC” and not under “Entrust, Inc.” or “Entrust.net”. Perhaps the Mozilla folks removed the one shown in the article to which I linked.

Steve M. April 13, 2010 2:37 PM

Tom T: you noted that CA certs can’t be permanently deleted from Mozilla products. This is a “feature” of NSS, described in the following self-quotation from last year. I have built the clean replacement libraries (*nix and Windows), but it’s a pain:

NSS Trickery

Like other comparable products, Firefox and Thunderbird ship with a wide assortment of pre-installed CA certificates. Not only the usual ones from Verisign, Equifax, and the like, but also ones from some obscure entities like “Staat der Nederlanden”, “Camerfirma Chambers of Commerce”, and “TURKTRUST Certificate Services”.

The DoD PKI policy mandates that CA trusted keystores should only contain the CA certs specifically authorized by DISA. This makes sense if you think about it, as a desktop system in the Pentagon shouldn’t be trusting CA certs from foreign CAs.

Fixing the keystore should be easy: we just use the handy-dandy GUI-based certificate management tool to remove the unauthorized certs, right? Not so. If you try that, you find that after tediously clicky-clicking your way through and deleting 100-plus certificates, they initially appear to be gone. But as soon as you restart Firefox (or Thunderbird, etc.) they all reappear. What is happening is that the NSS shared library libnssckbi.so automatically re-adds the bundled CA certs to the disk-resident keystore (the cert8.db file).

Now this is downright annoying. Presumably the Mozilla Foundation is being paid for the inclusion of the bundled CA certs and wants to discourage their removal in order to boost the commercial value of that placement, but as with the DoD policy there are legitimate reasons why end users may want to remove bundled certificates.

There appears to be no alternative to complete replacement of the libnssckbi.so library. The bundled certs are defined in the file mozilla/security/nss/lib/ckfw/builtins/certdata.txt in the source tree. The Mozilla-specific build process is annoyingly awkward, and differs between Linux/Unix and Windows.

It should be noted that we have essentially the same problem in a different form with Microsoft Windows, as routine Microsoft-issued patches tend to reinsert CA certificates. As we don’t have the option of modifying the software, culling the unwanted CA certs requires constant vigilance.

Roger April 13, 2010 3:44 PM

Are CAs not selling certificates by claiming they prevent third-party decryption? Have any issued disclaimers clarifying that sessions secured by their certs may not be decryptable solely by the customer and remote browser? If so, isn’t this a breach of contract? If it is, I look forward to becoming a party to any class action suit against corrupt CAs with whom I’ve done business.

The question is, who is responsible for this area of law, i.e., who would one complain to? The FCC, DOC, FTC?

Daniel April 13, 2010 3:52 PM

Clive.

Kangaroo is correct. The problem with associative models is that they don’t scale well. They are good for small communities but not for modern states whose primary existence is based upon atomistic disassociation.

This has caused a lot of problems beyond just security. It’s a problem with jury selection in the legal system as well. One of the reasons that more than 80% of crimes in America result in plea bargains is because the jury system is based upon an associative system of trust and it turns out that this system is both expensive and inefficient when applied to the demands of modern living.

Clive Robinson April 13, 2010 7:10 PM

@ Kangaroo,

“It’s not odd. You’re just misgeneralizing.”

Hmmm, it’s a viewpoint issue, but I phrased it that way to make a point, which you appear to agree with,

“”Humans” use associative, non-hierarchical models. Bureaucracies use centralized models, despite their deficiencies, precisely because they are centralized.”

That is there is a difference between what we do as “social individuals” and “worker drones”.

“Because organizations love the central approach…”

Which is also known as “The King/God game”.

That is, one or more individuals apply “might is right” to get people subservient to them, and then by initial force of arms, then tradition/religion/education, maintain the fiction that they are “better” than all others.

These others should thus supplicate and be grateful for the illusion presented.

Which as you note is,

“That’s not “human” — that’s a particular political model.”

Which in theory we can change, but can we…

The first step is to discuss the alternatives to the self enforcing bureaucratic cry of “somebody has to be in charge”. And thus have the argument to follow up to the charge of “anarchy” when you make the sensible reply to the bureaucrats that “there is no reason why anybody has to be in charge”.

As you note,

“GPG keys are quite usable and quite natural. Why isn’t that model used? Because organizations love the central approach where they control the keys, can charge for them,”

And therein lies the key to the problem: it is a “business model” based on a cartel viewpoint.

Thus the solution is to move the economic tipping point against the cartel viewpoint, and thus kill the CA marketplace (not before time).

Which brings me onto,

@ Roger’s point,

“Are CAs not selling certificates by claiming they prevent third party decryption? Have any issued disclaimers clarifying…”

Sadly most CAs say in the small print that they carry no liability for the certificates they issue.

Thus all a certificate is, is an “entry ticket” for the entity that purchases it.

That is, they are paying to play in a cartel market created by a web browser company (Netscape) that other companies leapt on (Microsoft et al) and others are still supporting, even though they claim to be “Open”.

The reason that these entities get away with it is a lack of “user education” disguised as “user convenience”.

And the result of this collusion to create a cartel market where one should not exist is the very problem with SSL certs we are now discussing in this thread…

That is, the “supposed need” has given rise to a false economy that actually causes more harm than good.

Which brings me onto,

@ Daniel’s point,

“This has caused a lot of problems beyond just security. It’s a problem with jury selection in legal system as well.”

Yes it has, and unfortunately the bureaucrats’ solution is most definitely the wrong one (as usual).

The common and incorrect viewpoint (which the state encourages), and which Daniel notes, is,

“One of the reasons that more than 80% of crimes in America result in plea bargains is because the jury system is based upon an associative system of trust and it turns out that this system is both expensive and inefficient when applied to the demands of modern living.”

It’s not the reason plea bargaining is so prevalent. It is the imbalance of the accused’s position with respect to those who are charged with enforcement by the state. The accused is not allowed in any way to lie to any LEO without suffering extreme censure. However, LEOs are actively encouraged to lie to the accused in order to get a conviction. The reason being that the state has little or no interest in real justice, just in the two great political mantras of “being seen to be hard on crime” and “being seen to be tough on inefficiency” (both of which are actually complete falsehoods).

That is, the state deliberately misrepresents the cost to society for political gain, which is one of the reasons the US has so many prisoners, something that upsets the likes of Amnesty International and other pro-justice organisations.

As for Daniel’s point,

“The problem with associative models is that they don’t scale well.”

In most cases where it’s been properly investigated, it is a “myth”.

For instance, take the 150 limit for hunter-gatherer groups that gets widely touted.

It has absolutely nothing to do with the societal aspects of the group, but everything to do with resource density in any given area.

That is, the amount of energy expended to gather resources together in one place goes up with population density. For an unaided human this limits the practical size of a societal group. As man developed ways (via dogs, cattle and horses) to improve on his natural limits, the size of societal groups grew.

At each “break point” there is a non-societal limit, such as that of cities at 40,000 prior to the 1800s due to bacterial pathogens. That one was broken by the tea-drinking UK, China and Japan, due to the natural antibiotic effect of tea, and latterly by the recognition of the need for efficient removal of waste. Currently our cities are enabled by medical intervention in the food supply, but limited by the cost of equality; that is, we are finally starting, at 1,000,000+ population density, to see some societal reasons why we cannot progress further. However, these reasons have their root cause in our political and economic viewpoints, and thus society can choose to remove these constraints if it so desires (and the self-appointed Kings/Gods allow it, or get overthrown).

David Bullock April 13, 2010 10:29 PM

It’s a bit late to be adding to this thread, but as a security novice, it seems that part of the “untrustworthy hierarchy” problem in PKI is that the SSL server presents a certificate which contains the claim “I am site X, and Jim trusts me”. This is an obstacle to a web-of-trust model, because it is wasteful of bandwidth to say “I am site X, and Jim, Mary, and 105,039 other people all trust me” on each presentation of the certificate. Wouldn’t it be ‘better’ (from a web-of-trust scale perspective) if the client could inquire of Jim “can I trust site X?”? Presumably there are trade-offs (one that comes to mind is that Jim might not know, or might be unavailable at the time we want to know). Are there ways around these other than a hierarchical signing model? Where can I go read about them? 🙂

Tom T. April 13, 2010 10:56 PM

@ Francis Litterio:

The situation apparently is not so bad as I thought. Although the certs come back in the list, they come back with all of the trust checkboxes unchecked, as described in your link and photo. So presumably, Firefox will no longer trust them. Thank you very much for the link.

@ Steve M:

You have built the clean replacement libraries? Could you post them for download somewhere? It would be a great service to all of us.

Also, there are still some diehards using Fx 2.0.0.20, if you can provide the compatible library for that.

It’s hard to understand why MZ won’t allow users to choose which certs they wish to include or delete from the list. If you can publish an empty library to which we could add only our own chosen certs, it won’t solve this problem completely, but it would be a huge first step, and you’d be awarded much good Karma.

Steve Schultze April 14, 2010 8:22 AM

@Francis Litterio: No, nothing has changed. Mozilla does not ship with the CNNIC Entrust Sub-CA. Instead, when you visit a site, the TLS session can send you intermediate certs that are silently used and cached in your cert store (as long as they chain to a trusted root)… that’s when you can go back to the list and see it. Frightening, eh?

David April 14, 2010 9:39 AM

@Daniel: “Trust but verify” doesn’t work if trust is binary. It does work if there’s only a certain level of trust. I have a certain amount of trust of places where I use my credit card, but I still look at the monthly statements.

Clive Robinson April 14, 2010 11:34 AM

@ David Bullock,

“Wouldn’t it be ‘better’ (from a web-of-trust scale perspective) if the client could inquire of Jim “can I trust site X?”? Presumably there are trade-offs (one that comes to mind is, that Jim might not know, or might be unavailable at the time we want to know).”

The first problem is what you mean by “can I trust site X?”. What you mean and what Jim thinks you mean may be poles apart. This is the current problem with CAs: all their issued certificates really mean is “the cheque did not bounce”, nothing more. There is no guarantee offered by the CA that the certificate holder even filled the forms in, let alone that they filled them in honestly, and absolutely no guarantee that once a certificate has been issued the entity is going to behave honestly, let alone anything else.

Also you forgot that Jim might not want to tell you what he does or does not think about site X…

As Jim may not wish to tell you who he associates with, and thus trusts / does not trust, for a whole heap of legitimate reasons.

However, all that being said, as unlikely as it seems, you are probably no more than seven steps/links away from direct knowledge of any entity on the Internet anywhere in the world. And probably no more than two for most major sites etc.

Thus establishing two or more independent chains of links to any chosen entity is the (not so) minor task of joining the links together to make the chains with no crosslinks.

The problem is getting co-operation from each set of link nodes to scan their “known (and trusted) associates” list.

“Are there ways around these other than a hierarchical signing model?”

Yes, there are a number of ways other than trust webs, one of which is “trade reference posts” where people can post if they have had a good or bad experience. These work similarly to the eBay-style system of rating. However, as we know, this has flaws as well.

Likewise online virtual life systems, where people have been blackmailed upon creating and releasing their alter egos.

At the end of the day it appears that there is no system that cannot be abused in one way or another, and that the old-fashioned “build your reputation” process is still one of the better ways. However this flies in the face of the “instant gratification” lifestyle that people are being encouraged into.

Then there are a whole load of underlying issues to do with “identity” and “roles”, summed up best by the old cartoon of the two dogs sitting in front of a PC, where one says to the other, “The great thing about the Internet is nobody knows you’re a mutt”.

Greg April 14, 2010 7:33 PM

Thanks for this, Bruce. I think what would be supremely useful is if the Certificate Patrol add-on made use of crowdsourcing and made publicly available the oldest known valid cert for any organization, so that it can be compared to the one a visitor just loaded on a first visit to a site.

Woofle April 15, 2010 11:31 PM

@ David Bullock
“…a web-of-trust model…


Are there ways around these other than a hierarchical signing model? Where can I go read about them?”

Assuming your crypto-ignorance is even greater than mine (an achievement all by itself!),
try Phil Zimmermann’s (the PGP bloke) Introduction to Cryptography
ftp://ftp.pgpi.org/pub/pgp/7.0/docs/english/IntroToCrypto.pdf
(~ 1 MB pdf)

Read the section on Trust. It discusses hierarchical trust and the PGP-style web of trust (your post indicates you probably know this much already). It’s a light read and non-technical. At the end of the book is a list for further reading.

Note that this dates from 2000. However, the principles are still the same – it’s the technical stuff that has moved on.

In principle (and simplistically), you can imagine a hybrid solution where web sites/people (not necessarily “authorities”) will sign and publish a server’s public key. An individual could check a site’s key/certificate at a few of these that he trusts. If money must be made from such a venture it could be garnered from suitable advertising – it is not necessary that the site being verified pay. So it is possible to avoid a conflict of interests (greed is entirely another issue).
… or maybe I’m just talking about PGP keyservers…. Get rid of the formal, bureaucratic PKI and use the existing PGP / GPG infrastructure.

Anyways – the formal PKI being foisted upon us is unnecessary. As someone once observed (it might have been Zimmermann): PKI is a proper subset of the “Web of Trust.”

Craig April 16, 2010 7:14 AM

“Trust” is the keyword here.
Can it ever be achieved? I doubt it, but we need these purchase systems in place to function in the modern world, and most governments aren’t interested in our personal purchases over the internet; alas, that still does not make it acceptable.

antibozo April 16, 2010 9:15 AM

One thing that is often overlooked in these discussions is that there are two distinct problems:

  1. How do I know that I’m talking to the real bankofamerica.com and not a MITM?
  2. How do I know that bankofamerica.com is operated by Bank of America and not a domain squatter?

Problem 1 is solvable in a number of ways. DNSSEC-based PKI is the most natural, since the actual question boils down to whether the domain name does in fact identify the system I’m talking to.

Problem 2 used to be solved (though not well) by the CAs, back when they were charging $800/year for a certificate. Their authentication practices have gradually degraded over the last 10 years or so to the point where someone who can attack email can generally get a certificate for the subject domain. So now a select few CAs issue extended validation (i.e. “actual” validation) certificates to restore the former state of affairs, both in terms of “actual” validation and price.

As long as there’s a gap between our Internet naming system (currently DNS) and our real-world naming system, problem 2 will still need to be solved.

Clive Robinson April 16, 2010 12:04 PM

@ Craig,

“… most governments aren’t interested in our personal purchases over the internet,”

That depends on where you live and how much you are spending.

For instance, in the EU the money laundering laws basically say that any transaction over a certain amount, or any transaction that the bank etc. has reason to suspect is being used for money laundering, must be reported.

The problem for a bank or other “professional” organization is the “reason to be suspicious”: there is no definition or guidance, therefore any transaction can be viewed as suspicious. In the UK, if you go to your bank or building society and ask to take out more than a few hundred pounds, you will automatically be asked why you need the money in cash… and you will see the counter staff make a note on the system of what you say…

Also, not so long ago the UK Gov purchased copies of “loyalty card DBs”; various reasons have been suggested. One such is that the UK Gov was going to use the information to set property taxes in a given post code (they started putting this into place in N.I. until it got bad press).

Put simply, in the UK the Gov, both central and local, are not getting enough money from businesses and the like. Therefore they are targeting those who cannot hide their assets and are going to bleed them dry. We have already seen court cases where Gov representatives feel quite happy about lying to juries in order to get a conviction under POCA, and then they asset-strip the individual so that they cannot either defend themselves in court or mount a recovery-of-rights action when it has been found that a government representative has lied to a court…

This is simply because not only does the UK Treasury make money out of it, the investigators and legal representatives are on a substantial percentage as well.

In a recent case a businessman was wrongly accused of various crimes. He had all his financial records and assets taken away from him under POCA, and the prosecution deliberately withheld the financial records from the defense team for as long as possible. When the defense applied to get access to the defendant’s assets so that a firm of forensic accountants could go through the records prior to the case, the prosecution told the judge (who was stupid enough to believe them) that the books could be analyzed “with a calculator” in short order. Of the three charges brought against the man, he was found innocent by the jury of two of them. Between the trial and sentencing, a firm of forensic accountants pro bono investigated his books and issued an opinion that the man was innocent. Effectively they said that rather than him owing money to the government, they actually owed him a substantial amount of money…

What did the judge do? Well, he expressed grave misgivings and, under pressure from the prosecution, still sent the man to jail…

He now has to wait for the appeals process to grind around, and at his age he may not actually last that long in jail…

As you say,

“alas that still does not make it acceptable.”

No it does not, but I fully expect such laws and behavior to spread because it is just so seductive to the politicos: they get cheap convictions and all the assets, without having to admit it is just revenue raising…

Clive Robinson April 16, 2010 12:25 PM

@ antibozo,

“One thing that is often overlooked in these discussions is that there are two distinct problems:”

You missed the most important,

0. How do I trust the computer I’m using?

It’s game over from that point.

There have already been instances where malware on a PC has re-written the browser display window in an online banking system to hide the fact that money has been taken from the account by the malware operators…

The only way that trust can be established is via multiple fully independent side channels and non-mutable tokens that two-way authenticate all transactions, not just the session. As long as malware operators can do an “end run” around the secure channel or the authentication application, you cannot believe what your eyes tell you (they could have put malware in the display hardware, for instance).

Then there are other issues such as relay attacks around the token.

Establishing trust in the information world where physical world constraints such as distance/locality, energy constrained force multipliers and duplication cost have no meaning is going to prove an interesting field of research.

Currently the only way we have a chance is to force the information through a physical token that is provably unique and provably not subject to alteration by a third party.

antibozo April 16, 2010 6:41 PM

Clive,

SSL/TLS is a protocol between computers. Obviously things between the operator and the protocol have ample opportunity to lie to the operator. I am not “missing the most important” by not bothering to address the obvious.

It’s not clear to me how a token can prevent the display from lying to or eavesdropping on the operator. Maybe you’re getting a little lost in the trees. You seem to have completely missed my point.

[Notice, BTW, how I didn’t put an @ in front of your name, because it is not necessary.]

Clive Robinson April 17, 2010 5:53 AM

antibozo,

(as you wish on the norm of the @)

“I am not “missing the most important” by not bothering to address the obvious.”

Obvious to whom?

Most people (and, until quite recently, many on this blog) have assumed incorrectly that AV / SSL etc. protect their online transactions; the truth is they come nowhere close, as they are all subject to “end runs” around them by being attacked at a lower level. The sad truth is the current design of MS OSs and most *nix cannot in any way prevent this “low level shim” issue. The touted solution of “code signing” is a joke at best, and not at all funny, and people need to be told this (frequently) for them to appreciate the implications.

With regards,

“It’s not clear to me how a token can prevent the display from lying to or eavesdropping on the operator.”

If the display shows,

“457289578 Transfer 1,000USD to IBAN……. XYZABC345”

The first part is the unique transaction number picked randomly by the bank, and the last part is a crypto checksum based on an account-specific shared secret. When the user types the whole string into the token, the token will tell them if the checksum is incorrect.

You have moved the end authentication of the transaction beyond the Internet and the computer to the token.

Likewise, if the checksum is good, the token produces a string the user types into the browser to accept the transaction. This string is likewise based around the shared secret and the transaction identifier etc.

With regards to the eavesdropping, the string displayed on the PC screen can as easily be ASCII-armored ciphertext as plaintext, so as long as the key is based on a function of the shared secret that changes with every transaction or session, that covers the eavesdropping.
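
A hedged sketch of the checksum scheme as described (the truncation length, separator, and encoding are assumptions, not part of Clive’s description):

```python
# The bank appends a short keyed checksum; the user keys the displayed
# string into the token, which recomputes it.
import hashlib
import hmac

SHARED_SECRET = b"per-account secret held by bank and token"


def txn_checksum(txn_number: str, details: str) -> str:
    mac = hmac.new(SHARED_SECRET, f"{txn_number}|{details}".encode(),
                   hashlib.sha256).hexdigest()
    return mac[:9].upper()  # short enough for a human to type


# Bank side: what the user's screen should show.
txn, details = "457289578", "Transfer 1,000USD to IBAN DE00 0000"
display = f"{txn} {details} {txn_checksum(txn, details)}"


# Token side: recompute from what the user types in. Any tampering by
# malware on the PC (changed amount or IBAN) breaks the match.
def token_verify(typed: str) -> bool:
    txn_number, *middle, checksum = typed.split()
    return hmac.compare_digest(checksum,
                               txn_checksum(txn_number, " ".join(middle)))


assert token_verify(display)
```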

Are any other aspects of the token (such as that it should be completely stand-alone and non-mutable) puzzling you?

antibozo April 17, 2010 3:12 PM

Clive,

What about everything up to and after the point of authentication, e.g. what account the money is about to be transferred to, and how much?

Maybe if you have a token with a camera that reads QR codes you might be able to have the token display the actual transaction details before authentication. But good luck getting someone to correctly type in a string long enough to accomplish that.

That’s authentication/display/eavesdropping. What about data entry?

Your concerns about end-to-end security, while theoretically valid, are completely secondary to the topic of this post, which is the fundamental insecurity of SSL within the practice of CA behavior.

Clive Robinson April 17, 2010 9:31 PM

antibozo,

I’ll deal with your points in reverse as that will keep the length of the post down.

“Your concerns about end-to-end security, while theoretically valid, are completely secondary to the topic of this post, which is the fundamental insecurity of SSL within the practice of CA behavior.”

It does not really matter if SSL can be made secure or not, or if the CAs etc. can be trusted not to be subverted or not; it really is not of that much relevance any more when it comes to phishing or MITM or most other attacks. Whatever solution they come up with to fix SSL/CAs will be little more than putting a Band-Aid on a broken bone.

This is because the attackers, wherever possible, will go for the weakest link in the chain, and the likes of the banks etc. will only make tiny incremental changes. So the game currently favors the attackers hands down (and is likely to continue to do so for quite some time to come).

It is a point I have been making since the 1990s, along with,

You need to fully authenticate a transaction in both directions; authenticating a session is insufficient. And the authentication has to be done where an attacker cannot get at it (i.e. an external non-mutable token).

This is because as you strengthen each point along the communications chain the attackers will either attack the next weakest link or more simply just do an end run around the whole security chain (which they have started to do).

Ultimately that means they will attack either end (bank and client) of the security chain if they can. It is going to be easier for the bank to secure its systems and put in place the required mechanisms to provide a degree of protection for their end of the chain. However, there is no conceivable way this can currently be done at the client end with the likes of MS and *nix OSs on commodity hardware.

As has been shown for real, it is not that difficult for an attacker to get at the PC keyboard and screen drivers of millions of PCs at a fairly moderate price (see discussions about the recent additions to the ZeuS botnet software and the price it’s for sale at). Which means you require a token to authenticate beyond this point in the communications chain. Which is what the external non-mutable token is all about.

Which is why the likes of IBM (USB based) and Chronos Technologies (camera based with 2D bar code) make tokens to move the authentication of the transaction off of the PC and onto the token. Which answers your question about,

“What about data entry?”

My own viewpoint is that systems such as the IBM and Chronos ones are in danger of being vulnerable if they are not fully immutable (which, given the track record on FMCE and other low-cost production methods, is something you need to guard against; and no, code signing just does not hack it).

With regards to,

“What about everything up to and after the point of authentication, e.g. what account the money is about to be transferred to, and how much?”

The only parts that need to be fully authenticated are what the bank understands the full transaction to be, and the user either authenticating it or rejecting it.

For authentication, the part where the user puts in the details does not have to be secure, because the bank sends this back as the “full transaction” details in between the transaction number and the transaction authentication checksum.

However, for confidentiality, the user would enter the information into the token, which would encrypt the data and provide the information in the form of ASCII-armored ciphertext.

If you want to know more about this, then have a look at my postings on previous posts of Bruce’s blog; it has been gone into in some considerable depth on several occasions.

antibozo April 18, 2010 4:32 AM

Clive> For authentication the part where the user puts in the details does not have to be secure because the bank sends this back as the “full transaction” details in between the transaction number and transaction authentication checksum.

Which is the part (i.e. the “full transaction” details) the display can lie about, after the input path lied to the bank while the user was typing in those details.

The token needs to present the details to the user or the user doesn’t know what he is authenticating with the token. Yes, you can work around this by having the user use the token to sign the transaction details as they are provided to the bank, but this is for authentication, not privacy.

In any case, the idea of “external non mutable tokens” is just begging the question. Governments will just force IBM to build a backdoor into their tokens, or produce lookalikes and swap them for the real thing. If you want real end-to-end security, the user has to do the ciphers with pencil and paper. And even then you haven’t solved key establishment. So let’s return to reality for a while, shall we?

In practice, SSL is broken in a number of ways, but it is not the problem. The problem Bruce posted about is the CAs’ practices. The problem I wrote about is the CAs’ lack of proper authentication strategies when issuing certificates, and the oft-overlooked distinction between authenticating a domain owner (easy) and authenticating the correspondence of a domain with a real business (hard).

The current problem in Internet banking is malware with keystroke loggers attacking Windows machines and stealing people’s static passwords. Authenticating sessions with plain old RSA tokens would solve that problem, as would, of course, getting rid of Windows–at least for the time being.
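
For illustration: RSA SecurID’s algorithm is proprietary, but TOTP (RFC 6238) is the open analogue and shows the principle, a time-varying code derived from a shared secret, so that a logged static password is useless on its own. A minimal sketch (the base32 secret is a made-up example):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, period=30, digits=6):
    # RFC 6238: HMAC the current 30-second counter with the shared secret,
    # then dynamically truncate the result to a short decimal code.
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # token and server derive the same code
```

Note that this authenticates the session only, not individual transactions; that is exactly the limitation Clive raises in his reply below.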

Robert April 19, 2010 2:00 AM

@antibozo
I have to agree with Clive on this point. Your changes only fix a small portion of today’s problem, but do nothing to address the real system-level authentication problem.

The only long-term fix, in my mind, is secure external tokens (customer’s end) and mutual authentication of the end-points.

Unfortunately, the truth is that the secure-token business has become a race to the bottom of the cost curve. The buyers are totally ignorant of the differences between good, bad and outright ugly cryptography, and refuse to pay for the chip area necessary to implement properly secure microcontrollers.

Clive Robinson April 19, 2010 7:40 AM

antibozo,

Up to this point it has not been clear whether you do not understand what I am saying, or whether you are not taking note of it for some reason.

Your last post strongly suggests the latter. Normally I would not respond further; however, for the sake of other readers, I will clear up your nitpicking points.

First off,

You say,

“… but this is for authentication, not privacy.”

In response to my comment you quoted that clearly says,

‘For authentication …’

If you want to do just privacy, then you can use the shared secret to derive a symmetric encryption key and pass ASCII armored cipher text back and forth between the token and the bank’s system. Provided one or two precautions are taken, you can with some encryption systems dispense with the level of authentication you would otherwise need. However, I would very much advise against it: from a system design point of view it is far better to do both the authentication and the encryption, as this simplifies what each part of the system does and thus aids not just testing but assurance.
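
One common construction for that derivation (my own assumption; no specific KDF is specified above) is to mix the long-term shared secret with a fresh nonce, using HMAC as a simple key-derivation function:

```python
import hashlib
import hmac
import os

long_term_secret = os.urandom(32)  # provisioned into both token and bank
session_nonce = os.urandom(16)     # chosen by the bank, sent in the clear

# Derive a per-session encryption key; a distinct label ("enc") keeps it
# separate from any authentication key derived from the same secret.
session_key = hmac.new(long_term_secret, b"enc" + session_nonce,
                       hashlib.sha256).digest()
```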

Secondly, with regard to your statement,

“In any case, the idea of “external non mutable tokens” is just begging the question. Governments will just force IBM to build a backdoor into their tokens, or produce lookalikes and swap them for the real thing.”

The US and UK governments, like those of most other countries, have no need for a “back door”: they can simply use the various laws they have at hand (Patriot Act / RIPA et al.) to get the details directly from the bank, without requiring a warrant or other third-party safeguard.

Likewise, provided the token is not a general-purpose crypto device (which it does not need to be), there is no requirement for “007” behavior.

So as you say,

“So let’s return to reality for a while, shall we?”

Thirdly, you say,

“The problem I wrote about is the CAs’ lack of proper authentication strategies when issuing certificates, and the oft-overlooked distinction between authenticating a domain owner (easy) and authenticating the correspondence of a domain with a real business (hard).”

Actually, neither problem is easy or hard; both are effectively impossible to achieve. The reasons for this are varied. With regard to the “domain owner”, it is not currently possible, nor is it ever likely to be possible, to prove who an ‘entity’ is; nor, if you think about it, is it desirable that it be possible. The same applies to linking a “real business” (entity) with any particular transaction or action. At a fundamental level it is an issue of hierarchies of trust and whether false credentials can be produced, and that is not a solvable problem in any meaningful way. It is why you should read the small print on the CA sites, and their limits of liability, very, very carefully; you will realize that the whole system, like other bureaucratic shams, is so broken it cannot possibly be fixed. Just look on it as another form of confidence trick, like pyramid selling.

Fourthly, you said,

“The current problem in Internet banking is malware with keystroke loggers attacking Windows machines and stealing people’s static passwords.

Authenticating sessions with plain old RSA tokens would solve that problem, as would, of course, getting rid of Windows–at least for the time being.”

You are out of date with regard to this. I suggest you go and look at the latest additions to the ZeuS botnet software. It includes a remote shell, so an attacker can effectively be you sitting at the keyboard after you have started an authenticated session. Thus “authenticating sessions” will not work.

As I have said a number of times,

‘You need to fully authenticate a transaction in both directions, authenticating a session is insufficient. And the authentication has to be done where an attacker cannot get at it (ie an external non mutable token).’

Clive Robinson April 19, 2010 7:44 AM

@ Moderator / Bruce,

Can you check the IP address range “antibozo” is using and see if it corresponds to other posts under a different name?

There is something kind of familiar about the “style” of post being made.

Regards,

Clive.

antibozo April 19, 2010 11:35 AM

Clive,

The part I was referring to did not begin with “For authentication”. It began with “However, for confidentiality”:

Clive> However, for confidentiality, the user would enter the information into the token, which would encrypt the data and provide the information in the form of ASCII armored cipher text.

I don’t think you understood the attack I described. You haven’t described your key establishment in detail, but if the user merely encrypts the data to the bank using a public key belonging to the bank, Mallory, sitting between the user’s keyboard/display and the operating system, can replace the details in transit. When the bank responds with the transaction details and an authentication challenge, Mallory displays the original details. To mitigate this you need to either authenticate the information presented to the bank with a signature from the user’s token, or have the token display the transaction details to the user, e.g. by encoding them in a QR code along with the challenge and a signature from the bank, as I suggested earlier.
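
A sketch of the first of those mitigations, with Ed25519 chosen arbitrarily as the signature primitive: the private key lives inside the token, which signs exactly the details shown on its own display, so a substitution by Mallory fails verification at the bank:

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

token_key = Ed25519PrivateKey.generate()   # held inside the token
bank_pubkey = token_key.public_key()       # enrolled with the bank

details = b"PAY 100.00 GBP to 12-34-56 87654321"
signature = token_key.sign(details)        # signed on the token itself

# If Mallory swaps the details in transit, verification raises.
tampered = b"PAY 5000.00 GBP to 99-99-99 11111111"
try:
    bank_pubkey.verify(signature, tampered)
except InvalidSignature:
    print("substituted transaction rejected")
```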

Your other responses are generally oblique but I’m weary of banging my head against the wall. I’ll restate one last time and then you can have the last word.

  1. Begging the question: if the existing investigative process were sufficient in governments’ view, there would be no need for them to have had the CAs issue special certs in the first place, so it’s silly to suggest that tokens would somehow be immune to this desire. Maybe there are banks somewhere in the world that don’t fall under U.S./U.K. subpoena authority?
  2. Key establishment, both initial and re-keying in case of key compromise. How do we get these magic tokens, exactly, and get the keys onto them that are known only to the two parties involved?
  3. Isn’t there some sort of Internet activity that requires privacy and/or secure authentication other than a bank transaction? Should I secure the download of every message over IMAP/SSL by entering a challenge on my immutable token? Do I need a different token with every email provider, WordPress blog, etc.? Or is there, perhaps, a place for session authentication and hierarchical PKI?

What’s funny is that there are a number of ways in which I agree with you, but you’re so keen to argue that you decided to attack my comment (on the bipartite authentication problem) for no apparent reason.

Chris K April 30, 2010 6:57 AM

To counter the problem of a government compelling a CA to provide a signing key (or a copy of the CA’s key), web sites could get their certificates signed by two or more CAs in different countries, and browsers could check that the certificate was signed by the same list of CAs on each visit. As long as no single government can compel all of the CAs to provide fake keys, the attack won’t work.

A corporate email system should be protected by the browser accepting only the corporate certificate that was loaded onto the PC when it was originally configured in the office.

Banks etc. should be required to print their key fingerprints on the various bits of paper they send out. That bypasses the CAs altogether, as long as users check the certificate the first time they visit the bank’s web site.
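
A sketch of that trust-on-first-use check (the host name and storage are illustrative): record the certificate’s fingerprint on first visit, or seed it from the bank’s printed mailing, and refuse to proceed if it ever changes:

```python
import hashlib
import ssl

def cert_fingerprint(host, port=443):
    # Fetch the server's certificate and hash its DER encoding.
    pem = ssl.get_server_certificate((host, port))
    der = ssl.PEM_cert_to_DER_cert(pem)
    return hashlib.sha256(der).hexdigest()

pins = {}  # host -> fingerprint; would be persisted between visits

def check(host):
    fp = cert_fingerprint(host)
    pinned = pins.setdefault(host, fp)  # first visit: trust and record
    if pinned != fp:
        raise RuntimeError("certificate for " + host + " changed; possible MITM")
```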

Uniken May 11, 2010 7:22 AM

It is worrying that most banks today still rely only on SSL, digital certificates, OTPs and HTTPS for providing (supposedly) ‘secure’ access to their Internet banking customers!

While SSL might provide an encrypted channel between the customer’s browser and the bank’s server, it does NOT authenticate the website the customer is going to. The browser only validates whether the SSL certificate is valid; it does not verify that the certificate actually belongs to the bank (except in the case of EV-SSL). Hence, the customer might see the “lock” icon in his/her browser and still be connected to a malicious website that can steal their credentials.

Even One-Time Password (OTP) tokens, which generate a single-use password that the customer enters in addition to the regular Internet banking password, are prone to man-in-the-middle attacks. Regardless of how the OTP is generated (hardware device / software program / mobile SMS), its limitation is that the customer still enters the OTP on an unauthenticated page and over an insecure channel. Plus, it is cost-prohibitive to deploy, maintain and renew the hardware tokens.

It’s time we moved beyond such redundant measures and stopped fooling ourselves into a fake/induced sense of so-called ‘security’. We need to significantly upgrade our technology and protect customers from not only Man-in-the-Middle, Man-in-the-Browser and Phishing attacks, but also key-loggers, trojans, screen-scrapers, and all known kinds of spyware and malware (resident on the desktops, in browsers and on the internet).

This might surprise you but two of India’s largest PSU banks (Bank of India & State Bank of India) are already in the process of implementing this cutting-edge technology called REL-ID that does all of the above & more!

Gordon Youd October 2, 2015 4:25 PM

If digital data streams could have an analogue signal imposed on them, and the other end then removed the analogue signal to leave clean data behind, only the sender and receiver would know the start point of the data within the analogue stream.
I think a hacker would have a hard time extracting the data.

Phil Lello March 13, 2016 11:51 AM

An important point to remember: even when using a closed-source vendor OS, a rich enough company can get the source code to the OS to do custom builds; this was certainly the case with Windows around 2000.

Ultimately, there’s always a requirement for a leap of faith that a system hasn’t been tampered with in a way you’re unaware of and unable to detect.

I’m surprised that a TLS MITM is legal, or at least hasn’t been challenged under the DMCA and similar laws, given all the legislation-stuffing that occurred to make DRM removal illegal. I’m sure it can and will be argued both ways; according to the current entry on Wikipedia:

“The Digital Millennium Copyright Act (DMCA) is an amendment to United States copyright law, passed unanimously on May 14, 1998, which criminalizes the production and dissemination of technology that allows users to circumvent technical copy-restriction methods. Under the Act, circumvention of a technological measure that effectively controls access to a work is illegal if done with the primary intent of violating the rights of copyright holders.”

It is logical to assume (and therefore probably wrong in a legal context) that TLS MITM for the purposes of caching is no different from caching spinning-optical-disk-of-choice as an ISO on a hard-disk.
