Schneier on Security
A blog covering security and security technology.
September 1, 2011
Forged Google Certificate
There's been a forged Google certificate out in the wild for the past month and a half. Whoever has it -- evidence points to the Iranian government -- can, if they're in the right place, launch man-in-the-middle attacks against Gmail users and read their mail. This isn't Google's mistake; the certificate was issued by a Dutch CA that has nothing to do with Google.
This attack illustrates one of the many security problems with SSL: there are too many single points of trust.
EDITED TO ADD (9/1): It seems that 200 forged certificates were generated, not just for Google.
EDITED TO ADD (9/14): More news.
Posted on September 1, 2011 at 5:46 AM
SSL. Quis custodiet ipsos custodes?
Do you have an idea of how to resolve this other than using two-way authentication and such?
The only way I can think of is the site authenticating to the user by providing some secret pre-agreed data to prove they really are who they say they are, but this won't prevent MITM.
Interesting detail is that this Dutch CA, DigiNotar, also takes care of the technical part of the official CA of the Dutch government. From what I've heard, the trust in DigiNotar is largely gone. This is a big big major issue in the Netherlands.
Other interesting thing is that Comodo and Verisign also had their trust-issues. But because they're so big, the browser makers decided not to remove those. Because DigiNotar is just a small Dutch CA, kicking them doesn't have much effect outside this small country called the Netherlands.
Big question is, why do we still 'trust' Verisign and Comodo?
@Hugo: And also, why do we trust browser makers to do the right thing? They're able to unilaterally cripple an entire country based on circumstantial evidence without (in the case of Mozilla, who hardcoded a filter into the actual software) giving the user any choice in the matter at all.
This has already been discussed in the squid thread.
Here the link to a press statement issued by VASCO the parent company of DigiNotar:
And here a claim they have been hacked for more than 2 years.
They lost their trust by not responding to the news. Not even a "we don't know what happened and are investigating" statement.
Also the detected hack in July was very badly handled. No disclosure at all. That is not how you keep trust.
I already sorta knew not to trust SSL certs for identity verification, but the question is: What should I use? How do I know who I'm talking to?
@Anth: I'm just currently looking at the Convergence and Perspectives Firefox extensions. They assess a website's identity by checking it against notary servers that regularly inspect that website's certificates and keep histories of them. Or something like that. Looks good from what I've just read.
Is this story true?
I googled for "untrustworthy Google certificates" and the search engine couldn't find anything.
@Anth: "How do I know who I'm talking to?"
For your first connect, simply assume you are talking to the right server (= add its certificate).
For subsequent connects, you know that you are talking to the same server you talked to before, as long as the cert does not change.
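That trust-on-first-use idea can be sketched in a few lines. This is only an illustration, not a real client: the `PIN_STORE` filename and JSON layout are hypothetical, and a real implementation would check the pin during the TLS handshake itself.

```python
import hashlib
import json
import os

PIN_STORE = "known_hosts.json"  # hypothetical local pin store

def fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a certificate in DER form."""
    return hashlib.sha256(cert_der).hexdigest()

def check_tofu(host: str, cert_der: bytes) -> bool:
    """Trust on first use: remember the first cert seen for a host,
    and reject any later connection that presents a different one."""
    pins = {}
    if os.path.exists(PIN_STORE):
        with open(PIN_STORE) as f:
            pins = json.load(f)
    fp = fingerprint(cert_der)
    if host not in pins:
        # First connect: simply assume we are talking to the right server.
        pins[host] = fp
        with open(PIN_STORE, "w") as f:
            json.dump(pins, f)
        return True
    # Subsequent connects: the cert must not change.
    return pins[host] == fp
```

The obvious weakness is the first connection itself: a man in the middle who is present at first contact gets pinned as "legitimate".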
@lke - But you can just MITM the notary service so it's not much help.
"Convergence can be configured to require trust consensus amongst multiple notaries, preventing any single notary from having the ability to compromise security."
@orbitz: As I understand it, you'd have to MITM multiple notaries, perhaps even the majority of them. The extension polls several of the notaries and takes the majority answer. This doesn't eliminate the risk, but it does mitigate it somewhat.
Of course, polling several notaries probably means the technique doesn't scale well. I've installed the Perspectives plugin and have noticed that it already takes a few seconds to poll -- long enough to get quite frustrated trying to use SSL sites.
To follow up to myself, Perspectives just rated the SSL version of this very site a big red X because only three of the eight notaries returned results before the timeout.
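The majority-vote behaviour described above can be sketched as follows. This is a toy model, not the actual Convergence/Perspectives protocol: the notary names and the timeout handling (represented here as `None`) are assumptions for illustration.

```python
from collections import Counter

def notary_verdict(responses, quorum=None):
    """Majority vote over certificate fingerprints reported by notaries.

    `responses` maps a notary name to the fingerprint it observed for
    the site, with None standing for a timeout.  Returns the winning
    fingerprint, or None when no fingerprint reaches a strict majority
    of all polled notaries -- which is why timeouts alone can make a
    legitimate site fail the check.
    """
    if quorum is None:
        quorum = len(responses) // 2 + 1  # strict majority of all polled
    seen = [fp for fp in responses.values() if fp is not None]
    if not seen:
        return None
    fp, votes = Counter(seen).most_common(1)[0]
    return fp if votes >= quorum else None
```

With eight notaries and only three answering before the timeout, even three identical answers fall short of a five-vote quorum, matching the "big red X" behaviour reported above.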
Put the fingerprint of the public part of the SSL certificate in DNS and sign the zone with DNSSEC.
It won't fix all the trust issues, but it will go a fair way to preventing MITM attacks using illegally obtained SSL certificates. There is already a RFC for doing this with SSH keys, I don't see why SSL certs should be any different.
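This proposal amounts to what became DANE/TLSA: publish a digest of the certificate in a DNSSEC-signed zone, and have the client compare it against what the server presents in the handshake. A minimal sketch of the digest side only (the record layout and DNS lookup are omitted, and SHA-256 over the full DER certificate is just one of the possible digest choices):

```python
import hashlib
import ssl

def tlsa_digest(cert_pem: str) -> str:
    """SHA-256 over the DER encoding of a certificate -- the kind of
    value a TLSA-style record at _443._tcp.example.com would publish."""
    der = ssl.PEM_cert_to_DER_cert(cert_pem)
    return hashlib.sha256(der).hexdigest()

def matches_dns(cert_pem: str, published_digest: str) -> bool:
    """Client-side check: does the certificate offered in the TLS
    handshake match the digest retrieved from the signed zone?"""
    return tlsa_digest(cert_pem) == published_digest
```

A forged certificate from a rogue CA fails this check even though it chains to a trusted root, because the digest in the zone was set by the domain owner, not by any CA.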
There are some new and interesting ways of dealing with MITM on WiFi and other radio links suggested.
One radio link solution is for Alice to randomly generate her half of the oblivious key and send it bit by bit over the network with the packet delay such that each packet would be like a pulse position modulation (PPM). If the PPM Bob receives from Alice does not match her half of the oblivious key, then somebody is interfering with the link.
The hard part will be making the same thing work across a non wireless network, but I've a few thoughts I'm mulling over.
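The timing idea can be illustrated with a toy model. This is genuinely a sketch: real pulse-position modulation happens at the physical layer, and the delay values and threshold below are arbitrary assumptions.

```python
SHORT, LONG = 0.01, 0.03  # illustrative inter-packet delays in seconds

def ppm_encode(key_bits):
    """Alice: encode each bit of her half of the oblivious key as an
    inter-packet delay (a crude pulse-position scheme)."""
    return [LONG if b else SHORT for b in key_bits]

def ppm_decode(delays, threshold=0.02):
    """Bob: recover key bits from the delays he actually observed."""
    return [1 if d > threshold else 0 for d in delays]

def link_tampered(key_bits, observed_delays):
    """If the timing Bob observed does not reproduce Alice's key,
    somebody in the middle has re-timed (or replaced) the packets."""
    return ppm_decode(observed_delays) != list(key_bits)
```

The check exploits the fact that a store-and-forward man in the middle cannot preserve the original inter-packet timing exactly; in practice, network jitter would make the thresholding far harder than this sketch suggests.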
In the end it boils down to: You have to trust _someone_. As bad as the SSL concept may be, the alternatives aren't much better.
Perhaps the best way really IS to trust the CAs until they prove untrustworthy, and then eliminate them. As long as people can be motivated with money, all auditing is a farce.
I think along the same lines as Paeniteo. What is broken is not SSL, but the way browsers deal with certificate "authority". They should treat an unchanging self-signed certificate as more trustworthy than one issued by a CA that keeps changing and changing.
There are interesting options to add other sources of trust to certificates, e.g. DNSSEC (DANE, DNSSEC-stapled certificates), "public key pinning" - www.imperialviolet.org is a nice source of information about these.
In the case mentioned here, "public key pinning" should have made users of Google Chrome immune to the attack from its start, and apparently it also contributed to the attack being detected.
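Public key pinning in Chrome works roughly like this: the browser ships with a built-in set of hashes of the public keys that Google's own sites are allowed to use, and the presented chain must contain at least one of them. A sketch with placeholder hash values (the real pin set, and the exact hash encoding, are Chrome internals):

```python
import hashlib

# Placeholder pin set -- the real values are SPKI hashes baked into Chrome.
PINNED_SPKI_HASHES = {
    "placeholder-google-intermediate",
    "placeholder-trusted-root",
}

def spki_hash(spki_der: bytes) -> str:
    """Hash of a SubjectPublicKeyInfo structure (hex here for clarity)."""
    return hashlib.sha256(spki_der).hexdigest()

def chain_passes_pinning(chain_spki_hashes) -> bool:
    """Accept a chain only if at least one of its keys is pinned.
    A DigiNotar-signed forgery contains none of the pinned keys, so it
    fails here even though it chains to a root the OS trusts."""
    return any(h in PINNED_SPKI_HASHES for h in chain_spki_hashes)
```

This is why Chrome users were the ones who surfaced the attack: the forged *.google.com certificate validated against the trust store but failed the pin check.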
> trust _someone_
The problem with the CA system is that you have to trust someone whose business model is saying yes to as many people as possible.
It's like a credit rating agency that is paid by the banks who are trying to get their CMOs rated AAA.
What now needs to happen to retain minimal value in "official" certificates is that DigiNotar dies. The blocking of all their certs in Firefox is a good thing and a step in the right direction.
I do however expect that they will not die and that most people will not understand the issue. This will remove the last tiny bit of better security in comparison with self-made certificates.
Certainly the current situation of the browser allowing ANY stored root cert to be used in an SSL handshake is not good.
Perhaps the browser could allow you the option of associating a specific root cert with your high value online services, e.g. bank, Paypal, etc. That way if any smaller CA gets hacked it cannot affect these services. It then allows your online services to change their public keys from time to time without affecting you.
I would also like in addition to this something along the lines of Paeniteo’s suggestion. Although I would prefer to distinguish the actual public key from within the cert, as it then allows the same key to be re-certified.
IMHO the right way to deal with this and a host of other Internet security problems (e.g. logins) is to use a personal dedicated hardware gadget that performs your SSL. It could then also provide the features mentioned above in hardware. That way, even if your PC gets hacked, you can still connect securely to a service. I have discussed such a device elsewhere in this blog:
That's only half of the story. The funny part is their press statement:
TL;DR: They claim it was stolen. Why should anyone trust a CA ever again if they got hacked and their certificates could be in anyone's hands? Basically their "excuse" is an even worse PR disaster than what most suspected until then (namely that they sold the certificate to Iran, thinking "what could possibly go wrong?").
Actually the Dutch Government have their own root CA that is trusted by Firefox, but it's also signed by DigiNotar. Removing the DigiNotar CA from the trusted list has NO effect on the government's certificates. The problem was that Mozilla wanted to go further and blacklist every certificate that DigiNotar signed for which would've blacklisted those government certs too if they didn't add an exception.
The point is that Mozilla is going too far, removing the certificate from the trusted list is enough.
Time and time again Root CA's prove they cannot be trusted with the security of the entire internet. Simply because one mistake puts the entire internet at risk.
If you want to be more secure, be your own CA: remove all root certificates from your system, all of them, and evaluate the certificate chain of each site you wish to authenticate with. Seriously, if you log in to a site, you have to store or memorise a password (a unique password, right?), so how far a stretch is it to store the certificates of those sites in your password safe?
If you're abroad, simply connect to a secure storage service where you keep your list of trusted certificates. Memorise the service's own certificate hash, or just part of it; this site... '4-beecaa'. Good luck forging a fake certificate for that one.
Alternatively keep that list on your usb stick.
Thanks for the links! I didn't know about these. I'll probably start using one and encouraging others to do the same.
@ Nick P
Also check out the Certificate Patrol add-on for Firefox.
"Your browser trusts many certification authorities and intermediate sub-authorities quietly, every time you enter an HTTPS web site. This add-on reveals when certificates are updated, so you can ensure it was a legitimate change."
Today DigiNotar, tomorrow another CA. The entire CA-based PKI model is broken beyond repair and should have been EOL'ed a long time ago. It's been discussed more than once on this blog.
DANE+DNSSEC could be a good alternative, but TOFU with an HSTS extension looks interesting too.
DNSSEC is not a solution to the CA-mess because it gives us even less ability to address a lack of trust. In the current CA model, if I no longer trust Diginotar, I simply remove it from Firefox, and that is that. With DNSSEC, if you no longer trust ICE (domain seizures), or Verisign (fraudulent CA certs, wildcard *.com domains, etc.), or the politicians in your country, well, sorry, but there's nothing you can do.
@ Mark Currie,
"Perhaps the browser could allow you the option of associating a specific root cert with your high value online services, e.g. bank, Paypal, etc."
For a "known to be good" certificate it is a sensible idea, though it would (from a security point of view) be better to store the actual certificate and use that. However, the catch-22 is "known to be good"; this requires a secure secondary channel, which is what the CA idea was supposed to get around.
"That way if any smaller CA gets hacked it cannot affect these services."
True, unless the smaller CA is the signer of the original bank etc. certificate (hence store the "known to be good" certificate, not just the root CA cert).
"It then allows your online services to change their public keys from time to time without affecting you"
No it does not, you either need the secure secondary channel or the bank etc needs to sign the new certificate with the old certificate. The latter option is problematical from a number of perspectives (CA Business model being just one).
With a bank, a reasonably secure side channel for verification etc. is not really an issue, as they can put it on your monthly paper statement. This works because you will in all likelihood have a medium- to long-term relationship with them.
However for by far the largest use of SSL certs there is either no secure side channel or no longterm relationship. Thus the whole idea of the "Trusted CA" to start off with.
The whole faux structure of the CA system only worked with "trusted" CAs; the problem was from the very beginning, is now, and will always remain that the CAs cannot be "trusted" in almost every respect...
At the end of the day it's almost a simple economic problem. The CA has costs they seek to minimise both now and in the future, the biggest of which is "liability". So just about every CA says in its terms and conditions "we accept no liability". The CA is in a competitive market (there are getting on for a thousand or so these days), so the customer, who sees the certificate as a business cost, will go to the lowest-priced CA who will give them what they want. The CA knows this and thus minimises the cost of their internal systems to maintain some vestige of a margin. All of which means it only takes an attacker with a very modest level of resources to break the system of trust at some point...
[Incidentally it is the same issue with "code signing" as was seen with Stuxnet, somebody got into the chain of trust and subverted it. The exact method used is not public knowledge, but it is suspected it could have been an agency overnight cleaner so the same probably applies to many if not all CA's].
The real question is not "how do we fix the CA model" but "what do we replace it with"...
We know key management is a very hard problem, especially in a hierarchical system where trust becomes more problematic the closer to the top of the hierarchy you are. So much so that in effect a secure hierarchical system cannot be afforded or implemented for more than a very, very small hierarchy (maybe for a hundred or so individuals, but not much more).
So we either accept a hierarchical system is insecure (just like paper and plastic ID Cards such as the majority of drivers licences) and adjust our security expectations accordingly or we find another way.
One other way is to actually look at how humans really deal with trust issues; in most cases it works by "reputation and recommendation" at the start of a "relationship", and the level of trust is built up over time.
The catch is that "relationships and trust in an effectively unbounded ecology" are a whole, virtually unexplored area of security (I think somebody should write a book on it ;)
"One radio link solution is for Alice to randomly generate her half of the oblivious key and send it bit by bit over the network with the packet delay such that each packet would be like a pulse position modulation (PPM)."
I'm not sure how this PPM would really work in a QAM-on-OFDM system like WiFi, unless you can find some frequencies/link conditions where solitons exist.
Some of the MIMO and (proposed DIDO) schemes use RF-link and channel-model properties that are almost impossible to fake. For example, a fixed-location MITM transponder will have a relatively constant, time-invariant Tx => Rx channel model, whereas any normal smartphone transmitter is moving around sufficiently that the Rx QAM constellation will need continual channel-model adjustment to correct for carrier-dependent Rayleigh fading.
It is also possible to code Tx QAM at, say, QAM1024 on certain channel-sense carriers but actually transmit important data at a more reasonable QAM16. With sufficient FEC resources, the QAM1024 can be correctly recovered, meaning that very small movements of the Tx side can be detected. I've used this to detect the heart rate of the caller, particularly if the caller is holding the Tx unit close to his head (i.e. cellphone use).
Back in March of this year, Dominic White talked about how to improve SSL security in the Firefox browser with regard to certificates. Among other recommendations were to remove unneeded root CAs (it might be preferable to edit such CA certificates so that they will not be trusted) and to configure the browser to require OCSP validation for certificates that specify OCSP validation.
On the Mac OSX platform, it may be possible to use the Keychain Access utility to specify that OCSP validation be used, at least to a certain extent (it is not necessarily enabled by default) and to disable root CAs that the user does not want to trust.
It's interesting how little revenue DigiNotar were making from being a root certificate holder - their report to the stock-market indicates that this part of their business earned less than 100,000 euro over 6 months.
I can't see how it can be profitable - given the procedural, audit and management overheads.
Why isn't it a bigger income earner?
Surely being only one of about 50 'trusted' root-certificates on the entire planet would have to be worth more than that?
I'm not going to complain about the browser makers blacklisting these certificates. They're known bad; I'd rather the denial of service than being able to click through. Legit sites will get new certs if they're inconvenienced.
Here's a puzzle for you though: notaries and trust-on-first-use are vulnerable to man-in-the-middle attacks. We need pre-established keys (CAs) to protect against this. But wait!
The "pre-established" keys are being downloaded over the same compromised channel... if Iran was willing to MitM Firefox downloads with a trojanned version, the CAs and blacklists are worthless.
Given that they had a certificate for addons.mozilla.org as well, they could also trojan a non-compromised Firefox through its automatic updates. *Every Firefox install in Iran could still be compromised!* And there's no way to re-establish the tree of trust - short of trusted couriers...
"I'm not sure how this PPM would really work in a QAM on OFDM system like (WiFi)"
One of the joys of "over simplification" to get an idea across.
The idea is that only Alice knows in advance what her half of the oblivious key is going to be; a man in the middle will not. She then sends the key in a time-dependent manner based on it. It should thus not be possible to forge or replace the key by jamming etc.
The exact manner in which it's done is a bit more complicated; I posted a link over on last Friday's squid page, but to save the mouse clicks,
I live in the Netherlands, and I personally don't see any problem in distrusting the DigiNotar root certificate, if the "Staat der Nederlanden" (State of the Netherlands) certificate is excluded. My bank doesn't use DigiNotar, and I can't think of any other critical Dutch services I use. The only real issue is the government institutions, which use a different root certificate but are also run by DigiNotar. Clearly, distrusting DigiNotar altogether would be problematic.
The main problem is that the only alternative to trusting the certificate used by a website is not using the website at all; in the case of the Dutch Tax services, that's not really an option; I believe companies aren't allowed to file taxes off-line anymore, and for citizens, it is strongly discouraged.
One of the aforementioned Firefox plugins might help; however, one of the certificates in question is for "addons.mozilla.org", so you can't even be sure you get the real plugin, unless of course you check that the certificate isn't signed by DigiNotar. The main problem remains: you want to download a plugin to improve the system of trust, but you don't have a way to verify the plugin isn't modified. Also, you add several other trusted parties to the equation: the authors of the plugin and the admins of the servers. This extra layer probably won't hurt security directly, but it isn't a complete solution to the fundamental problem.
I think a partial solution would be to have a high-value certificate signed by several CA's, such that a single breach can't produce an illegitimate certificate.
Perhaps the government should publish the public fingerprints of their certificates in a few national newspapers? That way, we could have a reasonably secure secondary channel to verify the certificates. Of course, the vast majority of the population wouldn't have a clue how to actually verify the certificates, or just wouldn't bother to do so. It might help quick detection of fraud, though.
I don't know how the current system could really be fixed, without using any trusted third parties. Perhaps we could use some kind of zero-knowledge proof, where the bank or government institution has to prove they have a secret key that is unique to each person or entity.
Say the bank or government institution generates a secret key, and transmits the public key using a reasonably secure (meaning difficult to manipulate, not difficult to eavesdrop) secondary channel (e.g., snail mail). The user has to input this key into an application or separate secure device, and the connected server uses its secret key, combined with the valid certificate, to prove its identity.
This wouldn't rely on a trusted third party to guard the certificates, but it does require a secure secondary channel and an effort from the user.
A completely different option could be a legal solution instead of a technical one; make sure the CA's are liable for f*ckups such as this one, such that they would be obligated to pay all damages, including the replacement of all valid certificates by a different CA, and make all board members personally liable for damages. That way, they have a very strong incentive to prevent this from ever happening again.
@Jay: good point, I hadn't thought of that; you don't actually need to download any new plugins to be stung by the forged addons.mozilla.org certificate.
Even with a trusted secondary channel, this can't be fixed now. There is a possibility a malicious update was distributed, planting a trojan on the machines. I admit that it's a little far-fetched, but it's theoretically possible.
One could argue that every single machine on the planet with automatic updates from any of the compromised URLs enabled should be wiped clean and re-installed from trusted media.
I think this really demonstrates why the current system is fundamentally flawed; a forged certificate for any automatic update service could be the ultimate attack vector to a lot of machines.
I wonder if anyone is searching for evidence of recent DNS poisoning outside Iran? It's scary to think the antiquated DNS system might be the only thing that saved the rest of us from compromise.
Why is Comodo, which seems to have delegated issuance rights to casually trustable third-parties, still in the root stores? Does revoking on their own action really help protect against the next untrustworthy partner?
Just throwing an idea out there, but couldn't you have the SSL handshake where your public key is mixed with a private/public key pair into the packet you are sending over the wire (a database of random packets that would all make the same type of request), plus another password, with each packet having a new password generated?
You would have the 128-bit key value (the first part), and the second public-key group might or should make valid-looking packets, but to get the real value for consecutive packets they would need to know the first password, which doesn't change; any other password from the first block plus one from the second would make a valid-looking packet, but a different combination of the two if it's not the right first password.
"Here's a puzzle for you though: notaries and trust on-first-use are vulnerable to man-in-the-middle attacks."
Yup, it's kind of an intractable problem without a secure out-of-band side channel. Which, as I noted above, we don't have most of the time, unless it's a bank or some other organisation we have a long-term trust relationship with that has a mail-out etc.
Even the currently proposed augmentations/replacements for CAs still have this "first contact" problem.
A big chunk of the SSL problem is actually in the web browsers, where "ease of use" has got us into the situation; like any narrow cul-de-sac, we will have to reverse out carefully or do a full U-turn.
One solution might be "super servers", like DNS root nodes, that can be "baked in" to browsers, whose sole purpose is to securely verify the certificate of a more local verification host. Thus, once bootstrapped, a PC could use either a simple hierarchical system or a more secure web-of-trust system.
Whilst we are ripping things out, perhaps we could also turn the crypto around so that the heavy lifting of the PK falls onto the client end, not the server as it currently does.
"I can't see how it can be profitable - given the procedural, audit and management overheads. Why isn't it a bigger income earner?"
DigiNotar's primary income-earner isn't via this root but from a second one used for Dutch government business, see this post for a bit more on this. So this particular CA was just a sideline on their real business, which is quite lucrative.
Thanks for the MIT link. I read through it, but I'm not sure I agree with the assumptions; they seem to conveniently ignore the so-called "near-far" problem and how this affects the system AGC setting.
But I shouldn't criticize the MIT paper too much, because they are talking about the same basic concept as I am. The message is that there is unfakeable information in the RF link errors which can be used to enhance security. Unfortunately, in most PHYs this layer-1 channel link information is completely removed before that "data" is handed on to the higher protocol layers. So you have the absurd situation that the RF PHY can "know" that it is communicating with a MITM but ignores it, because the typical 7-layer comms protocol stack really does not have a method for relaying to the security processor that the link is "hinky".
@ Clive Robinson & RobertT
Thanks for the links and info. I think I'll look into this scheme a bit further. If history is an indicator, there are novel applications that we haven't thought about yet. Hey, maybe a port knocking scheme that doesn't use expensive crypto, but provides the same benefits via the new technique. May use timing or internal binary ordering of a couple of small packets to open the channel. You heard it here first, folks! ;)
@ Clive Robinson
“True unless the smaller CA is the signer of the original Bank etc certificate (hence store the 'known to be good' certificate not just the root CA cert)”
Yes I shouldn’t keep referring to “root cert”. What you want is the lowest CA cert in the cert chain. The banks that I have seen have at most two layers. They tend to take the hit and buy from a respectable CA.
“No it does not, you either need the secure secondary channel or the bank etc needs to sign the new certificate with the old certificate.”
Yes it does. I don’t think that we’re on the same page here :-) I am actually talking about retaining the Trusted Third Party (TTP) model i.e. the service gets a new cert based on the new keys from the same CA. Else, as you point out, you would have to re-establish initial trust with the service each time. Although, I have to agree that when it comes to banking, a second trusted channel should be used to obtain the bank’s PK or self-signed cert.
Why can’t banks simply allow you to go into your branch and obtain a CD that has their self-signed cert? The browsers already support simple installation of new certs. For that matter why can’t all HTTPS services allow you the option of going into their offices to obtain their cert on a CD?
Then we would just need browsers to allow you to associate a cert with a URL/service. This is sort of what I was getting at with my original suggestion.
I still think that when you start digging down too far, the biggest problem is our own “software platform”. Whether you have a trusted cert or not, if you cannot trust your own client platform to be virus free (who can?), then that is where your next MITM threat lies.
@ Nick P,
"May use timing or internal binary ordering of a couple of small packets to open the channel. You heard it here first, folks! ;)"
Well, I would use some kind of self-syncing spreading code as the "hello", so that an offset in the very long sequence is valid and hopefully unique for low visibility. Then switch to a time-based code to prevent replay attacks (or you could put the two together). Only at that point switch to a secured channel to present authentication tokens.
You could also do it in very slow time... If you think back to the Matt Blaze paper on Keybugs, where the time intervals used were longer than the network latency so the PC was effectively transparent, then the same trick could be done with the return key and, say, slow Morse code...
"But I shouldn't criticize the MIT paper, too much because they are talking about the same basic concept, as I am. The message is that there is unfakeable information in the RF link errors which can be used to enhance security."
And on the flip side, as I mentioned the other day, I've always liked FEC for setting up a covert channel; there is just so much useful redundancy in the systems, and the comms layer gracefully accepts a rise in its perceived error ratio without complaint or, as you say, comment to the layers above.
What I want to do is look at developing a system that works across the wired network in some way; the obvious first thought is round-trip delay etc, but that is obvious and I'm looking for a less obvious and hopefully more secure way.
If we look at the first physical-layer hop from the PC to the bridge/router, one trick would be to put out malformed data packets, where in effect you characterise the input stages of the router and modulate them via its error correction onto the output stream. Any other device picking up the output of the PC would have a different error characteristic and would thus show up as such. This would potentially work on a LAN but not further afield.
@Mark Currie: I agree to an extent that the user's computer is probably the weakest link in the chain in most cases, but the difference is that the machine is the only part of the chain that is under your control, and its security is your responsibility. The fact that most users are completely incapable of actually keeping the damn thing clean is, in my opinion, irrelevant.
Of course, with the current state of OS, browser, and application security (I'm looking at you, Microsoft, Adobe, etc.), it is difficult even for advanced users to make absolutely sure they are not infected, but you could switch to a different OS, and even run from non-writable media to greatly reduce the risk of infection.
I really don't understand all the fuss about network timing. Aside from the fact that such things tend to be rather unreliable in a packet-switched wide area network, it's just another way to send a tiny bit of data. It's not security, it's obscurity, because you imply that the attacker is insufficiently advanced to figure out that the timing actually matters.
@Clive Robinson: you could use background microwave signals (they should be the same over long distances at both ends); the packets you send get compared to those signals and a key/hash or whatnot, and if they match, the data is passed up the stack to the computer.
There is the time delay; the data over the wire could virtually be crap, but both sides need to pass a key to make use of the data/problem.
"... you could use background microwave signals(should be the same over long distance at both ends"
That is one avenue for doing it, if you have control of all the nodes in both the network and the comms paths.
What I'm looking at is not just how to fingerprint the existing network in an unforgeable way (which is a toughie in and of itself), but also how to do it for a one-off or "first time" connection with multiple unknown relays in between.
If you think about it there are three basic Internet connection models (just as there are payment models)
A, Seldom or less (Grazing) - ie once or twice.
B, Occasional (metered) - upto once a week.
C, Frequent (unmetered) - upto continuously.
However from the security aspect you always start at "first time" and work up through, the levels.
Effectively, at the "first time" connection you have no prior knowledge of the Internet host you are connecting to or the pathway in between. Thus, in effect, a man-in-the-middle attack cannot be detected by the application layer, because the stack between the physical and application layers obfuscates the information.
What the MIT paper is about is resolving this issue with WiFi and other radio-based "broadcast" systems, where jamming and relative signal strength differences are clearly visible at the physical layer.
This is because the two communicating parties Alice and Bob are in range of each other and also in range of Eve doing the "Madam in the middle" and have a channel constrained by the laws of physics (not the whims of man) between them.
As RobertT notes, there are various conditions where either a simple or complex system would not work. What the MIT system conceptually solves is some (but not all) of the problems.
[The one he highlights is in effect an analog of the one that caused a large number of red faces in the Quantum Key Distribution (QKD) fraternity. In effect you "pump" energy into the receiver front end, and as it cannot instantly go away, it has secondary effects that Eve then uses to her advantage.]
However, there is one simple condition where the broadcast model no longer applies, even though it appears to to the operator: when Alice and Bob cannot communicate directly due to range or some other issue, and they have to unknowingly go through a relay. That is, Alice and Bob's ranges don't overlap, but those between Alice and Eve and between Bob and Eve do. The relay can be either active or passive, and the difference is important, as it's the laws of nature for the passive case and the whims of man for the active.
For a passive system you are effectively replacing part of the "free space path" with a "transmission line"; you see these used where TV signals won't get into a valley. Basically you take two high-gain parabolic or Yagi-style antennas (actually backfire antennas are best), one pointing at Alice and one pointing at Bob, and put a length of transmission line between them.
An active system, however, could work at just the RF level by using switched antenna amplifiers, or at a layer further up the stack. In all cases, though, the path is not a true transmission-line or free-space radio path, and thus has characteristics such as receiver bandwidth and various delays that can be measured (albeit with difficulty).
Thus, although the MIT system would work on early Ethernet (coax based) where all stations were on the same cable, it would not work on later Ethernet (i.e. twisted pair and above), because the connection is no longer a passive broadcast system: you have to go through a relay (hub, switch, bridge, etc.).
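As a back-of-envelope illustration of why a relay is measurable (my own numbers, not from the comments above: a 100 Mbit/s store-and-forward switch buffering one 1500-byte frame, and a coax velocity factor of 0.66), even a "transparent" relay adds delay that dwarfs free-space propagation:

```python
# Rough delay budget: direct free-space path vs. the same distance through
# one store-and-forward relay.  All figures are illustrative assumptions.

C = 3.0e8                       # speed of light in vacuum, m/s

def free_space_delay(metres):
    """Propagation delay of a direct radio path."""
    return metres / C

def relay_delay(metres, frame_bits=12000, link_bps=100e6, vf=0.66):
    """Cable propagation (velocity factor vf) plus the serialisation delay
    of buffering one 1500-byte frame in a store-and-forward switch."""
    return metres / (C * vf) + frame_bits / link_bps

d = 100.0
print(f"{free_space_delay(d)*1e6:.3f} us direct")
print(f"{relay_delay(d)*1e6:.3f} us via one switch")
```

Even with a fast cut-through device the numbers would shrink but not vanish, which is what makes the path characterisable in principle.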
What I'm looking at is ways to characterise the path in some way from node to node.
Conceptually, one way is to send a malformed network packet with a time delay on the rising or falling edge of a data bit. If you also send an incorrect checksum, you can use the defects in the receiving node to move the bit edge so it thinks it has received a valid packet, as the checksum then appears valid. However, another node will have a different time delay and thus will see the packet differently. Thus both Alice and Bob can determine the front-end characteristics of the correct gateway. Sadly there is a catch with this, in that many systems have automatically adjusted filters that try to suppress such problems.
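A toy model of that idea (my own illustrative sketch: the sampling offsets, the one-bit parity "checksum" and the edge timing are all made up) might look like this. A probe packet carries one bit whose rising edge is delayed to the middle of the bit period; only receivers that sample after the edge see a 1, so only they compute a matching checksum and accept the packet:

```python
# Fingerprinting receivers by their bit-sampling delay (toy simulation).

def decode(bits_with_edges, sample_offset):
    """Decode a list of (value, edge_time) bits for a receiver that
    samples each bit period at `sample_offset` (0..1)."""
    out = []
    for value, edge in bits_with_edges:
        if edge is None:                    # clean bit: always read correctly
            out.append(value)
        else:                               # delayed rising edge 0 -> 1 at `edge`
            out.append(1 if sample_offset >= edge else 0)
    return out

def checksum(bits):
    return sum(bits) % 2                    # toy parity "checksum"

# Probe packet: three clean bits plus one bit whose edge is delayed to 0.5.
packet = [(1, None), (0, None), (1, None), (1, 0.5)]
claimed = checksum([1, 0, 1, 1])            # checksum assuming the edge is seen

fast_rx = decode(packet, 0.75)              # samples late: sees the delayed 1
slow_rx = decode(packet, 0.25)              # samples early: still sees a 0

print(checksum(fast_rx) == claimed)         # this receiver accepts the packet
print(checksum(slow_rx) == claimed)         # this one rejects it
```

The two receivers disagree about the very same wire signal, which is the front-end characteristic being measured.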
@ Clive R
I'm really thinking about wide-band wireless comms systems, where the proximity of other objects to the antenna causes nulls and peaks in the frequency response of the channel.
Think about the link as an S-parameter measurement where S21 is the combined channel model. Assume that Tx measures and encodes the data of an S22 measurement, while at the same time Rx measures S11. Now the S21 must incorporate aspects of the S11 and S22 frequency response.
In wide-band modulation like OFDM, the bit loading of the carriers is determined by the Rx port (based on the FEC results and the channel throughput estimation). This channel estimation MUST incorporate aspects of S11 and S22. Now, only the intended Tx can know the correct S22, and the MITM will almost certainly have an S22 different to that of the intended base station.
The combined channel model will therefore be instantly recognized as a fake MITM channel.
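A numerical caricature of that argument (my own simplification, not RobertT's math: I model the measured channel as the direct path plus a single reflection term involving S11·S22 per carrier) shows how a wrong S22 becomes visible:

```python
# Toy OFDM channel-fingerprint check: the Rx's channel estimate depends on
# the Tx port reflection S22; a MITM with different antenna surroundings
# has a different S22, so its reported value won't match the measurement.

import cmath
import random

random.seed(1)
K = 8                                   # number of OFDM carriers

def rand_response():
    """Random per-carrier complex frequency response."""
    return [random.uniform(0.5, 1.0) * cmath.exp(1j * random.uniform(0.0, 6.28))
            for _ in range(K)]

path        = rand_response()           # free-space part of the link
s11         = rand_response()           # Rx port reflection
s22_genuine = rand_response()           # genuine Tx port reflection
s22_mitm    = rand_response()           # MITM's Tx port reflection

def combined(s22):
    """Measured channel: direct path plus one reflection bounce."""
    return [path[k] * (1 + s11[k] * s22[k]) for k in range(K)]

measured = combined(s22_genuine)        # what the Rx actually estimates

def mismatch(s22_report):
    """Compare the measurement against what a reported S22 predicts."""
    predicted = combined(s22_report)
    return sum(abs(m - p) for m, p in zip(measured, predicted))

print(mismatch(s22_genuine) < 1e-9)     # genuine Tx: consistent
print(mismatch(s22_mitm) > 0.1)         # MITM: detectable mismatch
```

The real physics is of course far messier, but the detection principle is the same: the MITM cannot fake reflections it does not physically have.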
@Clive Robinson and all
8.94 g·cm−3 copper
10.49 g·cm−3 silver
1/0.910938291(zero msb) = 0.0000000000000000000000000000000001097769201141200024492109092821100/kg/kg
360/above = 3.2793778476e-31/m/sec
copper = 0.00894*1000*(100*100*100) = 8940000/cu/kg/kg
silver = above = 10490000/su/kg/kg
Knowing length of copper wire say 100meters
360/8940000 = 4.0268456375838926174496644295302e-5/360/sec
360/10490000 = 3.4318398474737845567206863679695e-5/360/sec
360/122792975519149890882786200 = 2.9317637957544000000000012413772e-24
all copper path = 0.00004026845637- 0.0000000000....293176379575440000 = 0.00004026845637 give or take, per electron
copper and silver path, areas size guess
4.02684563758 - 3.4318398474 = 0.59500579018/m/sec
0.59500579018 - 0.0000000000....293176379575440000 difference in speed, routers and relays will have fixed contacts
sizes give or take
MORE amount more time delay, or less.. 4hours sleep :(
I agree that the only real solution is site authentication to the user (so finally moving to mutual authentication).
In my view, anyway, the proposed approach "... user by providing him some secret pre-agreed upon data to prove they really are who they say ..." needs to be improved with something that is specific to each user (so as to nullify the scaling advantage the attackers currently have), that possibly changes from one user session to the next, and that is really usable and understandable by end-users (X.509 certs are too technical for normal users).
Some suggestions can be found at http://www.w3.org/2005/Security/usability-ws/...
Thanks for that, have you read some of the comments on that page?
Oh, I dropped a note over there about the Apple Mac OS bug where EV certs signed under a root cert that's been disabled in the keychain still work.
The issue has only been known about for 30 years. Why the surprise? The same is true for an intermediate link in a Kerberos chain...
X.509 is "allowed" at web scale *becuase* it allows governments to usurp control, in the emergency. Successful crypto is socio-political as much as technical.
If the Iranians think the web is being used as a subversion technique against their government, they are entitled to consider that an emergency and use "Emergency Provisions". That is what they are there for. Yes, this has the side effect of showing the American public the *inherent*, built-in and intentional vulnerability of all chain-based trust models to "abuse/assumption of authority". Folks are seeing the true nature of their own government, which typically acts in concert with "preferred" vendors (and consultants) in a mostly covert manner in matters of web-scale crypto.
Mind you, as I lie here (waiting for a doctor/nurse to come around and disturb me) contemplating, I ask myself why this sorry and shambolic affair did not happen ten years ago...
The problem is what to replace the current hierarchical PKI of CAs with. Abusing secure DNS might be a stop-gap measure, but even that is likely to get attacked in some way at some point; not because it is particularly vulnerable, but because the "don't confuse the user" logic in browsers is so easy to abuse.
However, the real underlying problem of "how to establish trust on first contact" is not going to go away. We either have to "bake it in" in software or put too much trust in a single entity, which, let's be honest, will probably get "owned" in some way by a nation state or other well-resourced attacker.
Other distributed trust or reputational systems look good on paper, but they still suffer from "first contact" MITM and other issues.
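Trust-on-first-use pinning, SSH known_hosts style, is one oft-suggested alternative. A minimal sketch (the hostnames, byte strings and in-memory "pin store" are all illustrative) shows both its appeal and exactly the first-contact gamble described above:

```python
# Trust-on-first-use (TOFU) certificate pinning, in miniature: the first
# certificate seen for a host is pinned; later connections are rejected if
# the certificate changes.  This removes the CA single points of trust but
# leaves the very first contact unauthenticated.

import hashlib

pins = {}                                          # hostname -> fingerprint

def check_pin(host, cert_der):
    """Pin on first sight, then compare fingerprints on every later visit."""
    fp = hashlib.sha256(cert_der).hexdigest()
    if host not in pins:
        pins[host] = fp                            # first contact: trust & pin
        return "pinned"
    return "ok" if pins[host] == fp else "MISMATCH"

print(check_pin("mail.example.com", b"real-cert-der"))     # pinned
print(check_pin("mail.example.com", b"real-cert-der"))     # ok
print(check_pin("mail.example.com", b"forged-cert-der"))   # MISMATCH
```

Note that if the MITM is already in place on the very first connection, the forged cert gets pinned instead, which is the unsolved part.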
"I ask myself why this sorry and shambolic affair did not happen ten years ago..."
You can never be sure this didn't already happen. If not for Chrome checking the Google certificate, the current case might have gone undetected as well.
As you noted, we need a clearer UI to address this issue. With browsers you see only that the browser trusts the connection, but not whom it trusts. When you install a signed application on Windows, it shows you the fact that it is signed and the company name (I still need to verify whether the company name comes from the certificate or from the application resource). It does not show the root of the certificate. Just doing that would make it much harder for the attacker to use rogue certs.
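The UI idea can be sketched in a few lines (the chain data below is entirely made up; a real browser would pull subject and issuer names from the TLS handshake's certificate chain):

```python
# Show *who* is being trusted, not just a padlock: name the root CA that
# ultimately vouches for the connection.

chain = [
    {"subject": "google.com",         "issuer": "Rogue Intermediate"},
    {"subject": "Rogue Intermediate", "issuer": "DigiNotar Root CA"},
    {"subject": "DigiNotar Root CA",  "issuer": "DigiNotar Root CA"},  # self-signed root
]

def trust_banner(chain):
    """One-line banner naming the leaf site and the root that vouches for it."""
    leaf = chain[0]["subject"]
    root = chain[-1]["subject"]
    return 'Secure connection to "%s" vouched for by "%s"' % (leaf, root)

print(trust_banner(chain))
```

A Gmail user seeing "vouched for by DigiNotar Root CA" instead of a familiar CA name would at least have a fighting chance of noticing the rogue cert.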
"Other distributed trust or reputational systems look good on paper but they still suffer from "first contact" MITM and other issues."
I think the first contact problem can't be solved at all. The best you can have is some difficult-to-attack side channel to establish the trust. But then you have to trust that side channel.
On the other hand, it is equally impossible for a state to establish complete control over all channels.
What most of us need to worry about is on a much smaller scale, like using hotel Internet access in China for your company mail. And that can be improved with a better UI. The average user must be able to immediately see if something is wrong, even when the browser thinks everything is OK, and not behind two or more clicks. An improved standard UI would also help with educating users, at least some of them. Hey, ever-changing company passwords taught something too, even if it was the wrong lesson, i.e. that all passwords have a timeout (Guardian vs. WikiLeaks).
Of course, if someone is already a target of the state they live in and is closely watched, they have a real problem on first contact. If the state is the US, not even checking the certificate will help, as the FBI will just use an NSL to get the private key from Google etc.
I wonder how many server keys have already been stolen along with all those credit card databases.
I told you in the past that the Iranian government is controlled by groups of gangs and mobs that can hire your security engineers who work in high-tech security companies. They sell heroin and drugs and use this money for buying any information and tools; they do not sell oil for such stuff. We are in an economic crisis, so all companies are willing to sell information to the government of Iran for more money... I am an Iranian, so believe me.
There's a BBC article that summarizes the MiTM attack, although it is still a little confusing:
The confusing part:
At the same time, it was noticed that a sizeable portion of the Dutch company's certificates were mysteriously going to users in Iran.
By August, 76.5% of DigiNotar validations were in the Netherlands. 18.7% were in Iran and 4.8% elsewhere in the world, according to security firm Trend Micro.
Shouldn't they be saying 'a sizeable portion of validations by means of the certificate(s) were occurring in Iran'?
Or am I missing something?
While much online debate has centred around the role of the Iranian authorities, there is no firm evidence to support such a theory.
Otherwise the article has a good explanation of how the Iranian gov't would be able to execute the MITM attack: all Internet traffic to and from Iran is funnelled through an Iranian gov't proxy server. However, in such a case wouldn't the requirement be for only one certificate for each target site (Google, Yahoo, Microsoft, etc.), since that one certificate would work for all clients in Iranian cyberspace accessing that site?
Here is the preliminary report of the security audit at Diginotar (by IT security firm Fox-IT). I find Annex 5.3 rather puzzling: if I were a professional Iranian hacker I would not leave such a puerile message on the hacked server.
This weekend my 7yo daughter left a plastic turd on my pillow with a note 'I did this. Greetings, Teddy Bear'. :D. The message on the server reminds me of that note: it's like someone is trying to put the blame on someone else.
The report that @Bas D pointed to indicates that the attacks have been going on for more than three weeks before someone raised the issue in public - on page 8 it says "On August 4th the number of request rose quickly until the certificate was revoked on August 29th at 19:09. Around 300.000 unique requesting IPs to google.com have been identified. Of these IPs >99% originated from Iran, as illustrated in figure 1". I find this scary and also somewhat surprising, as any Chrome 13 or newer would have flagged the problem. But then Iranian users may not be able to get modern browsers, in part because of U.S. export restrictions (how ironic).
@clueless: what you're missing is that BBC is a news organization. News organizations often get technical stuff wrong. You see this when they talk about something you know about.
Good news organizations have staff that is knowledgeable about economics and the stock market. Anything else, you get a layman's understanding at best (if the layman had to explain things he doesn't know by a deadline)
The story goes on:
According to (see update at bottom)
the hacker claims to have access to four more CAs, which he initially didn't name. He also claims to be behind the Comodo hack and the break-in at the StartCom CA (Israel).
He also claims to have access to GlobalSign (not clear if that is one of the four?)
Yes, but you never know; maybe the Beeb has got it right and I don't understand.
@ all the greybeards out there
Is it possible that this is a false flag operation?
Why is the IP address displayed for the Yahoo server (i have Flagfox on my browser) coming up as if it is based in Iran?!?!?! Does this have to do with the recent cert hacks?!?!? Anyone else getting this??
Hostname news.yahoo.com ISP Yahoo! Europe
Country Iran, Islamic Republic of Country Code IR (IRN)
Region Unknown Local time* 08 Sep 2011 00:02
City Unknown Latitude 32
IP Address 126.96.36.199 Longitude 53
I told you before that this is not a hack. Someone, or groups of people, working in these security companies are working for the government of Iran to make money. Bas D (September 6, 2011 9:49 AM) also found that it's like someone is trying to put the blame on someone else.
Please look out for those people who work in these companies and also work for the government of Iran.
DigiNotar and Comodo belong to the government of Iran.
Hi, as I am an Iranian, I sent three emails to this hacker's email address (email@example.com) and he replied to me with some bad and dirty words. Only Iranian boys say such bad and dirty words, so this is a man and not a woman. But the email headers show that he is one of the Iranians who are in Russia, maybe for education, and I am sure that he has hired some Russian hackers to do this. The link below is my email with full headers.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.