Schneier on Security
A blog covering security and security technology.
February 3, 2012
VeriSign Hacked, Successfully and Repeatedly, in 2010
Reuters discovered the information:
The VeriSign attacks were revealed in a quarterly U.S. Securities and Exchange Commission filing in October that followed new guidelines on reporting security breaches to investors. It was the most striking disclosure to emerge in a review by Reuters of more than 2,000 documents mentioning breach risks since the SEC guidance was published.
The company, unsurprisingly, is saying nothing.
VeriSign declined multiple interview requests, and senior employees said privately that they had not been given any more details than were in the filing. One said it was impossible to tell if the breach was the result of a concerted effort by a national power, though that was a possibility. "It's an ugly, slim sliver of facts. It's not enough," he said.
The problem for all of us, naturally, is if the certificate system was hacked, allowing the bad guys to forge certificates. (This has, of course, happened before.)
Are we finally ready to accept that the certificate system is completely broken?
Posted on February 3, 2012 at 10:49 AM
I'm unclear that the certificate system ever worked.
What can we replace it with?
Monkeysphere is trying to use PGP/GPG to do it. I also think WoT is the only existing reasonable model.
Pointing out errors and mishaps and then sitting back, smirking about it, doesn't do diddly squat in solving the problem.
Isn't the PGP web of trust itself a mockery, given that keysigning parties produce NxN cross-signatures (nobody trusting anyone but themselves)?
My company (a very large one w/ over 100,000 employees world-wide) uses certificates and/or RSA SecurID dongles to establish VPN access to company networks. Gee, both of these systems have been compromised. What do I think? I think our security department has bought into the entire security theater concept, thinking that by employing these methods, we have a secure system. To my mind, they just make it more difficult for us to do our jobs, and place an enormous burden on our hardware, software, and networks. Yes, security is important, but these days one quickly reaches a point of seriously diminishing returns for the investment of $$, time, and human resources these methods require.
Suggesting that people should think about and discuss alternatives is a lot more helpful than pointless and snarky commentary.
In the UK, I think a public company, with all the oversight and data access that our legal system provides, could manage UK-centric certificates.
Maybe a WoT starting with the largest, safest of the above?
Distributed trust solves some of the problems - my browser trusts all major certs, including those issued by corrupt governments to anyone who wants one.
But ultimately you are still going to have to trust some major certs (MSFT/Apple, Google, EFF, Amazon, your bank) in the same way you trust VeriSign. If these are hacked, how long does it take everyone in your web of trust to notice, and once they fix it, how long to flush the bad trust?
Certificate authorities are trying to do two jobs:
(1) Prevent MITM attacks, (verify that you are connecting to the correct server for the site)
(2) Identify the owner of the server (verify that the server belongs to the correct company)
Of course, they do a bad job at (1) and fail completely at (2).
The web of trust also tries to solve (2), relying on users' ability to check each other's identities. I think this model doesn't scale well enough, and I can't even think of how non-technical users would enter the WoT.
But we don't really need to solve (2) - solving (1) would be good enough for most websites. And there's some nice approaches for that: http://convergence.io/ or http://perspectives-project.org/
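The "network perspectives" idea behind Convergence and Perspectives can be sketched in a few lines of Python. This is purely illustrative (the function names are mine, not any real Convergence API): a client or notary fetches whatever certificate a server presents and reduces it to a fingerprint that independent vantage points can compare.

```python
import hashlib
import socket
import ssl

def fingerprint(der_bytes):
    """SHA-256 fingerprint of a DER-encoded certificate."""
    return hashlib.sha256(der_bytes).hexdigest()

def observe_cert(host, port=443):
    """Fetch the certificate a server presents, the way a notary would.

    Verification is deliberately disabled: we only want the raw bytes,
    so that independent vantage points can compare fingerprints and
    notice a MITM presenting a different (even validly signed) cert.
    """
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return fingerprint(tls.getpeercert(binary_form=True))
```

If several notaries in different networks all report the same fingerprint you see locally, a local MITM becomes much harder to pull off silently.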
To those who wonder what to replace digital certificates with: start with DANE and DNSSEC. http://tools.ietf.org/wg/dane
No, you can't know that the entity you're talking to is "really" who they are supposed to be. But you can be sure they have control of the DNS that's getting you to wherever you're going anyway. And once we have this in place, we have the hope of building on it rather than the broken certificate system.
I haven't heard of convergence before, and the site is a little thin on tech info. Do others have thoughts about it? Bruce, do you take it seriously, if you know about it?
That is true, but then that suggestion shouldn't be a leading question, should it? Perhaps something along the lines of "Is it proven, and should we accept, that the certificate system is broken?"
One would think that would be a better approach, and maybe it wouldn't spark snarky and pointless commentary...
What about solutions like PAKE (J-PAKE and SRP)?
These solutions seem to offer a move from the CA PKI type of infrastructure. The only problem that I think PAKE still has is establishing the shared secret.
No, I am not ready to admit that it is *completely* broken. Every transaction on the internet has some risk. SSL, against a CA-signed certificate, significantly reduces that risk. There has been a small increase in risk now for that scenario. Your tolerance for risk, and the consequences of a security failure, should be weighed along with the risk itself. So, for my Facebook account, assuming I initiate the connection to Facebook, there is a very, very small increase in risk here and the consequences of failure are fairly minor. If I were an Iranian anti-government activist, the increase in risk would be higher (I would have a specific adversary who is well placed to MITM and won't hesitate to do so) and the consequences of failure would be severe.
The idea that a system is completely broken is as misleading as the idea that one could be completely secure. SSL was already over-rated.
I wrote a paper in French in December about different complements/alternatives to the PKIX system (TOFU, Convergence, CATA, Sovereign Keys and DANE)
It is available online here. (PDF)
So there you go. If you want to disrupt security operations in an airport terminal simply hand TSA a bag and walk away when they're not looking.
Even the NY/NJ Port Authority is getting in on the rhetoric: "Unfortunately, these glaring TSA security failures at our nation's busiest airports continue to undermine security, which imperils and needlessly inconveniences the traveling public," Paul Nunziato, president of the Port Authority Police Benevolent Association, said in a statement to CNN. "It is a shame these security breaches have become daily events."
I wonder how many breaches occur that are NOT disclosed to the public.
Symantec bought the certificate business but has kept the VeriSign brand name. Not sure if that happened before or after the breach. See http://www.verisign.com/
VeriSign kept the DNS and other business. They have now issued a press release on the 2010 breaches although it doesn't add much to the SEC filing from last year.
How is it that Symantec's stock is up today? The implications of a breach of this magnitude should weigh on their stock price.
@NobodySpecial: The easy solution:
Allow WoT users to sign "flags" when they distrust someone, and everybody who trust the person who issued the flag will get alerted.
They can then too choose to flag the person/keypair in question after verifying that it's likely that a breach has occurred.
The clients should check for these flags at minimum once a day.
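That flagging scheme can be sketched as a toy data structure. This is an illustrative Python model of the idea described above (all names are invented; this is not real PGP/GPG tooling):

```python
class WebOfTrust:
    """Toy model of distrust 'flags' propagating through a web of trust.

    A user is alerted about a key as soon as anyone they directly
    trust has flagged it -- the daily-check behavior described above
    would simply poll alerts_for() on a schedule.
    """

    def __init__(self):
        self.trusts = {}  # user -> set of users they trust
        self.flags = {}   # flagged key -> set of users who flagged it

    def trust(self, user, other):
        self.trusts.setdefault(user, set()).add(other)

    def flag(self, user, suspect_key):
        self.flags.setdefault(suspect_key, set()).add(user)

    def alerts_for(self, user):
        """Keys flagged by at least one user this user directly trusts."""
        trusted = self.trusts.get(user, set())
        return {key for key, flaggers in self.flags.items()
                if flaggers & trusted}
```

A real system would also need signed flags and revocation of flags themselves, but the propagation rule is this simple at its core.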
The breach occurred before Symantec bought the PKI and SSL activities of VeriSign.
If the DNS activities of VeriSign have been breached, then DNSSEC-based solutions are not safe anymore. And there's no replacement.
Alternatives to CAs: Convergence (http://convergence.io/). It's just getting started but already has significant backing.
Great comments. One thing I haven't seen mentioned is commercial SSL MITM interception devices, such as those from Netronome, which governments use when surveilling ALL internet communications.
When certificate authorities get hacked, and certificates issued, your client may automatically trust MITM SSL connections from these devices.
Convergence is designed to prevent this from occurring; however, Convergence does not currently have an algorithm or system that allows users to make intelligent decisions, i.e. to choose from geographically dispersed notaries.
As for telling users the region in which the notaries reside, and allowing users to choose notaries on the basis of that information, the work is already underway. It just isn't shown in the GUI, but as of December the code is already there. Another smart thing is letting the user choose a minimum and a maximum number of notaries to query each time, chosen randomly from the list of enabled ones. This too has already been patched into the development code.
@bj: The filing that reported the breach was made in October. I suspect any reaction by Wall Street hit the stock price long ago.
I take your point; I should have given a more productive comment.
I think that the primary problem with the certificate system is an extremely high false positive rate. At times the majority of the certificates I've encountered have been flagged as suspicious, even when I was positive that they were correct.
And when I'm not certain, there's virtually no way to check. To someone largely uninitiated in the certificate system, a certificate is nearly incomprehensible; I have no idea of which signing authorities to trust and which ones not to trust (unless they make major news reports), and typically my browser will override any determinations I make about certification authorities on update.
Certificates started as nearly useless, continued into extreme user unfriendliness, and are now in a state where they're very user-friendly but nearly useless. A major hack of a major signing authority is, in my view, just a minor point on a fundamentally broken system.
Or, to restate my original post, I'm unclear that the certificate system ever worked.
I think the bigger issue is: have we come up with something better that can replace it yet? The problem with throwing out a system is that the mass perception is that the entire concept is flawed, not just the specific implementations. If it got scrapped tomorrow, and then in two years someone came up with a significantly better way of doing it, you'd have a lot of downward momentum to combat before people would accept it.
Patch the lifeboat as best you can, but don't stop rowing for land.
Three things could be done relatively easily (in a technical sense) to improve the current certificate system:
1. Compartmentalization. At present, the entire certificate system is only as strong as the least secure and/or most corrupt CA trusted by major browsers. CA trust needs to be restricted jurisdictionally, geographically and by industry. Certificates issued to, for example, an American bank using US servers need to be different and readily distinguishable from certificates issued, for example, to a sole proprietorship in Latvia using a Latvian server.
2. More government involvement. We don't have private driver's licenses or private banking licenses. SSL certificates should be no different for high-risk sites. Certificates for firms in high-risk industries should be issued directly by the applicable regulators as part of the appropriate business licenses. Users should be trained to expect, e.g., federally regulated US banks to have a certificate issued by the Federal Reserve.
3. Require high risk industries to provide their customers with client certificates.
I work for Symantec and wanted to clarify that Verisign, Inc. was compromised, not the Trust Services (SSL, User Authentication (VIP, PKI, FDS)) and other production systems acquired by Symantec. Symantec was NOT compromised by the corporate network security breach mentioned in the Verisign, Inc. quarterly filing.
@LPM: Convergence is NOT the solution. In fact, it's even worse than PKI if you ask me. Here are some of the issues I have with Convergence:
Because a connection with a notary is required for every SSL connection, you need a lot of notaries. This makes it very easy for a country to set up its own national notary and force its people to use it. How much trust would you put in a Chinese or Iranian notary?
Instead of having to connect to a notary for every SSL connection, the Convergence system uses a local cache which holds the key to a secure connection. What an ideal place to insert your malicious key!
Maintaining your own notary list requires knowledge most people don't have. Relying on your browser vendor for this makes Convergence just another PKI.
Unlike a CA signing server, notaries need to be connected to the internet. This makes them more vulnerable than a signing server.
Becoming a CA requires complying with a lot of rules and passing a lot of audits, while anyone can set up a notary. How do I know whom to trust? What makes a notary trustworthy? Because no money can be made by setting up a notary, no company will be interested. If you think this is a good thing, think about it again: it's companies that have the money to set up a large number of notaries and to secure them. Would you trust a notary set up by some ICT amateur who is just doing it for fun when you want to visit your bank's website?
When setting up an SSL connection to a webserver, the notaries need to set up connections as well to retrieve the SSL certificate. These extra connections put more load on the webserver.
With a PKI, you can issue SSL certificates to users for SSL client authentication. With Convergence, you can't.
Just some thoughts...
An addition to my previous post:
With a PKI, an attacker needs to attack both the CA and the DNS that serves the victim's website. With Convergence, an attacker only needs to attack the DNS. In other words, a PKI protects not only against man-in-the-middle attacks, but also against DNS spoofing. Convergence doesn't.
Convergence works by converging on an opinion. If you select multiple notaries around the world they should all point to the same cert. A "consensus" opinion is reached by the convergence client based on the responses from multiple notaries. A single "untrustworthy" notary is not a point of failure.
Maintaining a notary list is not a task left to the user. It is maintained by the client. The convergence system does need some additional work in the sense that users do need to figure out what notaries they trust. My previous comment spoke to this, that users need a way to choose (make intelligent decisions about which notaries to select) from geographically dispersed notaries for the system to work properly.
Convergence isn't perfect, but at least it has flexible trust. The Status Quo is that users cannot choose whom to trust at all without removing CA certs from the browser trusted certificate store and breaking half the internet.
The fact that anyone can run a notary is a positive feature not negative. A single notary cannot collude with himself to malicious ends, especially if your client is configured to bounce or proxy notary requests through a notary.
Basically, if the entire world (notaries configured) see a certain certificate, and the one you are seeing is not the same then you can be pretty sure that you may not be communicating with who you think you are communicating with. What can one notary do in this scenario?
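That consensus check is easy to sketch. Here's an illustrative Python version (the function name and the 75% threshold are my assumptions, not Convergence's actual algorithm):

```python
def consensus(observed_fp, notary_fps, threshold=0.75):
    """Decide whether to trust a certificate based on notary agreement.

    observed_fp: the certificate fingerprint the local client saw.
    notary_fps:  fingerprints reported by the queried notaries.
    threshold:   fraction of notaries that must agree with us.

    Because the decision is a vote, a single lying or compromised
    notary cannot by itself block or approve a connection.
    """
    if not notary_fps:
        return False  # no perspectives available: fail closed
    agreeing = sum(1 for fp in notary_fps if fp == observed_fp)
    return agreeing / len(notary_fps) >= threshold
```

So one bad notary among four honest ones changes nothing, whereas a local MITM that fools only your own network path is outvoted immediately.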
Besides, the whole point is that you can have "fluid trust relationships." If you no longer trust a notary, you can remove it without breaking the web.
Last but not least, Convergence notaries validate your communication using network perspective by default, but they can be extended to use whatever methods the notary operator would like. This might include DNSSEC, BGP data, "SSL observatory" results, or even CA validation.
The reporting on this story has been dreadful. The actual news is one paragraph in Verisign's 10-Q that was filed in October but nobody seems to have read until this week.
I wrote a blog entry that quotes the actual paragraph. Read it and see what you think really happened, as opposed to what third-hand news reports are guessing.
The certificate system was broken from the beginning. There were just too many parties that could issue certificates and all had the same value. When you have enough parties in some venture, there will always be a few corrupt ones and a few careless ones. For that reason, nobody that really understands security ever trusted certificates without making sure they were from a CA they trusted directly.
That is also one reason many people have been saying for years that with regard to most customers, the security value of an "official" certificate is about the same as that of a self-signed one and the latter may even have higher value as it does not inspire false trust.
As Bruce and others have said, VeriSign have "kept schtum" about what the attacks were, when they occurred, and what sort of data was exfiltrated.
Which raises an important question: if not forced by the SEC requirements to put something in their 10-Q filing, would VeriSign ever have "fessed up"?
I think their lack of "coming clean" speaks volumes and raises a secondary, perhaps even more important question, which is: just how long have VeriSign been hacked? Conceivably it is from day one, in which case we could say that absolutely everything they have ever touched is irrevocably tainted... And indeed, from a security perspective, that is the attitude we should take until the management choose to make "full disclosure". Unfortunately, it now also includes Symantec products. Which raises the question of "due diligence", because if VeriSign did not make full disclosure (i.e. "knowingly" withheld material facts) to Symantec, then Symantec shareholders may well have significant legal recourse against VeriSign, its executive and non-executive directors, and those involved in advising it on the divestment of its CA business...
Which is why I said I had déjà vu on reading about it, because we have been here before, which is just one of the reasons for the SEC filing requirements...
However, that major point aside, we find ourselves yet again talking about crypto certificates and how they have never worked, and very probably cannot ever work, in the way they are mainly currently used.
You only have to look at the T&Cs of the CAs from day one, where they specifically exempt themselves from liability for the use of the certificate, to realise that the whole CA business model was a complete crock, or confidence trick, from the security perspective.
The fact that some twenty years down the road we still have the CA system and no viable replacement should raise all sorts of alarm bells in people's heads.
We have looked at the hierarchical model and the web model for providing anonymous trust, and both have significant failings. When we look closely at all the suggestions for ways to replace the current hierarchical model used by the CAs, they are all found wanting in some way.
That is, we have a very, very fundamental problem: all the models we use for commerce and other interaction on the Internet have an implicit assumption that an anonymous trust process is possible. However, we don't currently have one, nor does it look like we know how to accomplish it.
If we look at real-life human models, we realise that as individuals we use a "reputational model" similar to a web, but we have hierarchical models forced on us by bureaucrats, as a legacy of the faux "God Head" of kings and priests, where power is accumulated at the top or center, where it can most easily be abused or misappropriated.
We need new models, but we seem to be stuck in a rut. As it turns out, the reason for this is that it's so ingrained in our way of thinking that we have not actually tried to change. And if you hunt around, you will find that really the only text on the fundamentals of this issue, in a readable form and as a coherent whole, is Bruce's new book.
I don't like your first two ideas. The feds get hacked too. Also, you don't need to be smart to be a fake fed. But I do like #3, up to a point. The problem lies in securely getting the cert to its intended recipient. Passing it online is a bad idea, but snail mail is really easy to spoof and steal. That's why I didn't tell my bank to start issuing certs last time they asked me to update my "secret questions." Also, I was afraid they might try to use a Windows installer if they took me up on it. That would really screw with Linux and Mac users until we convinced them that it's better to use the utilities built into our operating systems. And even if they made the naked certificates available on the disk right away, in a week or two ignorant Windows users would start installing fake disks.
That's why the DoD is its own CA.
It occurs to me that a "layered" authentication system would be a much more secure approach than any individual technology.
What would such a system look like?
It should be a highly configurable and extensible framework, on both the client and the server side.
A server operator should be able to deploy SSL certs from two or more authorities for the same domain. This would reduce the impact of a CA compromise. Of course, a mechanism needs to exist to inform the client of the number of certs the server provides and who the CAs are.
WoT and other technologies could be implemented as well for additional confidence.
Just brainstorming but if there were a "4th" party authority, it would verify that a particular domain was contracted with one or more CA's, and no others, so when one of the other 200+ CA's gets hacked and issues a cert for that domain, it is flagged invalid. This would be a good place to get additional details about a domain such as "This domain provides 3 certs, one each from CA1,CA2 and CA3".
What's the point of doing this?
Well, good security involves layers of protection. Improving security involves reducing the impact of a failure in any particular layer. It seems like the Certificate system is a single layer, fixing it means adding more layers, but doing so in a way that doesn't add complexity (easier said than done) but does add flexibility.
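The hypothetical "4th party" lookup described above might look something like this in Python. All the names and data shapes here are invented for illustration; no such registry exists today.

```python
def unauthorized_issuers(policy, domain, issuers_seen):
    """Flag certs issued by CAs a domain never contracted with.

    policy:       the hypothetical 4th-party registry, mapping each
                  domain to the set of CAs it declares it uses,
                  e.g. {"bank.example": {"CA1", "CA2", "CA3"}}.
    issuers_seen: issuer names from the certs presented in a handshake.

    Returns the issuers that should be treated as invalid for this
    domain, even if the browser otherwise trusts those CAs.
    """
    allowed = policy.get(domain)
    if allowed is None:
        return []  # no registration: fall back to ordinary validation
    return [ca for ca in issuers_seen if ca not in allowed]
```

With such a check, a cert for your bank minted by a hacked CA on the other side of the world would be rejected even though that CA is in the browser's trust store.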
The main problem with the https certificate system is that it was designed under the wrong assumptions, to protect against an extremely improbable attack profile (namely, attacks directed specifically against first-time visits to any given site) apparently without regard for a much more likely attack profile (attacks against subsequent visits to a site with which the user has an existing relationship), against which there are much better defenses that are also easier to implement and cheaper to maintain.
If you compare the way https uses certificates with the way ssh uses them, you'll see what I mean. If an ssh server suddenly presents the user with an unexpected certificate, the user gets a scary warning message and his client refuses to connect, no matter where the new certificate comes from or who signed it. Thus, there is no need for the user to trust Verisign.
Granted, the ssh system of handling certificates could not be just directly applied to https without modification, because the usage profile is different, and so some additional provisions would be needed. Just for example, a mechanism would be needed whereby a site could keep the user's browser informed when a new certificate is installed. The most obvious solution to that involves using the old cert to sign the new one, retaining both, and allowing browsers that haven't seen the new cert before to request that it be verified. The new cert could be installed significantly before the old one expires -- say, with a 10% duration overlap -- if you want to be paranoid about expired certs. The amount of overlap could even be increased if the new cert is signed by a different CA than the old one. There are several other issues as well, but all of them have reasonable solutions -- far more reasonable than the recently introduced "extended verification turns the site name green" nonsense (which does not in any way mitigate the greatest risk in the current system, namely, compromised CAs).
The existing cert-signing infrastructure could be retained, as a supplemental protection, against the less likely attack profile. First-time visitors to any given site would be as protected as they are now. Visitors who have come to the site before (with the same browser on the same computer), would be significantly better protected.
The problem is, retrofitting it now would probably require that https be replaced with a new protocol, probably on a new well-known port number.
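The ssh-style handling described above amounts to a small pin cache, analogous to ssh's known_hosts file. Here's an illustrative toy in Python (the class and its behavior are my sketch, not any real browser mechanism):

```python
import hashlib
import json
import os

class PinStore:
    """Minimal TOFU (trust-on-first-use) certificate pin cache.

    On first contact with a host, its certificate fingerprint is
    pinned. Any later connection presenting a different certificate
    is reported as a mismatch, no matter who signed the new cert.
    """

    def __init__(self, path):
        self.path = path
        self.pins = {}
        if os.path.exists(path):
            with open(path) as f:
                self.pins = json.load(f)

    def check(self, host, cert_der):
        """Return 'first-use', 'match', or 'MISMATCH' for a host's cert."""
        fp = hashlib.sha256(cert_der).hexdigest()
        known = self.pins.get(host)
        if known is None:
            self.pins[host] = fp  # pin on first contact
            self._save()
            return "first-use"
        return "match" if known == fp else "MISMATCH"

    def _save(self):
        with open(self.path, "w") as f:
            json.dump(self.pins, f)
```

The rollover mechanism described above would extend check() with one extra rule: accept a new fingerprint if the new certificate is signed by the currently pinned one.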
It is time for a new paradigm.
We must begin to think about protecting data itself, independently of the network(s) on which our data resides.
True security across today's cloud, servers, GIG, whatever you want to call it demands that we protect the content, not the network. We still face DoS and Viral issues, but self-protecting data is the way forward.
In related news: TrustWave admits to having sold *.* delegation certificates to a vendor of SSL MITM snooping appliances.
Seems like it's really time to replace the CA concept...
Let me ruffle some feathers and make a claim that *nothing* is broken. Life is not perfect and one can die in a freak accident anytime. As for the internet, most of us can transact .. with little collateral damage. Banks just need to keep the amount of money that escapes to a controllable amount, and everybody is happy.
If the transaction has serious consequences for you, then you need to take more care than just worrying about the CAs. Your interface is the biggest vulnerability, and you may want to start by ditching your "smart" phone .. and then your PC. This may mean driving to town to do your banking .. but watch out for cars too. And meteors.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.