Schneier on Security
A blog covering security and security technology.
December 16, 2011
The EFF's Sovereign Key Proposal
Posted on December 16, 2011 at 7:04 AM
What's your take on it, Bruce? Surely the CAs have to be replaced with something - do you think Sovereign Keys would be the way to go?
They had me until they mentioned Tor.
SSL/TLS is fine. Can't protect ignorant people from their own ignorance. If they won't take the time to read and understand the warnings from certificate mismatches, they will someday suffer the consequences.
Markus, here's my take:
CAs don't need to be replaced. People who care about security need to learn to examine their CAs and choose which they really trust. Right now, most people have no idea how many dozens of CAs their computers and phones automatically trust, or who the organizations behind them are. Userland software, in general, does need to be improved for access to CAs.
I'm afraid that it's too much to ask for end users to investigate their trusted CAs by themselves. And in the case of the recent CA breaches, the compromised CAs were used by some governments, so their citizens would have had to trust those CAs anyway to do their taxes and things like that.
I'm with you in the case of Tor. That requirement is probably too much.
The blame-the-victim approach to certificate validation reminds me of the Hitchhiker's Guide bit about planning offices and demolition notices. You know: "The lights had gone?" "So had the stairs!" and "For pity's sake, Mankind, the demolition orders for the Earth have been on file at Alpha Centauri for over a century now!"
User interface is always a factor for security, since the users are a weak link. A good interface can help mitigate that.
Tor is an EFF pet project of yore, and I find it interesting that they seem unwilling to admit its general faults even today. Blacklists of Tor endpoints seem to be de rigueur for anyone combating spam or other abuses. The architecture even encourages that.
IMHO, CAs need to be eliminated altogether. Unfortunately this proposal doesn't quite do that. It also depends on a handful of centralized servers to maintain the "append only" list, when they might take a hint from bitcoin and use a more distributed P2P model for maintaining the list.
I avoid getting certificates because I don't trust ANY central authority to not abuse the power or get forced into abusing the power. Just look at the recent domain shutdowns for evidence.
I wonder if something like Bitcoin could be constructed to replace CAs. I can't quite figure out how it would work, but I'm imagining a distributed network built around proof-of-work and consensus. A rogue actor couldn't subvert the network without owning at least half of its computing power....
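The proof-of-work idea the commenter is imagining can be sketched very roughly like this (this is not Bitcoin's actual protocol; the claim format and difficulty are invented for illustration). Registering a name claim costs real computation, so a rogue actor must out-compute the rest of the network to subvert it:

```python
import hashlib

def proof_of_work(claim: bytes, difficulty_bits: int = 16) -> int:
    """Find a nonce so SHA-256(claim || nonce) has `difficulty_bits`
    leading zero bits.  Higher difficulty means more work per claim."""
    target = 1 << (256 - difficulty_bits)
    nonce = 0
    while True:
        digest = hashlib.sha256(claim + nonce.to_bytes(8, "big")).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce
        nonce += 1

def verify(claim: bytes, nonce: int, difficulty_bits: int = 16) -> bool:
    """Checking a claim takes one hash, regardless of how hard it was to mine."""
    digest = hashlib.sha256(claim + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))

# Made-up claim binding a hostname to a key fingerprint:
claim = b"example.com -> key fingerprint ab:cd:ef"
nonce = proof_of_work(claim)
assert verify(claim, nonce)
```

Finding the nonce is expensive; verifying it is one hash. Consensus on the longest chain of such claims is the part this sketch leaves out, and it is the hard part.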
mikeash, I think that's the reason they mention Tor - the hashed "hostnames" used for .onion addresses give similar proof of ownership as your Bitcoin idea.
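For reference, v2 .onion names are self-certifying roughly like this: the hostname is derived from a hash of the service's public key, so whoever holds the matching private key can prove ownership of the name without any CA. A sketch (the key bytes are a made-up placeholder, not a real DER key):

```python
import base64
import hashlib

def onion_address(public_key_der: bytes) -> str:
    """Derive a v2 .onion-style hostname: the first 80 bits of
    SHA-1(public key), base32-encoded.  The name *is* a commitment
    to the key, so no certificate authority is involved."""
    digest = hashlib.sha1(public_key_der).digest()
    return base64.b32encode(digest[:10]).decode("ascii").lower() + ".onion"

# Made-up stand-in for a DER-encoded public key:
fake_key = b"-----fake public key bytes-----"
print(onion_address(fake_key))
```

The trade-off is obvious from the code: the hostname is unmemorable gibberish, which is exactly the property ordinary web addresses can't give up.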
Distributed trust has been an "ideal / problem", in that a reliable secure system is the ideal, but the problem is nobody knows how to do it...
I think we should actually sit back and have a good long think about it. Phil Zimmermann tried to get around problematic hierarchical trust models and all their problems by trying to establish a web of trust.
The problem with this is that the internet is so large, and sites come and go so frequently, that the average user will never use the vast majority of sites. Likewise, they will not use more than a small handful of sites more than a few times, and only one or two regularly.
Thus whilst they will know people who trust the sites they use regularly (Facebook, Google, etc.), the chance that they know someone who trusts a site they are only going to use a couple of times is slim. So the basic idea behind the web of trust does not work too well, because you are back to the issue of a site unknown to you and your acquaintances; the site has not established any trust with those you trust.
But there are other problems: web browser designers appear to go out of their way to take the management of CA certs away from users and hide it in ways unfathomable to users. Because of this, they then tend to include just about anybody who tells them they are a CA.
The result is that users cannot easily manage certs, and thus don't want to learn how, so "click through" or "no go" are the only two options available to them.
Sovereign keys are not going to solve either of these issues in a way that is understandable or amenable to ordinary users.
Then there is, as others have noted, Tor...
I'd rather trust DNSSEC currently.
>Tor is an EFF pet project of yore, and I find it interesting that they seem unwilling to admit its general faults even today. Blacklists of Tor endpoints seem to be de rigeur for anyone combatting spam or other abuses. The architecture even encourages that.
Actually, I'd say that's a reflection of the success of Tor. Naturally the Tor project's goal of "true" anonymity conflicts greatly with the standard practices of abuse control on the web.
I know a bit about CAs, because I worked for one briefly. It would be virtually impossible for an "ordinary user" to investigate these companies adequately.
The one I was at had done lots of things right, and plenty wrong. For example, just consider physical security. The servers were in a small room, behind a steel gate - which was left open most of the time. And the back wall of the room was ordinary drywall, backing on a publicly-accessible hall. The company is small, and the office is empty at night.
How could anyone investigate this? Or know that the checks you did an hour ago are no longer meaningful, since burglars just broke in and compromised the servers?
@Clive - another problem with web of trust is that sometimes you have to use sites you don't trust.
So if a driver-download or file-upload site requires you to trust it before you can download a file, it will end up trusted by lots of users, who will then automatically trust it - unless every user of these sites is careful to clear the trust as soon as they leave.
I've been using Convergence by Moxie Marlinspike instead of CAs for a while now. It's an excellent answer to SSL's problems, but it doesn't work for a lot of bank sites, which buy dozens of different certs. I liked his Black Hat 2011 talk, in which he tracked down one of the people who invented SSL, who said they just threw in the ad hoc certificate authority idea because at the time there were maybe a dozen sites that needed it. Now we are stuck with a very outdated and insecure system.
My take on CAs is to do it using some form of web of trust, while not entirely skipping CAs. I've mentioned it a few times before too.
It basically goes like this: sites can have many signatures, unlike today where they can have only one SSL cert, and their public keys can be signed by just about anybody. That means a site can get a signature from an ordinary CA, just as it would get a cert today, or it can get several signatures from both tech people and others.
To increase security, just having a few signatures from unknown people would not make the browser behave any differently, visually, from plain HTTP. The user could choose to look at the details of the key if he wishes. The key of a site is remembered; if it changes too quickly, the user is notified. This can be combined with Convergence.
If the site has a signature from a CA, the browser behaves as it does for SSL certs today.
The browser also has several blacklist subscriptions - if a CA has recently been breached, it can be blacklisted. If the browser fails to access *any* of these subscriptions, the user is alerted (an attack is likely). Each browser should subscribe to about 20 blacklist servers, out of a total number in the thousands.
If the only organizations that have signed a particular site key are CAs that have recently been breached and haven't had their blacklist status cleared, the user is alerted (possible attack). If there are signatures both from CAs with no known breach and from breached CAs, the user is notified, but there's no big alert (the other CAs might just be breached too if this is a targeted attack, but there's no evidence yet). If many breached CAs and few not-known-to-be-breached CAs have signed the key, we can assume a targeted attack and alert.
Particular domains could also get blacklisted, in case a bank were to get its certificate hijacked.
So by default people have the ordinary CAs in the list and many blacklist subscriptions. They could also have organizations like the EFF (which could maintain blacklists), and could add their "tech friends". It would be easy to remove CAs. You could also just remove trust from a CA without completely removing it (you'd still see that it has signed the site keys, but it wouldn't make the browser trust the connection).
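The decision logic the comment above describes could be sketched roughly like this (all the state names and thresholds are my invention for illustration, not part of any real browser):

```python
def classify(site_signers: set, known_cas: set, breached_cas: set) -> str:
    """Rough sketch of the multi-signature trust decision described
    above.  `site_signers` is the set of parties that signed the
    site's key; `breached_cas` comes from the blacklist subscriptions."""
    ca_signers = site_signers & known_cas
    if not ca_signers:
        return "untrusted"   # only unknown signers: behave like plain HTTP
    clean = ca_signers - breached_cas
    if not clean:
        return "alert"       # only breached CAs vouch for the key
    bad = ca_signers & breached_cas
    if bad:
        # Mixed signatures: many-breached / few-clean suggests a
        # targeted attack; otherwise just notify without a big alert.
        return "alert" if len(bad) > len(clean) else "notify"
    return "trusted"         # normal SSL-style behavior

assert classify({"alice", "CA1"}, {"CA1", "CA2"}, set()) == "trusted"
assert classify({"CA1"}, {"CA1"}, {"CA1"}) == "alert"
assert classify({"CA1", "CA2"}, {"CA1", "CA2"}, {"CA1"}) == "notify"
```

Even this toy version shows where the hard questions live: who maintains `breached_cas`, and what the user is supposed to do with "notify".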
The biggest issue - and this is not specific to my solution, but quite universal - is that you need several organizations to verify your identity as a site owner. That means getting many signatures, which would essentially cost as much as several SSL certs instead of just one.
You can NOT get around this if you wish to avoid having to trust only a single company for the connection to each site.
Sovereign Keys is essentially defining a new style of CA. The SK notary must provide the site owner with a proof which is indistinguishable from a certificate in the way it's used: it's provided to the connecting user who must (a) validate it and (b) decide whether or not they trust that notary. The set of notaries is open, collusion can't be detected before the fact, and collusion detection requires the user to take some independent action.
While key continuity approaches like Convergence, Perspectives, or Certificate Patrol have some issues, they're much more likely to be successful. They require no explicit user action to maintain trust, and they convey to the user exactly what he wants to know: Is this the same site I used yesterday?
KCM approaches to trust will require that current practices of using large, constantly changing key sets and exposing internal resource URIs change, but those are dubious practices at best anyway.
>I'd rather trust DNSSEC currently.
Whilst using stuff like Certificate Patrol and Convergence - as I'm sure plenty of other folks on this blog do - I'd be very happy with DNSSEC being taken forward too. On the other hand, I'm a bit worried what SOPA will do to it. Last thing we probably need is the US House Judiciary Committee having a say in the (re)design of stuff like DNS and DNSSEC.
DNSSEC is useless for sites like WikiLeaks. I also believe Convergence uses it as one of 4-5 verification tests.
I'm no fan of web of trust like suggestions. They all require too much maintenance on the side of the user. Heck, anything the user has to do / can do wrong is too much.
I believe the CA approach can be rescued if we reduce it to the bare minimum:
What does owning a signed certificate for abc.tld mean? That you're the owner of abc.tld.
Following this semantic, registering a domain should naturally mean associating a "master key" (which authenticates you to the registry) with the domain. The registry itself can simply publish (and certify) which key owns the domain. The owner of the domain can then be a CA for its own TLS keys. The domain key would change only very infrequently (~5 yrs). Thus managing the top-level CA would be very easy: issuing new domains/certificates requires no verification at all (only that the domain is really new), and since issuing a new key for a known domain is basically identical to changing the owner of the domain, this should require some offline legal interaction (which would be required anyway) to authenticate the action. The validity of these domain keys can also easily be certified by external institutions: keep a list of all domains, and if a domain is registered twice, raise an alarm or ignore the second registration (on that point this idea is identical to the EFF suggestion).
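The two-link chain of certification this describes can be sketched as follows. To stay self-contained, the "signature" here is a deliberately toy construction (a keyed hash); a real system would use an asymmetric signature such as Ed25519, and all the key values are invented placeholders:

```python
import hashlib

def toy_sign(secret: bytes, message: bytes) -> bytes:
    """Toy stand-in for a real asymmetric signature - only the
    chain-of-certification structure matters in this sketch."""
    return hashlib.sha256(secret + message).digest()

def toy_verify(secret: bytes, message: bytes, sig: bytes) -> bool:
    return toy_sign(secret, message) == sig

# Link 1: registering a domain binds a long-lived master key to it.
registry_secret = b"registry-secret"        # held by the TLD registry
domain_master = b"example.com-master-key"   # held by the domain owner
registration = toy_sign(registry_secret, b"example.com|" + domain_master)

# Link 2: the owner acts as a CA for its own short-lived TLS keys.
tls_key = b"example.com-tls-key-2011"
endorsement = toy_sign(domain_master, tls_key)

# A connecting client checks both links of the chain.
assert toy_verify(registry_secret, b"example.com|" + domain_master, registration)
assert toy_verify(domain_master, tls_key, endorsement)
```

The point of the structure: compromising a random CA no longer helps an attacker; they must compromise the specific registry (link 1) or the specific domain's master key (link 2).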
This approach would have several advantages over the current CA oligarchy:
- certificates would be as painless and cheap as registering a domain
- instead of TLS being as weak as the weakest of a few hundred CAs an attacker would now have to compromise a specific TLD registry and every predefined notary that certifies a given domain was not registered before.
- control over a TLD stays in its own country: an Iranian CA can only sign the Iranian TLD, and American three-letter organizations can only enforce certificates on American TLDs
I'd still like people to use distributed tools like Convergence - not as a central part of the infrastructure, but as a fail-safe to find breaches as fast as possible.
As far as I understand the EFF proposal, it is a decentralized and less formalized twin of the above that ignores the registries as the (in my opinion obvious) notaries of possession of domains. I don't really see why we should ignore their information and formal power over the DNS.
I can't figure out what happens if your key is compromised and the attacker publishes a new SK. Is your TLS now broken forever because you can't revoke?
hey, is this supported by the novell alliance? f the eff.
>In the existing TLS authentication system, there are lots of ways for attackers to obtain certificates perfidiously. In the Sovereign Key design, the attacker needs to not only perform one of those attacks, but must also possess a time machine ...
EFF is irresponsible to overstate the security of their proposal. No, they haven't invented security that can't be circumvented without time travel. Yes, claiming so is harmful.
EFF should know better.
the solution is simple, don't trust the internet.
we need a forum for open, free publishing of information that anyone can use. that's the internet. it's how, and why, it was built.
it's madness to put, eg bank accounts, epayments, antivirus distribution networks on it. it's madness to bootstrap your PC from data you got from it too.
we do those things with it because we don't have a secured network for that. we use file signatures, encrypted connections, and cross our fingers. but in reality, this is very, very, very weak. weak^n, where n is a large to infinite number.
however, we are seeing that the traditional providers of telecommunications have largely been in the business, forever, of providing privileged backdoor access to those communications, to the highest bidder. it's common sense really, and anyone who claims ignorance is probably just working for, or benefiting from, those parties at one level or another -- or has done so and prefers to forget the experience.
so who to trust? nobody. we need to collectively build new network focal points, using robust, audited and freely-auditable technologies in transparent fashion. we need NGOs whose raison d'être is to audit the claimed performance of chips, etc. those chips need to be the simplest, lowest-common-denominator designs, unencumbered open designs.
the software likewise needs to follow these practices. we have long surpassed the point of fulfilling the basic requirements of device/application software to accomplish the basic tasks most users are familiar with. much current effort goes into (apart from ensuring the backdoors function) window dressing, bells, whistles, and re-inventing the wheel for branding/IP purposes.
this has got to stop. that effort must be redirected into the open, collective provision of sound technological utilities, which users are encouraged to explore and mould to their needs. the current scenario of all-out global high-tech espionage and mindshare war must be defeated.
The EFF talk about Sovereign Keys at CCC talks about using DNSSEC as a way to verify the ownership of the keys
I think the SK idea has merit and would be better than the current situation at eliminating hacked-CA-MITM-attacks
@ Jonathan Wilson,
>I think the SK idea has merit and would be better than the current situation at eliminating hacked-CA-MITM-attacks
Yes and no, once you have a MITM in place you then have to ask about "all the channels" available to the MITM and how they can be abused/used to their advantage.
Adding extra channels to an already insecure process is not of necessity going to make anything more secure; in fact you could argue that "as all software has bugs, you have added additional bugs to be exploited".
Whilst I appreciate the desire to resolve CA issues quickly, I'm also aware that "knee jerk solutions" have a habit of hanging around long, long after their "best before date", thus creating "legacy issues". These often end up becoming unresolvable exploits, fixable only by adding more and more bugs, and so the "hamster wheel of pain" goes round faster and faster.
CAs are arguably, at many levels, the wrong way to go about ensuring "trust", simply because the only trust in the model is a "faxed letterhead" and an "online payment system". If the trust level were upped, then either people would stop using CAs, or use the CAs with the lowest price. All of which is a classic free-market "race for the bottom". Even getting governments involved via legislation is not really going to work, because of all the issues we see with other online security and privacy.
Which suggests we should go back to the old, human, "tried and tested" reputational models - but if you have a long hard think about it, that is not going to work either.
The issue is very much a "Gordian knot" currently; unfortunately there are few if any swords with the edge to cut through this one.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.