Schneier on Security
A blog covering security and security technology.
September 2, 2005
There's a discussion on Slashdot about the security of code signing, and particularly my comments on the topic in the book Secrets and Lies.
Posted on September 2, 2005 at 7:18 AM
• 17 Comments
Has anyone encountered code-signing deployed in the real world as anything other than a racket to exclude disfavored software writers?
Here in the mobile phone space it seems to be entirely a business move rather than a security one. It seems possible that it could be of security use somewhere, but I've never seen it.
Absolutely, I've encountered it. Much of open source software is signed (e.g. all of a RedHat distribution), and there's no real racket there. RedHat isn't the author of most of what it distributes, but if it's compiled by RH and included in its distribution, it's signed.
No, I haven't, but I think the reason is the cost of code-signing certificates. While I see code signing, email signing, and other uses of PKI as a great idea, too many companies see it as a way to make money. This attitude puts code signing out of the realm of feasibility for a lot of small coding shops and independent developers, unless they use self-signed certs. This is why the open-source community uses GPG signatures to verify the authenticity of its code.
The only solution I see to get code-signing certs used as they were intended is to make them free, while still verifying personal and business identities.
This is exactly what CAcert (http://www.cacert.org) is trying to do. I am not sure what other similar CAs exist, but I do know that there are others in the works.
My CAD$0.02 on signed software:
The signing RH does is a good thing. It lets me enforce that auto-updates will only install the software released by Red Hat and no one else. This prevents an attacker from compromising a download site and putting a trojaned binary there.
It also works because I had to install the GPG key from my CD. By default, no automatic updates will flow. I can shoot myself in the foot, but I'm not given any ammo.
Microsoft's Windows Update is similar, but it trusts at least Microsoft's own certificate by default. I understand, but have not verified, that it also trusts many certificates signed by "good CAs." In other words, anyone with a few hundred dollars can get VeriSign to say they are who they say they are. Authentication is not authorization, but it seems that's all Windows cares about.
The Slashdot poster was a proponent of code signing, but he was coming from the perspective of a big IT shop with complete and exclusive control of the allowed certs on its desktops. When you do that, signing is useful, but who else does it?
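The pinned-key model described above can be sketched in Python. This is a simplified, hypothetical illustration using a SHA-256 checksum list: it assumes the checksum file itself was already verified against the one GPG key installed from the CD, which is where the real trust anchoring happens (the package name and contents below are made up).

```python
import hashlib

# Hypothetical trusted digests, taken from a checksum file whose detached
# GPG signature was already verified against the vendor key installed
# from the CD.
TRUSTED_SHA256 = {
    "example-1.0-1.noarch.rpm":
        hashlib.sha256(b"vendor-built package contents").hexdigest(),
}

def allow_update(package_name: str, payload: bytes) -> bool:
    """Refuse to install unless the payload matches the pinned digest."""
    expected = TRUSTED_SHA256.get(package_name)
    if expected is None:
        return False  # unknown package: default-deny, no ammo handed out
    return hashlib.sha256(payload).hexdigest() == expected
```

A compromised download site that swaps in a trojaned binary changes the digest, so `allow_update` returns False and the update never installs.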
I think code signing is a good thing.
As a method of checking the integrity of code against a key, it can be used in security concepts.
The drawback is that without such a concept there is no additional security, only marketing.
If I have a trusted PGP key and use it to check the integrity of source code, binaries, etc., then that is an example of improved security.
If I trust every signature without validating the key, that is a drawback: I am trusting something without gaining any security.
Code signing is a method. How we use it depends on us.
I've seen it used as a fail-safe to prevent code that has not gone through proper QA from running on servers. Basically, only the people in QA who can give final approval have the keys to sign the exe/dll, and the server is configured to only run code with the QA signature (easily done in Windows Server 2003). This prevents code from bypassing the QA process.
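That QA gate can be sketched minimally, with an HMAC standing in for the real certificate-based signature (the key and build contents here are hypothetical; Windows Server 2003 actually enforces this with software restriction policies and Authenticode certificates, not an HMAC):

```python
import hashlib
import hmac

# Hypothetical key held only by the QA approvers.
QA_SIGNING_KEY = b"qa-final-approval-key"

def qa_sign(binary: bytes) -> str:
    """QA tags a build only after final approval."""
    return hmac.new(QA_SIGNING_KEY, binary, hashlib.sha256).hexdigest()

def server_may_run(binary: bytes, tag: str) -> bool:
    """The server refuses anything without a valid QA tag."""
    return hmac.compare_digest(qa_sign(binary), tag)
```

A build that skipped QA has no valid tag for its own bytes, so it cannot sneak past the deployment check.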
You are correct about Microsoft's position on code signing. In the root store you will notice many certificates from various root authorities; Microsoft puts companies into its store after money changes hands. The risk MS runs is whether those companies are trustworthy enough to validate and issue certs only to responsible parties and not to some hacker. (VeriSign made this mistake once in the past and had to revoke certs; if you look in the Untrusted certificates store you will see two "Microsoft" certs issued by VeriSign.)
From a business perspective, code signing is a good thing in that it identifies the software. (It does not guarantee that the software is not a virus; it simply indicates that the software was signed and unaltered after signing, much like a notary's seal.)
The bad part of code signing is that it forces single devs/smaller companies to be "approved" and/or purchase a cert to deploy their software on certain OSes. Of course, you don't have to have your code signed, as the OS will simply prompt the end user with a warning about unsigned software.
Code signing is no different from any other digital signature process. It instills a sense of confidence that the signed information is what the signer intended. It does not imply anything else about the content, such as the software being more secure.

Having said this, one must trust several attributes: the signer (i.e., company, person, server, etc.), the signing process (i.e., that the signing key is secured by the owner/signer), and that the data has not fallen victim to some form of attack that permitted a modification without affecting the signature (rare, but mathematically possible). Within this context, code signing simply validates the source (provider) and that you are receiving what was intended. It implies nothing more.

If trust is assumed in the above three attributes, then the signing process can be used as a decision-making tool for whether to install the software or not: "Do I trust the vendor/source?" Frankly, signing in any form is meaningless overhead without proof of valid and acceptable key-management practices. If you're using code signing to make determinations about trusting the software, your efforts are wasted without investigating the integrity of the underlying process and technology used by the signer. This is the fundamental reasoning behind PKI root-signing audit processes. Trust should never be implied; it must be validated. Otherwise, you are operating on degrees of assumption and not empirical evidence.
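The point that a signature speaks only to origin and integrity, never to safety, can be shown concretely. A sketch, with an HMAC standing in for a real public-key signature (the key and payloads are made up):

```python
import hashlib
import hmac

VENDOR_KEY = b"hypothetical-vendor-signing-key"

def sign(data: bytes) -> str:
    return hmac.new(VENDOR_KEY, data, hashlib.sha256).hexdigest()

def verify(data: bytes, sig: str) -> bool:
    return hmac.compare_digest(sign(data), sig)

# A careless or malicious signer's signature verifies just as cleanly
# over hostile content as over benign content; verification only says
# "unmodified since signing by this key," nothing about what was signed.
benign = b"useful program"
hostile = b"spyware the vendor shipped anyway"
```

Both `verify(benign, sign(benign))` and `verify(hostile, sign(hostile))` succeed, which is exactly why the decision "do I trust the vendor?" has to come from outside the signature math.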
Yawn. We all know systems rely on trust, and it takes some degree of real experience or expertise to establish a system with beneficial trust relationships.
So signed code is only as strong as trust in the signature, and a signature is no panacea...I don't understand, are you promoting your book here or just saying "look, my stuff's on Slashdot"?
Cliff was nice enough to enumerate your arguments. I might have just abbreviated them all to:
1) Safety from technical trust relationships requires technical (sophisticated) knowledge
2) A signature does not guarantee safety; see argument one
3) Signatures to the nth degree do not guarantee safety; see argument one
and so on...
You come across as a bit of a dick in your postings here. What's with the attitude? If you've got something against Schneier, why don't you spend less time on his blog?
What is needed is third-party Trusted Build Agents.
The Twelve Step in TrustABLE IT
Governments, organizations, and individuals are becoming increasingly concerned about software compatibility, conflicts, and the possible existence of spyware in the software applications they use. If you have access to the source code, you can check it and compile it for yourself. This is not an option for closed-source proprietary applications, and not everyone has the resources to check each line of source code.

One solution is to employ a trusted third party, separate from the application developer, who is tasked with maintaining a trusted build environment and building the binaries from source code. The Trusted Build Agent (TBA) would hold the source to each build in escrow, releasing the source code only for open-source-licensed code. Competing businesses providing a TBA service in a free market would compete not only on price and level of certification, but also on the ability to detect hostile, vulnerable, incompatible, or just plain buggy source code. You could request a trusted build from multiple TBAs to test their ability to detect defects. Defects would be reported back to the application developers, along with any patches and suggestions that provide a fix. To a lesser extent, most Linux distributions and other operating system vendors that build and redistribute open-source-licensed code already fill this role.
Thanks for the feedback. What's up with the personal attacks instead of discussing the subject? To the best of my knowledge I have never attacked Bruce, nor would I ever want to, just the topics. Alas, you can please some of the people all of the time...
Interestingly enough, FIPS 140 requires that the code be integrity-checked as part of its self-test. The major problem with that, of course, is that if you have a crypto implementation, you are asking the code to check itself. It's like asking someone if they are a liar or not.
"it's like asking someone if they are a liar or not"
This is mostly to protect against "random" errors due to software/hardware malfunctions (which, of course, can be a result of an external attack).
"Has anyone encountered code-signing deployed in the real world as anything other than a racket to exclude disfavored software writers?"
Lotus Notes has had code signing embedded in it for nearly a decade, and digital signatures as a key component of its infrastructure since version 1.
The built-in code signing subjects scripting-language activities to an Execution Control List. There's pretty extensive granularity to this feature, although it's obviously limited to the programming capabilities of the environment itself.
This discussion misses the main benefit of code signing: revocation. Users may have no way of deciding which author to trust, but they're safer if a malicious package fails to install.
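A sketch of how revocation changes the install decision (the serial numbers and the list itself are hypothetical; real clients consult a CRL or an OCSP responder rather than a hard-coded set):

```python
# Hypothetical revocation list; real systems fetch a CRL or query OCSP.
REVOKED_SERIALS = {"deadbeef01", "deadbeef02"}

def may_install(signature_valid: bool, signer_serial: str) -> bool:
    """A mathematically valid signature is not enough: if the signing
    certificate has been revoked, the package is refused even though
    the signature itself still verifies."""
    return signature_valid and signer_serial not in REVOKED_SERIALS
```

This is how a signed-but-later-discovered-malicious package can be stopped after the fact: the vendor or CA revokes the cert, and installs fail from then on.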
The code signing discussions I have seen pretty much assume an "application" is a relatively static collection of "executable" code. What if the code is written to be adaptive and self modifying? What if the code and the data are not really separate and distinct? What if the "code" is not in one body but spread over a large number of dynamically loaded and/or translated subparts such that the configuration at any particular point in time is highly variable? The worry I have about code signing is the implicit limitation of what we see as valid software and software practice.
Also, what does this mean for Free Software? It is part of the contract that I can modify it at any time for my own convenience. If my system is configured to honor only, say, the version from a recognized Linux distributor as fully able to do what it was designed to do, then clearly I have a problem on my very own system.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.