August 15, 2002
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org.
Copyright (c) 2002 by Counterpane Internet Security, Inc.
There's been more written about Microsoft's Palladium security initiative than about anything else in computer security in a very long time. My URL list of comments, analysis, and opinions goes on for quite a while. Which is interesting, because we really don't know anything about the details of what it is or how it works. Much of this is based on reading between the lines in the various news reports, conversations I've had with Microsoft people (none of them under NDA), and conversations with people who've had conversations. But since I don't know anything for sure, all of this could be wrong.
Palladium (like chemists, Microsoft calls it "Pd" for short) is Microsoft's implementation of the TCPA spec, sort of. ("Sort of" depends on who you ask. Some say it's related. Some say they do similar things, but are unrelated. Some say that Pd is, in fact, Microsoft's attempt to preempt the TCPA spec.) TCPA is the Trusted Computing Platform Alliance, an organization with just under 200 corporate members (an impressive list, actually) trying to build a trusted computer. The TCPA 1.1 spec has been published, and you can obtain the 1.2 spec under NDA. Pd doesn't follow the spec exactly, but it's along those lines, sort of.
Pd has been in development for a long time, since at least 1997. The best technical description is the summary of a meeting with Microsoft engineers by Seth Schoen of the EFF (URL below). I'm not going to discuss the details, because systems with an initial version of Pd aren't going to ship until 2004 -- at least -- and the details are all likely to change.
Basically, Pd is Microsoft's attempt to build a trusted computer, much as I discussed the concept in "Secrets and Lies" (pages 127-130; read it for background). The idea is that different users on the system have limitations on their abilities, and are walled off from each other. This is impossible to achieve using only software, so Pd is a combination hardware/software system. In fact, Pd affects the CPU, the chip set on the motherboard, the input devices (keyboard, mouse, etc.), and the video output devices (graphics processor, etc.). Additionally, a new chip is required: a tamper-resistant secure processor.
Microsoft readily acknowledges that Pd will not be secure against hardware attacks. They spend some effort making the secure processor annoying to pry secrets out of, but not a whole lot of effort. They assume that the tamper-resistance will be defeated. It is their intention to design the system so that hardware attacks do not result in class breaks: that breaking one machine doesn't help you break any others.
Pd provides protection against two broad classes of attacks. Automatic software attacks (viruses, Trojans, network-mounted exploits) are contained because an exploited flaw in one part of the system can't affect the rest of the system. And local software-based attacks (e.g., using debuggers to pry things open) are blocked by that same separation between parts of the system.
There are security features that tie programs and data to CPU and to user, and encrypt them for privacy. This is probably necessary to make Pd work, but it has a side-effect that I'm sure Microsoft is thrilled with. Like books and furniture and clothing, software can currently be resold by the person who bought it, once he's done with it. People have a right to do this -- it's called the "First Sale Doctrine" in the United States -- but the software industry has long claimed that software is not sold, but licensed, and cannot be transferred. When someone sells a Pd-equipped computer, he is likely to clear his keys so that his identity can't be used and his files can't be read. This will also serve to erase all the software he purchased. The end result might be that people won't be able to resell software, even if they wanted to.
Pd is inexorably tied up with Digital Rights Management. Your computer will have several partitions, each of which will be able to read and write its own data. There's nothing in Pd that prevents someone else (MPAA, Disney, Microsoft, your boss) from setting up a partition on your computer and putting stuff there that you can't get at. Microsoft has repeatedly said that they are not going to mandate DRM, or try to control DRM systems, but clearly Pd was designed with DRM in mind.
There seem to be good privacy controls, over and above what I would have expected. And Microsoft has claimed that they will make the core code public, so that it can be reviewed and evaluated. It's about time they realized that lots of people are willing to do their security work for free.
It's hard to sort out the antitrust implications of Pd. Lots of people have written about it. Will Microsoft jigger Pd to prevent Linux from running? They don't dare. Will it take standard Internet protocols and replace them with Microsoft-proprietary protocols? I don't think so. Will you need a Pd-enabled device -- the system is meant for both general-purpose computers and specialized media devices -- in order to view copyrighted content? More likely. Will Microsoft enforce its Pd patents as strongly as it can? Almost certainly.
Lots of information about Pd will emanate from Redmond over the next few years, some of it true and some of it not. Things will change, and then change again. The final system may not look anything like what we've seen to date. This is normal, and to be expected, but when you continue to read about Pd, be sure to keep several things in mind.
1. A "trusted" computer does not mean a computer that is trustworthy. The DoD's definition of a trusted system is one that can break your security policy; i.e., a system that you are forced to trust because you have no choice. Pd will have trusted features; the jury is still out as to whether or not they are trustworthy.
2. When you think about a secure computer, the first question you should ask is: "Secure for whom?" Microsoft has said that Pd allows the computer-owner to prevent others from putting their own secure areas on the computer. But really, what is the likelihood of that happening? The NSA will be able to buy Pd-enabled computers and secure them from all outside influence. I doubt that you or I could, and still enjoy the richness of the Internet. Microsoft really doesn't care about what you think; they care about what the RIAA and the MPAA think. Microsoft can't afford to have the media companies not make their content available on Microsoft platforms, and they will do what they can to accommodate them. There's often a large gulf between what you can get in theory -- which is what Microsoft is stressing in their Pd discussions -- and what you will be able to have in practice. This is where the primary danger lies.
3. Like everything else Microsoft produces, Pd will have security holes large enough to drive a truck through. Lots of them. And the ones that are in hardware will be much harder to fix. Be sure to separate the Microsoft PR hype about the promise of Pd from the actual reality of Pd 1.0.
4. Pay attention to the antitrust angle. I guarantee you that Microsoft believes Pd is a way to extend its market share, not to increase competition.
There's a lot of good stuff in Pd, and a lot I like about it. There's also a lot I don't like, and am scared of. My fear is that Pd will lead us down a road where our computers are no longer our computers, but are instead owned by a variety of factions and companies all looking for a piece of our wallet. To the extent that Pd facilitates that reality, it's bad for society. I don't mind companies selling, renting, or licensing things to me, but the loss of the power, reach, and flexibility of the computer is too great a price to pay.
Seth Schoen's meeting summary:
TCPA Web site:
Crypto-Gram is currently in its fifth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Protecting Copyright in the Digital World:
Vulnerabilities, Publicity, and Virus-Based Fixes:
A Hardware DES Cracker:
Biometrics: Truths and Fictions:
Back Orifice 2000:
Web-Based Encrypted E-Mail:
Here's the problem: when someone surfs to your Web page, they can read the HTML source. Of course, there's no way to solve that problem: if the browser can't read the HTML source, it can't display the page. But encryptHTML claims to be a fix.
It gets funnier. In Mozilla, when you choose the "save page as" entry from the "file" menu, the browser simply saves the file as plain HTML. Since the decryption script has already been run, the saved HTML is already decrypted.
Security. What security?
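The underlying futility is easy to demonstrate. The sketch below guesses at the general shape of such a product (the real tool's transform is unknown; the XOR encoding and key here are invented for illustration): whatever reversible encoding the page uses, the decoder has to ship inside the page so the browser can render it -- which means anyone can invoke it.

```python
# Toy model of an encryptHTML-style product. The "encrypted" HTML is
# delivered together with the routine that decodes it, because the
# browser needs that routine to display anything at all.

def obfuscate(html: str, key: int = 42) -> str:
    """XOR each byte with a key and hex-encode. Stands in for whatever
    reversible transform the real product uses."""
    return bytes(b ^ key for b in html.encode()).hex()

def shipped_decoder(blob: str, key: int = 42) -> str:
    """This function is necessarily delivered inside the page itself."""
    return bytes(b ^ key for b in bytes.fromhex(blob)).decode()

page = {
    "ciphertext": obfuscate("<p>secret page</p>"),
    "decoder": shipped_decoder,  # ships with the page, by construction
}

# The "attack": just call the decoder you were handed.
print(page["decoder"](page["ciphertext"]))  # <p>secret page</p>
```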
The author of the program knows about the problem, and stopped working on the software a year ago. It's hard to tell if the company hosting the webpage, Cedium, knows about this. The link to download the software stopped working in the past month, so I don't know what's going on. This may be one of those abandoned Web sites that litter the net.
Entertaining, in any case.
"Implementation of Chosen-Ciphertext Attacks against PGP and GnuPG"
K. Jallad, J. Katz, and B. Schneier, Information Security Conference 2002 Proceedings, Springer-Verlag, 2002, to appear.
We demonstrate a chosen-ciphertext attack, aided by social engineering, against PGP and other compatible e-mail encryption systems. The attack works like this:
1. Alice sends Bob an encrypted message, encrypted in Bob's key. Eve intercepts that message.
2. Eve uses the intercepted ciphertext to create a different encrypted message. She sends it to Bob.
3. Bob decrypts the message. The results are gibberish.
4. Eve somehow convinces Bob to send her the gibberish plaintext.
5. Eve reconstructs the original plaintext (from Step 1) from the plaintext she receives in Step 4.
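The malleability that makes Step 2 possible can be sketched in a few lines. This toy model uses a plain XOR keystream as a stand-in for OpenPGP's CFB mode; the real attack, described in the paper, is more involved and must also cope with message compression. The point is only that mauling the ciphertext scrambles the plaintext in a way the attacker can later undo.

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Toy stand-in for the cipher: a fixed keystream XORed with the data.
# (OpenPGP's CFB mode is similarly malleable; this is a simplification,
# not the paper's actual construction.)
keystream = os.urandom(64)
encrypt = decrypt = lambda data: xor(data, keystream)

# Step 1: Alice -> Bob; Eve intercepts the ciphertext.
plaintext = b"Attack at dawn. Signed, Alice."
c = encrypt(plaintext)

# Step 2: Eve mauls the ciphertext with a mask of her choosing.
delta = os.urandom(len(c))
c_prime = xor(c, delta)

# Step 3: Bob decrypts Eve's message and gets gibberish.
gibberish = decrypt(c_prime)  # equals plaintext XOR delta

# Step 4: Bob helpfully sends the gibberish back to Eve.
# Step 5: Eve strips off her mask and recovers the original message.
recovered = xor(gibberish, delta)
assert recovered == plaintext
```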
A bill introduced into Congress gives copyright holders -- that's the RIAA, the MPAA, and similar guys -- the right to break into people's computers if they have a reasonable basis to believe that copyright infringement is going on. Basically, the bill protects organizations from federal and state laws if they disable, block, or otherwise impair a publicly accessible peer-to-peer network.
There are two things going on here. The first is to wonder why copyright infringement needs special laws allowing vigilante justice (if someone broke into a bank's computers and stole $1B, the bank wouldn't be legally allowed to retaliate by disabling the attacker's computer), and the second is to ponder the nature of counterattack.
The best defense is a good offense, and that's what counterattack is. Passive defense is making yourself harder to hit. Active defense is fighting back. Counterattack is turning the tables and attacking the attacker. It's by far the most effective means of defense, but it's also the most error prone.
In almost all of civilized society, counterattack is not legal. If you catch someone burglarizing your home, it's not legal for you to follow her home and shoot her. If you're being blackmailed by someone, turning around and blackmailing him back is just as illegal as the first crime. I can't think of any exceptions to this. Law enforcement is the sole purview of the police, an organization that has what I have previously described as "a state-sponsored monopoly on violence."
The exception to the above is warfare. In war, the rules about counterattack -- and preemptive attack -- are different. In war, attack and defense are so jumbled up that counterattack is the norm. In war, the difference between an offensive weapon and a defensive weapon is the direction it's pointing. But that's not what we're talking about here.
Counterattack is wrong, both legally and morally. Vigilante justice is wrong, both legally and morally. Victims of attack are allowed to defend themselves, but they're not allowed to take the law into their own hands and attack back. That's why we have police.
None of this is new or controversial, so why are copyright holders even talking about this? This bill would make it legal for the MPAA, the RIAA, and its ilk to break into computer systems they suspect (with no standard of evidence) are guilty of copyright infringement. It will allow them to perform denial-of-service attacks against peer-to-peer networks, release viruses that disable systems and software, and violate everyone's privacy. People they choose to target would be deemed guilty until proven otherwise. In short, this bill would set up the entertainment industry as a Gestapo-like enforcement agency with no oversight.
To me, it's another example of the insane lengths the entertainment companies are willing to go to preserve their business models. They're willing to destroy your privacy, have general-purpose computers declared illegal, and exercise special vigilante police powers that no one else has...just to make sure that no one watches "The Little Mermaid" without paying for it. They're trying to invent a new crime: interference with a business model.
An opinion piece by Rep. Howard Berman:
Last month I put Cryptoco in my "Doghouse" column, on the strength of a very snake-oil-filled press release. Seems that at least one respectable cryptographer, Ivan Damgård, is involved in this company. I still think this is snake oil -- nothing I've read says anything different -- but there may be something of mathematical interest inside all of the business cluelessness. They've patented their algorithm, though, which pretty much guarantees the death of their ideas.
Entertaining story about the New York Times and insecure passwords:
Something good from Microsoft. Their "Five-Minute Security Advisor" is a collection of tips and tricks on a variety of topics. Things like "Simple Firewall Setup for Home Office Users," "The Road Warrior's Guide to Laptop Protection," and "Configuring Your Computer for Multiple Users." There's some pro-Microsoft propaganda in the mix -- see "How Windows XP Protects Your Privacy" for a good example -- but there's also lots of useful information.
Interesting claims and counterclaims about the relationship between malicious hackers and security companies:
Yikes! This could very well be a patent on network firewalls.
This sounds a whole lot like snake oil. My hope is that the reporter has the story wrong, and that there's something interesting in the underlying research.
Clever security idea: The music industry is flooding peer-to-peer networks with bogus copies of popular songs.
The ACLU is challenging the DMCA:
Carnival Booth is an algorithm for defeating the Computer-Assisted Passenger Screening System (CAPS). The point of CAPS is to try to maximize security resources by profiling likely terrorists and spending more effort on them. "Why frisk Eleanor, the 80-year-old grandmother from Texas when you can stop Omar, the 22-year-old student fresh from Libya?" Sounds good, but the authors of this paper show that, given a reasonably diverse population of terrorists, this system is provably less secure than random searching. Really good work, and an excellent example of applying techniques honed in computer security to the real world.
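The paper's core argument can be sketched as a small simulation. The numbers and the profiler below are my own illustrative assumptions, not the paper's model: a terrorist cell probes the profiler with clean dry runs, then sends only members who were never flagged, while uniform random screening gives the cell nothing to learn.

```python
import random

random.seed(1)

SCREEN_RATE = 0.1   # fraction of passengers given intense screening
N_TRIALS = 10_000

def caps_flags(profile_score: float) -> bool:
    """Toy deterministic profiler: flag the top 10% of profile scores.
    (CAPS itself is secret; this rule is an illustrative assumption.)"""
    return profile_score > 0.9

def trial_caps() -> bool:
    """A diverse cell probes the profiler on clean dry runs, then sends
    a member who was never flagged. Returns True if he gets screened."""
    cell = [random.random() for _ in range(10)]    # member profile scores
    safe = [s for s in cell if not caps_flags(s)]  # learned via probing
    if not safe:
        return True  # everyone profiles high: the attacker is screened
    return caps_flags(random.choice(safe))         # deterministic: never

def trial_random() -> bool:
    """Uniform random screening: probing tells the cell nothing."""
    return random.random() < SCREEN_RATE

caps_catch = sum(trial_caps() for _ in range(N_TRIALS)) / N_TRIALS
rand_catch = sum(trial_random() for _ in range(N_TRIALS)) / N_TRIALS
print(f"P(attacker screened) under CAPS:   {caps_catch:.3f}")
print(f"P(attacker screened) under random: {rand_catch:.3f}")
```

Under these toy assumptions the deterministic profiler catches the probing cell's attacker almost never, while random screening catches him at the base screening rate -- which is the paper's point.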
Automatic face recognition fails miserably at Boston's Logan Airport, as expected (what horrible URLs this newspaper has):
Here's an attack that could do real damage: MSN TV units are calling 911 after a malicious program changes their dial-out number. This has the potential of affecting emergency services.
I've been starting to see more talk about counterattack: reaching back and attacking the computer that's attacking you. As satisfying as it sounds, it's most likely illegal. (It's also illegal to visit the home of the person who robbed you and rob him back.)
Government-mandated background checks for IT personnel?
Eli Lilly has settled with the government in a computer privacy liability case. They leaked the names of 669 patients on Prozac.
New Rijndael cryptanalytic result. It's not an attack, but it's a newly discovered mathematical property of the cipher that may lead to one.
A new, faster algorithm to test whether or not a given number is prime -- deterministic, and running in polynomial time. While this has no direct cryptographic implications, it's an enormously big deal. Whether such an algorithm even existed was an open question until now.
Here's a huge hole in Microsoft IE's SSL security. Anyone with a valid VeriSign certificate for any Internet site can forge a fake certificate for any other Internet site, and Microsoft Internet Explorer will accept the bogus certificate as valid. The X.509 standard specifies that certificates are supposed to include a bit saying whether the corresponding public key can be used to sign other certificates. VeriSign's site certificates have the bit disabled, of course. But Microsoft IE doesn't check the bit; it assumes that any certificate can sign any other certificate, and that any certificate signed by a valid certificate is itself valid. What this means is that if you're a Microsoft IE user, the cryptographic protections in SSL don't work for you. The fact that Microsoft isn't all over this problem tells me that their take-security-seriously initiative is a whole lot of hot air.
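The missing check is easy to model. This is a toy chain walk, not real X.509 code: signature verification is omitted, and the certificate names and fields are invented for illustration. The `is_ca` field stands in for the basicConstraints CA bit that IE reportedly ignored.

```python
from dataclasses import dataclass

@dataclass
class Cert:
    subject: str
    issuer: str
    is_ca: bool  # simplified stand-in for the basicConstraints CA bit

TRUSTED_ROOTS = {"VeriSign Root"}

def validate(chain: list, check_ca_bit: bool) -> bool:
    """Walk leaf -> root. A real implementation also verifies each
    signature; this toy model checks only the chain structure and,
    optionally, the CA bit on each signing certificate."""
    for cert, signer in zip(chain, chain[1:]):
        if cert.issuer != signer.subject:
            return False
        if check_ca_bit and not signer.is_ca:
            return False  # the check IE reportedly omitted
    return chain[-1].subject in TRUSTED_ROOTS and chain[-1].is_ca

# Attacker's chain: an ordinary site certificate (CA bit off) is used
# to "sign" a forged certificate for some other site.
root   = Cert("VeriSign Root", "VeriSign Root", is_ca=True)
site   = Cert("attacker.example", "VeriSign Root", is_ca=False)
forged = Cert("www.bank.example", "attacker.example", is_ca=False)
chain  = [forged, site, root]

print(validate(chain, check_ca_bit=True))   # False: correct behavior
print(validate(chain, check_ca_bit=False))  # True: the IE bug
```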
Bruce Schneier is the subject of a lengthy, and very interesting, article in The Atlantic:
Counterpane has just come off its best quarter ever. We're monitoring more companies in more countries than ever before, and are by far the largest Managed Security Monitoring company in the world. And with RipTech's absorption into Symantec, there's no other security monitoring company that can claim to be vendor independent. We now have over 70 VARs selling Counterpane monitoring, and more are signing up every week. It's kind of cool, really.
Counterpane's second-quarter results:
It's a quintessentially American solution: our nation's commercial aircraft are at risk, so let's allow pilots to carry guns. We have visions of these brave men and women as the last line of defense on an aircraft, courageously defending the cockpit against terrorists at 30,000 feet. I can just imagine the made-for-TV movie.
Reality is more complicated than television, though. Sometimes, security systems cause more problems than they solve. Putting guns on aircraft will make us more vulnerable to attack, not less.
When people think of potential problems with weapons in a cockpit, they think of accidental shootings in the air, holes in the fuselage, and possibly even equipment shattered by a stray bullet. This is a problem, certainly, but not a major one. A bullet hole is small, and doesn't let a whole lot of air out. And airplanes are designed to handle equipment failures -- even serious failures -- and remain in the air. If I ran an airline, I would worry more about accidents involving passengers, who are much less able to survive a bullet wound and much more likely to sue.
The real dangers, though, involve the complex systems that must be put in place before the first gun can ride along in the cockpit. There are three major areas of risk.
First, we need a system for getting the gun on the airplane. How does the pilot get the gun? Does he carry it through the airport and onto the plane? Is it issued to him after he's in the cockpit but before the plane takes off? Is it secured in the cockpit at all times, even when there is no one there? Any one of these solutions has its own set of security vulnerabilities. The last thing we want is for an attacker to exploit one of these systems in order to get himself a gun. Or maybe the last thing we want is a shootout in a crowded airport.
Second, we need a procedure for storing the gun on the airplane. Does the pilot carry it on his hip? Is it locked in a cabinet? If so, who has the key? Is there one gun, or do the pilot and co-pilot each have one? However the system works, it's ripe for abuse. If the gun is always at the pilot's hip, an attacker can take it away from him when he leaves the cockpit. (Don't laugh; policemen get their guns taken away from them all the time, and they're trained to prevent that.) If the guns remain in the cockpit when it is unoccupied, we have a whole new set of problems to worry about.
Third, we need a system of training pilots in gun handling and marksmanship. Guns require training to use well; how much training can we expect our pilots to have? This is different from training sky marshals. Security is the primary job of a sky marshal; they're expected to learn how to use a gun. Flying planes is the primary job of a pilot.
Giving pilots guns is a disaster waiting to happen. The current system spends a lot of time and effort keeping weapons off airplanes and out of airports; the proposed scheme would inject thousands of handguns into that system. There are just too many pilots and too many flights every day; mistakes will happen. Someone will do an inventory one night and find a gun missing, or ten. Someone will find one left in a cockpit. Someone may even find one on a seat in a terminal.
El Al is the most security-conscious airline in the world. Their pilots remain behind two bulletproof doors, and they're unarmed. It's the job of the pilot to land the plane safely, not to engage terrorists in close combat. For that, they rely on sky marshals, crew, and passengers. If pilots have to leave the cockpit to solve a security problem, it's too late.
United States airlines are not comparable to El Al. Our flights don't travel with two armed sky marshals each. We don't perform security checks on passengers that, while legal in Israel, would violate U.S. laws. We don't have two bulletproof doors separating the cockpit from the passengers. Many politicians see guns as a quick fix to a problem that can't wait for a careful solution.
Personally, I don't think pilots should be armed. But even if I thought they should be, I still wouldn't give them guns. Guns aren't designed to be used in the cramped space of an airplane cockpit. They have too high a risk of doing unwanted damage if they miss. And there's too much risk involved in putting thousands of guns in airports, storing them, getting them on and off airplanes, and keeping them in cockpits. If you want to arm pilots, it would be much smarter to give them billy clubs or tasers. At least those weapons make sense for the situation.
From: "Travis Puderbaugh" <tpuderbaughamada.com>
The security of an embedded system doesn't necessarily have to be in the chip itself. In the pipeline example, the individual embedded controllers that run the valves should not (and aren't, I don't think) be connected directly to the Internet. Rather, they go through a control computer at some point. If the security on that control computer is maintained, then all those embedded products aren't in any danger. Similarly with traffic lights, they are controlled by a central computer. If the security on that central computer is adequate, then each light can be considered as secure as that computer. The danger of someone walking up to a control box and reprogramming an EEPROM or FPGA is very low, and if they can get that kind of access then they can just throw a rock at the circuit -- which will be just as damaging.
From: Bruce McNair <bmcnairatt.net>
I think the name says it all. It was sent from the gods; no man may look at it, lest they be blinded (probably a reference to the open source movement and reverse engineering lawsuits). It was supposed to protect Troy, but it doesn't protect against attacks by Trojan horses. And, if you look at it from a commodities perspective, it may be more expensive than gold...
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide.
Copyright (c) 2002 by Counterpane Internet Security, Inc.