March 15, 2001

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at <>. To subscribe or unsubscribe, see below.

Copyright (c) 2001 by Counterpane Internet Security, Inc.

In this issue:

The Security Patch Treadmill

“Well, in our country,” said Alice, panting a little,
“you’d generally get somewhere else—if you ran very
fast for a long time, as we’ve been doing.”
“A slow sort of country!” said the Queen. “Now here,
you see, it takes all the running you can do, to keep
in the same place.”
—Through the Looking Glass, by Lewis Carroll.

Last week, the FBI announced that over the past year, several groups of Eastern European hackers had broken into at least 40 companies’ Web sites, stolen credit card numbers, and in some cases tried to extort money from their victims. The network vulnerabilities exploited by these criminals were known, and patches that closed them were available—but none of the companies had installed them. In January 2001, the Ramen worm targeted known vulnerabilities in several versions of Red Hat Linux. None of the thousands of infected systems had their patches up to date. In October 2000, Microsoft was hacked by unknown attackers who wandered unchallenged through their network, accessing intellectual property, for weeks or months. According to reports, the attackers would not have been able to break in if Microsoft patches had been up to date. The series of high-profile credit card thefts in January 2000, including the CD Universe incident, was also the result of uninstalled patches. A patch issued eighteen months previously would have protected these companies.

What’s going on here? Isn’t anyone installing security patches anymore? Doesn’t anyone care?

What’s going on is that there are just too damn many patches. It’s simply impossible to keep up. I get weekly summaries of new vulnerabilities and patches. One alert service listed 19 new patches in a variety of products in the first week of March 2001. That was an average week. Some of the listings affected my network, and many of them did not. Microsoft Outlook had over a dozen security patches in the year 2000. I don’t know how the average user can possibly install them all; he’d never get anything else done.

Security professionals are quick to blame system administrators who don’t install every patch. “They should have updated their systems; it’s their own fault when they get hacked.” This is beginning to feel a lot like blaming the victim. “He should have known not to walk down that deserted street; it’s his own fault he was mugged.” “She should never have dressed that provocatively; it’s her own fault she was attacked.” Perhaps such precautions should have been taken, but the real blame lies elsewhere.

Those who manage computer networks are people too, and people don’t always do the smartest thing. They know they’re supposed to install all patches. But sometimes they can’t take critical systems off-line. Sometimes they don’t have the staffing available to patch every system on their network. Sometimes applying a patch breaks something else on their network. I think it’s time the industry realized that expecting the patch process to improve network security just doesn’t work.

Security based on patches is inherently fragile. Any large network is going to have hundreds of vulnerabilities. If there’s a vulnerability in your system, you can be attacked successfully and there’s nothing you can do about it. Even if you manage to install every patch you know about, what about the vulnerabilities that haven’t been patched yet? (That same alert service listed 10 new vulnerabilities for which there is no defense.) Or the vulnerabilities discovered but not reported yet? Or the ones still undiscovered?

Good security is resilient. It’s resilient to user errors. It’s resilient to network changes. And it’s resilient to administrators not installing every patch. For the past two years I have been championing monitoring as a way to provide this resilient security. If there are enough motion sensors, electric eyes, and pressure plates in your house, you’ll catch the burglar regardless of how he got in. If you are monitoring your network carefully enough, you’ll catch a hacker regardless of what vulnerability he exploited to gain access. Monitoring makes a network less dependent on keeping patches up to date; it’s a process that provides security even in the face of ever-present vulnerabilities, uninstalled patches, and imperfect products.

In a perfect world, systems would rarely need security patches. The few patches they did need would automatically download, be easy to install, and always work. But we don’t live in a perfect world. Network administrators are busy people, and networks are constantly changing. Vigilant monitoring does not “solve” computer security, but it is a much more realistic way of providing resilient security.

The Ramen worm:

Security patches aren’t being applied:
Best quote: “Failing to responsibly patch computers led to 99 percent of the 5,823 Web site defacements last year, up 56 percent from the 3,746 Web sites defaced in 1999, according to a security group.” I’m not sure how they know, but it is scary nonetheless.

The Eastern European credit card hackers:
<> [link moved to,,s2084921,00.html]

Many networks still have not patched BIND, weeks after January’s vulnerabilities were announced:

The Microsoft attack:

Patch your apps:

Author’s note: Every time I write an essay that speaks favorably about Counterpane, I get e-mails from people accusing me of advertising. I disagree, and I’d like to explain. Much of my current thinking about computer security stemmed from years of consulting. I watched as product after product failed in the field, and I tried to figure out why. My conclusions are largely chronicled in my book _Secrets and Lies_, and are reflected in the business model of Counterpane Internet Security, Inc. I don’t extol the virtues of monitoring because that’s what Counterpane does; Counterpane provides Managed Security Monitoring because I believe it is the future of security. I see monitoring as a way to achieve security in a world where the products are hopelessly broken. Over the next several months I will publish more essays on security, and monitoring is prominent in many of them. I’m not shilling Counterpane; it’s just where my thinking is.

Crypto-Gram Reprints

Software complexity and security:

Why the worst cryptography is in systems that pass initial cryptanalysis:

Insurance and the Future of Network Security

Eventually, the insurance industry will subsume the computer security industry. Not that insurance companies will start marketing security products, but rather that the kind of firewall you use—along with the kind of authentication scheme you use, the kind of operating system you use, and the kind of network monitoring scheme you use—will be strongly influenced by the constraints of insurance.

Consider security, and safety, in the real world. Businesses don’t install building alarms because it makes them feel safer; they do it because they get a reduction in their insurance rates. Building-owners don’t install sprinkler systems out of affection for their tenants, but because building codes and insurance policies demand it. Deciding what kind of theft and fire prevention equipment to install are risk management decisions, and the risk taker of last resort is the insurance industry.

This is sometimes hard for computer techies to understand, because the security industry has trained them to expect technology to solve their problems. Remember when all you needed was a firewall, and then you were safe? Remember when it was an intrusion detection product? Or a PKI? I think the current wisdom is that all you need is biometrics, or maybe smart cards.

The real world doesn’t work this way. Businesses achieve security through insurance. They take the risks they are not willing to accept themselves, bundle them up, and pay someone else to make them go away. If a warehouse is insured properly, the owner really doesn’t care if it burns down or not. If he does care, he’s underinsured. Similarly, if a network is insured properly, the owner won’t care whether it is hacked or not.

This is worth repeating: a properly insured network is immune to the effects of hacking. Concerned about denial-of-service attacks? Get bandwidth interruption insurance. Concerned about data corruption? Get data integrity insurance. (I’m making these policy names up, here.) Concerned about negative publicity due to a widely publicized network attack? Get a rider on your good name insurance that covers that sort of event. The insurance industry isn’t offering all of these policies yet, but it is coming.

When I talk about this future at conferences, a common objection I hear is that premium calculation is impossible. Again, this is a technical mentality talking. Sure, insurance companies like well-understood risk profiles and carefully calculated premiums. But they also insure satellite launches and the palate of wine critic Robert Parker. If an insurance company can protect Tylenol against some lunatic putting a poisoned bottle on a supermarket shelf, anti-hacking insurance will be a snap.

Imagine the future…. Every business has network security insurance, just as every business has insurance against fire, theft, and any other reasonable threat. To do otherwise would be to behave recklessly and be open to lawsuits. Details of network security become check boxes when it comes time to calculate the premium. Do you have a firewall? Which brand? Your rate may be one price if you have this brand, and a different price if you have another brand. Do you have a service monitoring your network? If you do, your rate goes down this much.

This process changes everything. What will happen when the CFO looks at his premium and realizes that it will go down 50% if he gets rid of all his insecure Windows operating systems and replaces them with a secure version of Linux? The choice of which operating system to use will no longer be 100% technical. Microsoft, and other companies with shoddy security, will start losing sales because companies don’t want to pay the insurance premiums. In this vision of the future, how secure a product is becomes a real, measurable, feature that companies are willing to pay for…because it saves them money in the long run.

Other systems will be affected, too. Online merchants and brick-and-mortar merchants will have different insurance premiums, because the risks are different. Businesses can add authentication mechanisms—public-key certificates, biometrics, smart cards—and either save or lose money depending on their effectiveness. Computer security “snake-oil” peddlers who make outlandish claims and sell ridiculous products will find no buyers as long as the insurance industry doesn’t recognize their value. In fact, the whole point of buying a security product or hiring a security service will not be based on threat avoidance; it will be based on risk management.

And it will be about time. Sooner or later, the insurance industry will sell everyone anti-hacking policies. It will be unthinkable not to have one. And then we’ll start seeing good security rewarded in the marketplace.

A version of this essay originally appeared in Information Security Magazine:

An article on hacking insurance:


The Anna Kournikova worm was written using a virus-writing kit. If this doesn’t get Microsoft’s attention, I don’t know what will. And to the rest of you, just say no to Outlook.

Computer hackers could be prosecuted as terrorists under a new UK law: the Terrorism Act 2000. The Act significantly widens the definition of terrorism to include those actions that “seriously interfere with or seriously disrupt an electronic system.”
<> [link moved to,,s2084524,00.html]
The Terrorism Act:
Australians (at least those surveyed by ZDNet) agree with this:

What’s wrong with copy protection, by John Gilmore:

This article is about the Seti@home project: people fake results to improve their standings in the program. The security moral is that the “attack isn’t worth the effort” justification often doesn’t apply; people spend a lot of effort attacking things that have no monetary value.

I’ve repeatedly said that the Internet is too complex to secure. This article is about “Enterprise Application Portals,” one of the next big things. When you read this article, marvel at all the protocols and buzzwords and applications that are working in concert. “Infrastructure and access meet at the network edge, where organizations are increasingly driven to deliver pervasive, personalized content and commerce. Outside the network edge are billions of Internet devices. Inside the Network edge is the enterprise’s competitive machinery. Whatever organizations erect at the network edge must be highly scalable, reliable, available, and secure all the time.” Yeah, right.

IBM has withdrawn CPRM…
…and replaced it with something almost identical:
A good analysis by John Gilmore:

Claude Shannon dies:
<…> [link dead; try…]

Particularly amusing response to a Motion Picture Association threat letter to a researcher who has various instantiations of DeCSS on his Web page:

The U.S. General Accounting Office (GAO) has released this large report on making PKI work:

According to CNN, accused spy Robert Hanssen suspected that he was under surveillance and sent an encrypted message to his handlers: “The comment came from a letter that FBI officials said was encrypted on a computer diskette found in a package—taped and wrapped in a black plastic trash bag—that Hanssen dropped underneath a foot bridge in a park in Northern Virginia, immediately before his arrest. The FBI decrypted the letter and described it in an affidavit filed in support of its search warrant.” Interesting. Hanssen wasn’t stupid, and he probably was using a good commercial encryption product. What exactly did the FBI do to decrypt the letter?
The FBI’s affidavit, fascinating reading as it is, does not seem to confirm this news story:

“The Emperor’s New Clothes: The Shocking Truth about Digital Signatures and Internet Commerce.” Worth reading.
<> [link moved to]

A program called ShareSniffer automatically searches the Internet for Windows machines with world-accessible hard drives or directories. Certainly some people may want the world to access their hard drives, but most systems found are probably misconfigured.

More about last fall’s network break-in at Microsoft. Honestly, I can’t tell how much of this is accurate.

NIST released an intrusion-detection primer for federal agencies. It is useful reading for anyone interested.

And NIST has also released the draft FIPS for AES. If you have any last comments, this is the time to make them.

A deliberate backdoor in the Palm OS. It was put there to allow debugging and testing, but the programmers neglected to remove it. Oops. <…>

A steganographic file system for Linux:

Lessons in bad user interface. Why not to include an override button on your device.

The practicalities, and ethics, of honeypots:

The future of digital music licensing schemes?

The news is not that Amazon was hacked so badly, or that it went on for four months. The news is that Amazon denied it for so long, and threatened legal action against those who first talked about the hack.

It’s too late; Microsoft fixed it. But just a few days ago there was one more Q/A between the second and third question. It read: “Will the virus impact my Macintosh if I am using a non-Microsoft e-mail program, such as Eudora? If you are using a Macintosh e-mail program that is not from Microsoft, we recommend checking with that particular company. But most likely other e-mail programs like Eudora are not designed to enable virus replication.”

Most companies do not want to go public with security breaches:

Twelve steps to security. A good article (that quotes me extensively):

Counterpane Internet Security News

Counterpane is hiring again. Look at all the current job listings at <>

Bruce Schneier is speaking at the RSA Security Conference, Monday 4/9, at 9:00 AM, in San Francisco.

Schneier is speaking at two ISSA meetings, in Minneapolis on 3/20 at 1:30, and in Boston on 3/22 at 2:30.
Minneapolis: <>
Boston: <>

Counterpane signs Keynote and Conxion:

Counterpane and PricewaterhouseCoopers offer joint service:

Counterpane signs NetCertainty and OpenReach:

Schneier lectured on Digital Rights Management at the University of Minnesota.

Schneier has been interviewed (in Italian) here:

Harvard’s “Uncrackable” Crypto

Last month the New York Times reported a cryptography breakthrough. Michael O. Rabin and Yan Zong Ding, both of Harvard, proposed an information-theoretically secure cipher. (Yonatan Aumann was also involved in the research.) The idea is that a satellite broadcasts a continuous stream of random bits. The sender and receiver agree on several random starting points in that stream, and use the resulting streams as continuous keys to XOR with the message. Since the eavesdropper doesn’t know the starting points, he can’t decrypt the message. And since the stream is too large to store in its entirety, the eavesdropper can’t try different starting points.
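The construction is simple enough to sketch. Here is a toy model in Python, with an in-memory byte string standing in for the satellite broadcast; the names, sizes, and offset are all illustrative, not from the paper:

```python
import secrets

# Toy model of the broadcast-stream cipher. A public "satellite"
# stream of random bytes is visible to everyone; sender and receiver
# share only a secret starting offset into it.
BROADCAST = secrets.token_bytes(1 << 16)  # stand-in for the satellite stream

def xor_at_offset(data: bytes, offset: int) -> bytes:
    """XOR data against the broadcast stream, starting at a secret offset."""
    stream = BROADCAST[offset:offset + len(data)]
    return bytes(d ^ s for d, s in zip(data, stream))

secret_offset = 12345  # agreed in advance; unknown to the eavesdropper
msg = b"attack at dawn"
ct = xor_at_offset(msg, secret_offset)
assert xor_at_offset(ct, secret_offset) == msg  # decryption is the same XOR
```

The security argument rests entirely on the eavesdropper being unable to store the whole stream; once the stream is stored, brute-forcing the starting offset is trivial.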

That’s basically it. The crypto isn’t worth writing about (although there’s some interesting mathematics), but the context is.

One, the popular press does not count as peer review. I have often watched in amazement as the press grabs hold of some random piece of cryptography and reports on it as if it changed the world, only to ignore important pieces of research. When you read about something like this in the popular press, pay attention to the motivations of the researchers and the public relations people who convinced the reporters to write about it. Academic peer review will happen in the coming years.

One of my biggest gripes with these sorts of press announcements is that they ignore the research and the researchers that come before. The model and approach are not new; Ueli Maurer proposed it ten years ago. (If you want to look it up, the citation is: U. Maurer, “Conditionally-Perfect Secrecy and a Provably-Secure Randomized Cipher,” Journal of Cryptology, vol. 5, no. 1, pp. 53-66, 1992. I discuss some of this work in _Applied Cryptography_, p. 419.) Rabin and Ding are not to blame—their academic paper credits Maurer heavily, as well as other work that went before—but none of that came out in the press.

Two, while the paper’s mathematical result is a new contribution to cryptography, it’s nowhere near strong enough to unleash the full potential of the model. Maurer’s paper offers better techniques for finding public randomness, such as using the face of the moon as a public source of random bits (his model also includes a satellite broadcasting random bits), and better methods for establishing a secret channel in the presence of an eavesdropper. And the scheme is totally impractical. But because Harvard has a better public relations machine, this result magically becomes news.

Three, this scheme will never be used. Launching satellites gets cheaper all the time, but why would someone have them broadcast random numbers when they could be doing something useful instead? Remember, strong encryption is not our problem; we have secure algorithms. In fact, it’s the one security problem we have solved; solving it better just doesn’t matter. I often liken this to putting a huge stake in the ground and hoping the enemy runs right into it. You can argue about whether the stake should be a mile tall or two miles tall, but a smart attack is just going to dodge the stake. I don’t mean to trash the work; it is a contribution of theoretical interest. It’s just that it should not be mistaken for a practical scheme.

Oh, and by the way, an attacker can store the continuous random stream of bits from the satellite. Just put another satellite in space somewhere, and store the bits in a continuous transmission loop. The neat property of this attack is that the capacity of this storage mechanism scales at exactly the same rate as the data stream’s rate does. There’s no way to defeat it by increasing data rate. Isn’t satellite data storage science fiction? Sure. But no more than the initial idea.


Maurer’s Research:

A demo of one of Maurer’s schemes, more practical than the Rabin scheme:

TCP/IP Initial Sequence Number Flaw

Last week the security consulting company Guardent announced a new vulnerability in TCP/IP. This vulnerability is supposed to allow hackers to hijack TCP/IP connections and do all sorts of nasty things. They have not published technical details, leading some people to accuse them of making it up. There have also been accusations of plagiarism of a 15-year-old vulnerability. The reality is a bit more complicated.

The flaw centers around the ability of an attacker to predict TCP/IP sequence numbers (called Initial Sequence Numbers, or ISNs), and to use this as a lever to break into systems. Robert Tappan Morris (the son, not the father; the one who wrote the 1988 Internet worm) first wrote about this type of vulnerability in 1985. It became an occasional hacker tool after that; Kevin Mitnick used a sequence number predictor to break into Tsutomu Shimomura’s computer at the San Diego Supercomputer Center around 1995. Steve Bellovin wrote a paper extending this attack in 1989, and it started receiving some serious attention in the security community. Bellovin also wrote RFC 1948, which recommends using a virtual time base to randomize the ISNs and thwart this attack.

A number of vendors have opted not to implement RFC 1948, because of the (perceived) expense. Instead, they often used home-grown methods to randomize ISNs. Guardent’s recent work is an extension of the work of Morris and Bellovin. The researchers found new ways of getting information about the sequence numbers, and showed that hosts that don’t use RFC 1948 are still vulnerable.

There’s no plagiarism. The (still unreleased) Guardent paper credits all earlier work, even if the press release ignores it. There are new attacks, and real academic scholarship. What we do have is an over-enthusiastic public relations department touting yet another incremental improvement on a well-known class of attack. Interesting, but not worth all the press ink it got.

Guardent’s announcement:

CERT advisory:

Morris’s original paper:

Bellovin’s paper:

RFC 1948:

The Doghouse:

I’ll just reprint this from their Web site: “iBallot.Com uses a number of security and encryption features that, when combined, provide a very high level of security throughout the entire voting process. The details of this process are proprietary, for obvious reasons. It does not make a great deal of sense to disclose how the iBallot.Com security system works only to have a hacker come into the system, read about the system’s security, defeat the security and tamper with the voting process. For this reason, iBallot.Com does not publish its security processes. However, with the foregoing being said, the iBallot.Com system does employ encryption and secure server technology to ensure that the entire voting process is fair, accurate and not subject to tampering.”

Encryption and secure server technology…. Boy, I certainly feel better. Good thing they don’t disclose their security; if they did some hacker might read about it and break it. Who *are* these guys?


The “Death” of IDS?

Recently I’ve been seeing several articles foretelling the death of Intrusion Detection Systems (IDS). Supposedly, changes in the way networks work will make them an obsolete relic of simpler times. While I agree that the challenges IDSs face are serious, and that they will always have limitations, I am more optimistic about their future.

IDSs are the network equivalent of virus scanners. IDSs look at network traffic, or processes running on hosts, for signs of attack. If they see one, they sound an alarm. In _Secrets and Lies_, I spent several pages on IDSs (pp. 194-197): how they work, how they fail, the problems of false alarms. For here, suffice it to say that the two problems that IDSs have are 1) failing to detect real attacks, and 2) failing to ignore false alarms.

These two problems are nothing new, but several recent developments threaten to undermine IDSs completely.

First is the rise of IPsec. IPsec is a security protocol that encrypts IP traffic. An IDS can’t detect what it can’t understand, and is useless against encrypted network traffic. (Similarly, an anti-virus program can’t find viruses in encrypted e-mail attachments.) As encryption becomes more widespread on a network, an IDS becomes less useful.

Second is the emergence of Unicode. In the July 2000 Crypto-Gram, I talked about security problems associated with Unicode. One problem is the ability to disguise character strings in various ways. Since most IDSs look for character strings in packets that indicate particular network attacks, Unicode threatens to make this job nearly impossible.
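A toy illustration of the problem, using the well-known overlong UTF-8 encoding of “/” to slip a directory-traversal string past a deliberately naive matcher:

```python
# A naive signature matcher looks for the literal bytes "../", but the
# same traversal can be sent with "/" as the overlong UTF-8 sequence
# 0xC0 0xAF, which the matcher never sees (a buggy server may decode
# it to "/" anyway).
SIGNATURE = b"../"

def naive_ids_match(packet: bytes) -> bool:
    return SIGNATURE in packet

plain   = b"GET /scripts/../winnt HTTP/1.0"
encoded = b"GET /scripts/..\xc0\xafwinnt HTTP/1.0"

assert naive_ids_match(plain)        # the obvious form is caught
assert not naive_ids_match(encoded)  # the same attack slips through
```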

Third is the increased distribution of networks. Today’s traffic is no longer coming through one firewall, but rather via the firewall and hundreds of different direct external links to customers, suppliers, joint venture partners, outsourcing companies, IPsec gateways for telecommuters and road warriors, etc. This makes it very hard to monitor the traffic.

And fourth is the sheer speed of networks. For an IDS to be effective, it has to examine every packet. That is hard enough on an ordinary Ethernet network; on a gigabit network it is hopeless. Data transmission rates are getting so fast that no IDS can possibly keep up.

Some security experts are predicting the death of IDSs, but I don’t agree. Even with all of this, an IDS is still the most effective tool for detecting certain network attacks. But it is not a panacea. I think of IDSs as network sensors, similar to a burglar alarm on a house. It won’t detect every attack against the house, it can be bypassed by a sufficiently skilled burglar, but it is an effective security countermeasure.

And just as door and window alarms are more effective when combined with motion sensors and electric eyes, IDSs are more effective when combined with other network sensors. Tripwire, for example, is a host-based sensor that alarms if critical files are modified. Honeypots include network sensors that alarm if attacked.
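The idea behind such a file-integrity sensor is easy to sketch; this is a minimal illustration of the concept, not Tripwire’s actual design:

```python
import hashlib
import os
import tempfile

def snapshot(paths):
    """Record a baseline hash for each monitored file."""
    return {p: hashlib.sha256(open(p, "rb").read()).hexdigest() for p in paths}

def changed_files(baseline):
    """Return the files whose contents no longer match the baseline."""
    return [p for p, h in baseline.items()
            if hashlib.sha256(open(p, "rb").read()).hexdigest() != h]

# Demo: tamper with a "critical file" and watch the sensor fire.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"root:x:0:0:")
    path = f.name
baseline = snapshot([path])
assert changed_files(baseline) == []      # nothing modified yet
with open(path, "ab") as f:
    f.write(b"evil:x:0:0:")               # attacker edits the file
assert changed_files(baseline) == [path]  # alarm
os.unlink(path)
```

Like any alarm, it only matters if someone is watching when it goes off.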

The missing piece is a way to interpret and respond to these alarms. The whole point of building Counterpane Internet Security was to deal with the problem of these sensors going off. Someone has to watch these sensors 24/7. Someone has to correlate information from a variety of sensors, and figure out what’s a false alarm and what’s real. Someone has to know how to respond, and to coach the network administrator through the process. My hope is that someone is Counterpane. I think Counterpane is the company that finally makes IDSs look good.

“Imminent death of…” predictions come in a couple of forms. Most of them are sales pitches, forecasting that somebody’s product is going to kill off the victim, one way or another. (Often, most sane people will define the proposed killer as actually in the doomed class; most things that are “going to replace firewalls” are thinly disguised firewalls, for instance.) Those that aren’t sales pitches are mostly just nabobs of negativity, short-sighted people who like prophesying doom. (My favorite one of these is the first line of John Varley’s _Steel Beach_: “In five years, the penis will be obsolete.”)

Whole classes of products are hard to kill. They evolve in response to pressure for a very long time. IDSs are already evolving. They’re getting smarter, faster, and more distributed. The people forecasting the death of IDSs are looking at the pressures against them, but they aren’t proposing the kind of radical shift that would replace an IDS with something better. And until that happens, IDSs are here to stay.

Good article on the realities of IDS:

Interesting (and good) review of IDS:

Problems with IDS:

IDS and false positives:

Unicode and IDS:
<…> [link moved to]

Scholarly stuff:

802.11 Security

In February, researchers at (or formerly at) Berkeley published several security vulnerabilities in the Wired Equivalent Privacy (WEP) protocol, part of the 802.11 wireless LAN standard. This is the standard used by most wireless LANs, including Apple’s AirPort. The job of WEP is to prevent unauthorized eavesdropping on the wireless network.

The result of the vulnerabilities is that eavesdropping is easy. The details are not really worth describing; read the academic paper if you want to know. They are all a result of sloppy cryptographic engineering, and are easily fixable.

This “yet another vulnerability” story would normally not be worth writing about, but the real morals are not obvious and were largely ignored by the press. News stories were along the lines of: “There are problems with 802.11; they need to be fixed. We’ll all be more secure once they’re fixed.” I see a more general story: “There are problems in lots of protocols, we find and fix them randomly, and this doesn’t bode well for the future of security.” The 802.11 problems are just an example of this trend.

Security flaws like this are unnervingly common. To quote from the paper: “Design of secure protocols is difficult, and fraught with many complications. It requires special expertise beyond that acquired in engineering network protocols.” This time 802.11 was broken, but you should assume these sorts of problems occur in most other security protocols. Simply because some marketing literature says things like “uses 128-bit RC4” doesn’t mean that the product is secure. Odds are, it isn’t.

As bad as the discovered flaws are, there are usually worse security problems. WEP is nothing more than a password-access network: you type in the password, and the base station lets you on. The password serves as both the authenticator and the encryption key. Most implementations use 40-bit RC4 encryption (completely insecure in today’s environment), although some offer an option for 104-bit RC4. All users share a single key, which is stored in every computer on the 802.11 network, so the security does not scale to large networks at all. And even worse, the key is chosen and typed in by the network administrator, which means that the effective key length is probably even smaller than 40 bits, regardless of how many bits are in the encryption key.
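One generic consequence of this design is easy to demonstrate. WEP builds each per-packet RC4 key from a 24-bit IV prepended to the shared key, so IVs repeat quickly, and a repeated IV means a repeated keystream. The sketch below (textbook RC4 with an illustrative key and IV; this is the classic keystream-reuse problem, not one of the specific Berkeley attacks) shows what reuse leaks:

```python
def rc4(key: bytes, data: bytes) -> bytes:
    """Textbook RC4 (the cipher WEP uses), for illustration only."""
    S = list(range(256))
    j = 0
    for i in range(256):                      # key-scheduling algorithm
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for b in data:                            # keystream generation + XOR
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(b ^ S[(S[i] + S[j]) % 256])
    return bytes(out)

shared_key = b"\x01\x02\x03\x04\x05"  # illustrative 40-bit shared key
iv = b"\xaa\xbb\xcc"                  # the same 24-bit IV, reused
c1 = rc4(iv + shared_key, b"first secret packet!")
c2 = rc4(iv + shared_key, b"second secret packet")
# XORing the two ciphertexts cancels the keystream entirely, leaving
# the XOR of the two plaintexts for the eavesdropper to work with:
xored = bytes(a ^ b for a, b in zip(c1, c2))
assert xored == bytes(a ^ b for a, b in
                      zip(b"first secret packet!", b"second secret packet"))
```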

Protocols designed in secret, or by closed committees, are the worst. The 802.11 process was technically open, but in practice it was closed. Anyone could go to the committee meetings, if they wanted to pony up the airfare and registration fee and spend their time trying to decipher the 802.11 jargon. However, you couldn’t just grab the standard or read about the cryptography on the net. There was no free, generally available, public information.

Publishing the protocol allows for these flaws to be discovered, but doesn’t guarantee that they will be. The 802.11 protocol is an IEEE standard, and has been public since at least 1999, although any researcher wanting to read it has to pay the IEEE several hundred dollars for a copy. The reason these flaws were not discovered earlier is not because they’re subtle (they’re not), but because no one with sufficient cryptographic skill had read the standard earlier.

Flaws in these protocols are discovered more or less at random. The only reason a cryptanalysis of WEP was published is that one of the researchers became annoyed at the University of California Berkeley. The University was starting to deploy 802.11, but with some annoying usage limitations. Coincidentally, an officemate had just bought a copy of the standard for other purposes (nothing to do with security), so they took a look at it. The timing just happened to work out right, and they had a few hours to puzzle out the cryptography.

Discovering and fixing the flaws is not enough. There’s no reason to believe the WEP flaws will be fixed, or that the protocol will be secure after they are fixed. The 802.11 committee has been downplaying the vulnerabilities, but it’s putting a working group together to investigate them. The 802.11 standard governs hardware devices; upgrades are difficult and must be backwards compatible. Other problems undoubtedly remain, and any modification to WEP is likely to introduce new flaws. Unless they find a competent cryptographer to do the work, and submit the results to a rigorous peer review, the cycle will continue.

This trend is getting worse, not better. There are more and more protocols being designed to offer more and more security features. Most of these design processes are no more rigorous than the 802.11 process. Most of these processes do not include cryptographic peer-review; some of them are done completely in secret. (Some of the 802.11 people have said things like “we’re already open,” “we don’t need to fix our process,” “we *did* get peer review when we designed the standard, we asked the NSA whether it was any good.”) Many of these protocols are much more complex than whatever they are replacing, making security flaws even more likely. If these flaws are common now, they are going to be more common in the future.

The attacks:

Response from the Chair of IEEE 802.11:

An example of 802.11 security marketing:
“Increased Security Through Wired Equivalent Privacy
Wired Equivalent Privacy (WEP), an optional RC4 encryption algorithm, helps ensure the security of your data. Before data are transmitted, they are streamed through an RC4 algorithm, an efficient encryption process designed for LAN communications. Additionally, all RangeLAN802 devices are authenticated through a challenge-and-response mechanism before being allowed network access. Both wireless and wired LANs are thus fortified against eavesdropping and unauthorized access by hackers or other nearby 802.11-compliant devices.”
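The “challenge-and-response mechanism” touted in that marketing copy is itself one of the published weaknesses. WEP’s shared-key authentication sends the challenge in the clear and has the client return it XORed with an RC4 keystream, so an eavesdropper who observes a single exchange recovers the keystream and can answer any future challenge without ever learning the key. A minimal sketch (the keystream is modeled here as a fixed random byte string, standing in for RC4 output under an IV the responder reuses; real WEP also appends a CRC, omitted for brevity):

```python
# Sketch of the WEP shared-key authentication weakness: one observed
# (challenge, response) pair leaks the keystream, letting an attacker
# authenticate without the key. The fixed byte string below MODELS the
# RC4 keystream for a reused IV; it is not real RC4 output.

import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Keystream derived (in real WEP) from the secret key and a cleartext IV.
secret_keystream = os.urandom(128)

def legitimate_response(challenge: bytes) -> bytes:
    # The client proves key knowledge by returning challenge XOR keystream.
    return xor(challenge, secret_keystream)

# --- eavesdropper observes one legitimate exchange ---
challenge1 = os.urandom(128)
response1 = legitimate_response(challenge1)
recovered_keystream = xor(challenge1, response1)   # == secret_keystream

# --- attacker now passes authentication without knowing the key ---
challenge2 = os.urandom(128)
forged = xor(challenge2, recovered_keystream)
assert forged == legitimate_response(challenge2)
```

So the mechanism advertised as fortification against “unauthorized access by hackers” falls to a passive eavesdropper with one captured exchange.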

Comments from Readers

From: Rebecca Mercuri <mercuri>
Subject: Voting in Brazil

In your January Crypto-Gram you published a letter from Daniel Balparda de Carvalho, a Brazilian who commented favorably on their electronic voting system. The U.S. press touted the Brazilian equipment as something we should be emulating. However, I have heard from numerous Brazilian journalists and scientists who have noted difficulties with their system. On my voting Web site <> I have linked an excellent discussion of this subject by Michael Stanton. The translation “The Importance of Counting Votes” is available there, as well as the URL to the Portuguese version as originally published. I have also linked the Web site to the Brazilian Electronic Voting Forum maintained by Amilcar Brunazo Filho <>, which is a key resource on the subject and tells “the rest of the story”—although it is helpful if you can read Portuguese. I am concerned that the side of the Brazilian election that we are being shown here in the U.S. is not fully reflective of the true nature of the matter, and intend to continue to publicize other official reports as I receive them.

From: “Phillip Hallam-Baker” <hallam>
Subject: Codesigning

The attacks against Authenticode you printed in Crypto-Gram were considered in the design. The purpose of Authenticode is to give at least the same level of security as buying shrink-wrap software from a store. Shrink-wrap software can be compromised, and indeed there are a small number of cases of people selling CDs infected with viruses. Authenticode has already been a great success; distributing code through the net could have been a disaster on the scale of e-mail viruses.

A hacker could probably obtain a certificate if they were persistent enough. But that certificate would be tied to the name of the shell company they started for the purpose. They would not be able to get a certificate with the name “Microsoft” or “Blizzard” or any company that was well known.

If the hackers circulated the private key to any great extent, the compromise would soon be known. The certificate would be revoked and would not be accepted by the Authenticode signing service for future code signing requests. Software that had already been signed would still pose a risk, but this could be controlled through warnings in the press.

In the future it is likely that a higher level of security will be possible in enterprise configurations. Ideally each software installation would be referred to a central service for prior approval. This is not an acceptable option for consumer use, of course—at least not without a ready means of bypassing it so that the consumer can write their own code.

There is still a risk from program bugs, of course. But the buffer overrun problem cited should be considered a security weakness of C and C++. There have been languages with robust bounds checking on arrays since the 1960s. Unfortunately, bounds checking has only recently arrived in the C world in the Java and C# variants. But it is here now and programmers won’t have much excuse not to use it.

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

To subscribe, visit <> or send a blank message to <>. To unsubscribe, visit <>. Back issues are available at <>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of _Secrets and Lies_ and _Applied Cryptography_, and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is a venture-funded company bringing innovative managed security solutions to the enterprise.

