June 15, 2001

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.

A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

Back issues are available at <>. To subscribe or unsubscribe, see below.

Copyright (c) 2001 by Counterpane Internet Security, Inc.

In this issue:

Honeypots and the Honeynet Project

In warfare, information is power. The better you understand your enemy, the more able you are to defeat him. In the war against malicious hackers, network intruders, and the other black-hat denizens of cyberspace, the good guys have surprisingly little information. Most security professionals, even those designing security products, are ignorant of the tools, tactics, and motivations of the enemy. And this state of affairs is to the enemy’s advantage.

The Honeynet Project was initiated to shine a light into this darkness. This team of researchers has built an entire computer network and completely wired it with sensors. Then it put the network up on the Internet, giving it a suitably enticing name and content, and recorded what happened. (The actual IP address is not published, and changes regularly.) Hackers’ actions are recorded as they happen: how they try to break in, when they are successful, what they do when they succeed.

The results are fascinating. A random computer on the Internet is scanned dozens of times a day. The life expectancy of a default installation of Red Hat 6.2 server, or the time before someone successfully hacks it, is less than 72 hours. A common home user setup, with Windows 98 and file sharing enabled, was hacked five times in four days. Systems are subjected to NetBIOS scans an average of 17 times a day. And the fastest time for a server being hacked: 15 minutes after plugging it into the network.
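
For readers who want a feel for how such measurements are gathered: a honeypot, at its simplest, is just a listener that offers no real service and records every connection attempt. Here is a minimal, hypothetical sketch in Python (the Honeynet Project’s instrumentation is far more elaborate, capturing full packet traces and keystrokes; the port numbers below are illustrative):

```python
import datetime
import socket
import threading

# A minimal low-interaction "honeypot" sketch: listen on a port, offer no
# real service, and log every connection attempt.

def record_probe(peer_ip, peer_port, service_port):
    """Format one log line for a single connection attempt."""
    stamp = datetime.datetime.now().isoformat(timespec="seconds")
    return f"{stamp} probe from {peer_ip}:{peer_port} -> port {service_port}"

def watch(server_sock, max_probes):
    """Accept up to max_probes connections and return their log lines."""
    port = server_sock.getsockname()[1]
    log = []
    for _ in range(max_probes):
        conn, (ip, src_port) = server_sock.accept()
        log.append(record_probe(ip, src_port, port))
        conn.close()  # no banner, no service: just note that we were probed
    return log

if __name__ == "__main__":
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))   # loopback + ephemeral port, demo only
    srv.listen()
    port = srv.getsockname()[1]
    # Simulate one attacker probing us, from a background thread.
    threading.Thread(
        target=lambda: socket.create_connection(("127.0.0.1", port)).close(),
        daemon=True,
    ).start()
    for line in watch(srv, max_probes=1):
        print(line)
    srv.close()
```

Run against a public address, even a listener this crude will fill its log within hours, which is exactly the point of the statistics above.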

The moral of all of this is that there are a staggering number of people out there trying to break into *your* computer network, every day of the year, and that they succeed surprisingly often. It’s a hostile jungle out there, and network administrators that don’t take drastic measures to protect themselves are toast.

The Honeynet Project is more than a decoy network of computers; it is an ongoing research project into the modus operandi of predatory hackers. The project currently has about half a dozen honeynets in operation. Want to try this in your own network? Several companies sell much simpler commercial versions of what the Honeynet Project is doing. Called “honeypots,” they are designed to be installed on an organization’s network as a decoy. In theory, hackers find the honeypot and waste their time with it, leaving the real network alone.

I am not sold on this as a commercial product. Honeynets and honeypots need to be tended; they’re not the kind of product you can expect to work out of the box. Commercial honeypots only mimic an operating system or computer network; they’re hard to install correctly and much easier to detect than the Honeynet Project’s creations. And what’s the point? You’d be smarter to monitor activity on your real network and leave off the honeypot. If you’re interested in learning about hackers and how they work, by all means purchase a honeypot and take the time to use it properly. But if you’re just interested in protecting your own network, you’d be better off spending the time on other things.

The Honeynet Project, on the other hand, is pure research. And I am a major fan. The stuff they produce is invaluable, and there’s no other practical way to get it. When an airplane falls out of the sky, everyone knows about it. There is a very public investigation, and any aircraft manufacturer can visit the National Transportation Safety Board and read the multi-hundred-page reports on all recent airline crashes. And any airline can use that information to design better aircraft. When a network is hacked, it almost always remains a secret. More often than not, the victim has no idea he’s been hacked. If he does know, there is enormous market pressure on him not to go public with the fact. And if he does go public, he almost never releases detailed information about how the hack happened and what the results were.

This paucity of real information makes it much harder to design good security products. The Honeynet Project team is working to change that. I urge everyone involved in computer security to visit their Web site. Great stuff, and it’s all real.


The “Know Your Enemy” series of essays:


Crypto-Gram Reprints

Timing attacks, power analysis, and other “side-channel” attacks against cryptosystems:

The internationalization of cryptography policy:
and products:

The new breeds of viruses, worms, and other malware:

Microsoft SOAP:

The Data Encryption Standard (DES):


Newly declassified documents about the 1993 Clipper program. More reasons not to trust key escrow:

Nasty semantic attack—a new worm disguises itself as a virus alert from Symantec.

The problems with security benchmarks:

In “Secrets and Lies,” I wrote about phone-system hacking to reroute phone calls from one number to another. It looks like a big problem in Las Vegas:

The European Parliament recommends that people should use encryption to protect themselves from Echelon:
<> [link moved to]

There’s been a lot going on at NIST. You can read about the latest AES standard comments here:
Here is a draft of the revised Secure Hash Standard (FIPS 180-2); comments are due at the end of August.
You can read comments on the HMAC standard here:
And NIST is hosting a second Key Management Workshop on 1-2 November:

The FBI’s National Infrastructure Protection Center isn’t as good as you might think, according to a GAO report:
The GAO report:

McDonald’s is testing a form of electronic cash. I predict this will be successfully hacked within months, if not weeks. After all, there’s real money to be made here:

An example of a “benevolent” virus: the Cheese worm. Ask any network administrator what he thinks of something sneaking into his network and making changes without telling anyone.
Here’s another “benevolent” virus; this one attacks child pornography:
I’ve already written about why this is a bad idea:

This is a pure semantic attack. It’s a virus warning that tells you to delete a “dangerous” file from your computer, a file that is in fact an important file. There’s no malicious code, no attachment, no nothing. The victim does the damage to himself, simply because he believes the e-mail.
But there’s a variant of this e-mail that actually contains a virus:

Two good stories about insider attacks and the damage they can cause:
<…> will no longer mirror hacked Web sites:

A really good source for information on different election systems:

Another reason that documenting security is no longer optional:

This cell phone offers strong end-to-end encryption. Near as I can tell, it’s not yet available in the U.S.

Ed Felten, with the help of the Electronic Frontier Foundation, sues the recording industry over his research on their digital copyright schemes. This is a big deal. There’s lots of information at the EFF Web site for the case:
I have written a brief that will be filed a few days after this issue of Crypto-Gram is published. Look for the link on the EFF Web site.
News reports:

Someone in Ottawa set up a scanner to eavesdrop on cell phone calls, and then made the results available on the Internet in streaming MP3 format. Near as anyone can tell, this is perfectly legal.
The biggest misconception in the article is the notion that digital cellular is inherently more secure. I remember when analog cell phones were new. The industry argued that strong security was not necessary because scanners were expensive and rare. Now scanners are cheap and plentiful. Today the industry argues—like they do in the article above—that strong digital cell phone security is not required because digital scanners are expensive and rare. They are, but how soon before they are cheap and plentiful?

Counterpane Internet Security News

Counterpane has announced a reseller agreement with Exodus Networks:

Schneier has written a new white paper on Counterpane’s Managed Security Monitoring service. You can download a copy at:

Schneier is speaking at the Black Hat Briefings in Las Vegas this July:

Lately I’ve been getting sloppy at crediting and thanking other people for their help and contributions to Crypto-Gram. In the May issue, much of the essay “Defense Options: What Military History Can Teach Network Security, Part 2” was sent to me in a commentary e-mail by Anna Slomovic. And many of the points in “Safe Personal Computing” were originally made in an e-mail from Eric Scace. Both of those people deserve a special apology. Additionally, I regularly ask various people to comment on drafts of my essays, and their words sometimes end up in the final copy. People who come to mind include Ross Anderson, Steve Bellovin, Niels Ferguson, Greg Guerin, John Kelsey, Peter Neumann, Marcus Ranum, Mike Stay, David Wagner, and Elizabeth Zwicky, although there are certainly others. And many, many people send me news items for inclusion in Crypto-Gram; thank you to those people as well.

Invicta Networks

Invicta Networks announced a new security product. Normally, the announcement would drown in the sea of other random security announcements, but Invicta has a lot of star power behind it. The head of the company, Victor Sheymov, is a former KGB agent who defected. R. James Woolsey, former head of the CIA, is on the board. The insurance giant AIG is involved. So it got some major press attention that it doesn’t yet deserve.

There is very little technical information available about Invicta’s technology. The Web site is useless. I tried contacting the company, and was told that they are still filing patents and technical details are only available under NDA. I received a single four-page white paper that was long on hyperbole and short on details.

Actually, the white paper contained enough snake-oil talk to make me suspicious. “Invicta’s patent-pending technology makes the network it protects invisible to hackers. This technique provides unbreakable security against internal and external hacking, denial-of-service attacks, and instructive viruses….” “…a fundamentally new way of protecting private networks.” “…a powerful conceptual shift in cyber security….” If there’s any real science here, the marketing people have obliterated it.

There are some hints as to what they’re doing. Near as I can tell, Invicta’s “Variable Cyber Coordinates system” is a hardware security product that constantly changes the IP addresses of computers on a network. The idea is that if hackers want to target your company, they won’t be able to attack the machines because they don’t know their addresses.

Not a bad idea, actually. It could even provide some security. It won’t solve everything; remember that many huge security vulnerabilities come from all those network services you want to make available to the world (e.g., Web servers), and these have to have public IP addresses. And adding a hardware card to every one of a network’s computers won’t be cheap, and I wouldn’t throw away any other security measures just yet. I certainly wouldn’t echo Woolsey’s characterization as an “absolutely remarkable intellectual achievement.” Or Sheymov’s assertion that it “will start a new chapter in Internet history.”

I also wouldn’t agree that it’s “a completely different direction than anybody else.” In another newsletter, Crispin Cowan wrote: “DARPA (Defense Advanced Research Projects Agency) explored this idea in a red team experiment several years ago. The defenders employed the randomized address technique, without the attacker’s knowledge. The technique significantly slowed the attackers, until they figured out what was going on, at which point effectiveness diminished. Problem: the effective random search space (the size of your subnet) is small.” I received another e-mail saying that the Invicta system is similar to something called NetEraser from SAIC. Note to the U.S. Patent Office: please look at the prior art before you issue Invicta a patent.
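
Cowan’s objection can be made concrete with a little arithmetic. The sketch below (host counts and subnet sizes are illustrative assumptions, not measurements) estimates how many random probes an attacker needs, on average, to hit a machine whose address rotates uniformly within its own subnet:

```python
# Back-of-the-envelope on the "small search space" problem: if a machine's
# address is drawn uniformly from its own subnet, an attacker probing
# random addresses in that subnet hits a live host with probability
# (live hosts / subnet size) per probe, so the expected number of probes
# before a hit is the reciprocal.

def expected_probes(subnet_bits: int, live_hosts: int) -> float:
    """Expected random probes to hit one of `live_hosts` machines whose
    addresses rotate uniformly within a /subnet_bits IPv4 subnet."""
    addresses = 2 ** (32 - subnet_bits) - 2   # minus network/broadcast
    return addresses / live_hosts

for bits, hosts in [(24, 20), (16, 20)]:
    print(f"/{bits} subnet, {hosts} live hosts: "
          f"~{expected_probes(bits, hosts):.0f} probes on average")
```

For a /24 with 20 live hosts, that works out to roughly a dozen probes; even a /16 yields only a few thousand. Rotating addresses raises the attacker’s cost, but not by much.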

One interesting footnote to this whole story is that a major insurance company is willing to give a 10% discount to people using this system. I have two reactions to this. The cynical one: it’s a PR ploy, they would give a 10% discount to any reluctant customer who asked for one. The idealistic one: this is another step forward to the time when the insurance industry drives the computer security industry.

Company Web site, with a tiny amount of information:

News article:

DDOS Attacks Against

Steve Gibson has written a fascinating and entertaining essay about his experiences with a distributed denial-of-service attack against his Web server. It had good analysis, conversations with teenage hackers, and general predictions for the future. I strongly urge everyone to read it.


Go on…I’ll wait.

Okay. Now read his “Open Letter to the Internet’s Hackers”:


Ignore the details of the attack, and his arguments that Windows XP will make the situation much worse. Concentrate on the big picture. I think this story has an enormous lesson for all.

Here’s a guy who is reasonably knowledgeable in computer security. He’s a computer consultant, with a Web site that is integral to his livelihood. He has had this Web site attacked, repeatedly, simply because he was *rumored* to have said something mildly disparaging about hackers *in general*. He spent a lot of effort defending himself, even to the point of trying to engage his attackers. Yet in the end, he realized that there was no defense, and he surrendered unconditionally.

To a 13-year-old!

Imagine if that happened in the real world. Imagine that you were prevented from entering your home—that some random teenagers piled junked cars at the end of your block—because they incorrectly believed that you used a phrase like “typical irresponsible teenagers.” Furthermore, imagine that your attempts at removing the cars and returning home were constantly thwarted (more cars were being deposited every minute), that the police could do nothing, and that these random interruptions continued to occur. In the end, imagine that you had to surrender your ability to access your own home to these random attackers.

The ordinary citizens of the digital world are in thrall to teenage terrorists, and nobody seems to be paying attention. How long will it take before some of these guys figure out they can extort money or other valuable goods with their ambushes? This situation is not going to magically get better. There is no technology waiting in the wings that is going to solve this problem. And as Steve Gibson said in his essay: “We can not have a stable Internet economy while 13-year-old children are free to deny arbitrary Internet services with impunity.”

I’m not surprised that Gibson could not defend himself. DDOS attacks are a network problem, not a computer problem. Most so-called “network security problems” are nothing of the sort; they’re host security problems, with the network as the conduit. DDOS is a network security problem; it’s the network’s resources that are being abused. Gibson couldn’t prevent the attacks because the problem wasn’t in anything under his control. It’s up to the ISPs to figure out how to stop such things.
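
One concrete thing ISPs can do is ingress/egress filtering, described in RFC 2827: drop any packet whose claimed source address could not legitimately have come from the customer link it arrived on, so that a customer’s machines (zombies included) cannot spoof. A hypothetical sketch, with made-up customer prefixes from the documentation address ranges:

```python
import ipaddress

# Egress filtering at the ISP edge (the idea behind RFC 2827): a packet
# leaving a customer link is forwarded only if its source address falls
# within that customer's assigned prefixes.  The prefixes below are
# illustrative assumptions, not real allocations.

CUSTOMER_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),   # TEST-NET-3, stand-in prefix
    ipaddress.ip_network("198.51.100.0/24"),  # TEST-NET-2, stand-in prefix
]

def permits(source_ip: str) -> bool:
    """Forward only if the claimed source fits the customer's prefixes;
    anything else is a spoofed source and gets dropped at the edge."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in CUSTOMER_PREFIXES)

print(permits("203.0.113.77"))  # True: legitimate customer address
print(permits("192.0.2.1"))     # False: spoofed source, drop it
```

This doesn’t stop a flood of honestly addressed packets, but it does make the attacker traceable, and it blocks the spoofed-source attacks Gibson described.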

Unfortunately, most of the press about this escapade has centered around Gibson’s accusations against Microsoft. He claims that Windows XP will make this much worse, and Microsoft has responded with its typical press propaganda. That’s a pity, though, because I think Microsoft is mostly right here. It’s just not true that you can’t spoof Internet packets with current versions of Windows. It’s not easy, but it’s not impossible. Yes, Windows XP will make it worse. But as Gibson points out, it’s amazingly bad right now.

The fundamental problem is that the user does not have control over his desktop; the details of the operating system are secondary. It’s certainly true that Microsoft makes it easier for viruses, worms, and Trojans to spread and do damage—we’ve seen that with all of these viruses that can automatically spread with Outlook but not with Eudora—but we can never fix the problem until we can secure the desktop.

This won’t solve the DDOS problem, but it will at least make it harder to recruit zombies.

News articles:

Comments from Readers

From: Richard Howard <racebannon>
Subject: Military History and Computer Security

I am a network manager in the Army, and have been struggling with the question of network defense for the past ten years. I am presenting a paper in June at the 2nd Annual IEEE Systems, Man, and Cybernetics Information Assurance Workshop on this very subject.


I think defending the network and defending the hilltop are exactly the same. Just look at your examples. The attacker of a hilltop can choose when and how to attack. He generally knows the capabilities of his enemy and which weapon systems to anticipate. The longer a defender stays in one location, the more he can improve his position. He starts by putting men on the ground. As time permits, they dig foxholes, then fill sandbags for overhead cover, etc., etc. And the defender can still get whacked if he overlooked something like a key piece of terrain or an obscure avenue of approach.

I completely agree with the notion of outsourcing security. It is not that I don’t want to do security; I never have enough people to get there. And, in order to do it right, the security people have to know it all, from operating systems to networks to programming to the Web. I usually have to devote these smart people to other problems that occur day-to-day and handle security as time permits. Unfortunately, contracting out to a commercial security firm like yours is not really an option for the military, especially in times of war. And, if we go to war, my bosses are going to want all of this security stuff and will be ill-prepared to deal with it.

From: Scott Tousley <stousley>
Subject: Military History and Computer Security

I think you need to consider the lessons of Vietnam, the Balkans and now Palestine. Your cited examples are largely “clean” military fights, whereas I think the network security problem is how to deal with the small percentage of malicious activity in a sea of legitimate traffic, configuration problems and smorgasbord software. How do we handle network security as guerilla warfare, which requires thoughtful defense, focused attack on known targets, and a long-term top-level coherence imposed on a society living with low-level warfare alongside normal economic and social activity?

From: Dan Cieslak <danC>
Subject: Military History and Computer Security

I believe that your analogy is flawed. In conventional military defense, the defender generally knows against whom to defend and the likely nature of the attack method. Network security is more like defending against guerilla warfare—the attackers look much like civilians, are decentralized, and aim mainly at creating havoc and chaos rather than taking a particular position. While I am no military history buff, I would argue that using conventional defenses against a guerilla offense does not work (witness Vietnam). Thus, the current situation, with network defenses modeled on conventional passive defenses, is ultimately flawed.

From: Mathias Dybvik <md>
Subject: Military History and Computer Security

> Warfare has taught us again and again that active defenses and
> counterattacks are far more effective than passive defenses.

This is not correct. The most effective defense is *not being attacked*. This can be achieved by not being seen (or, in the high-tech era, “sensed”). It can also, as David Wallace pointed out, be accomplished by deterrence. The second most effective defense is, in general, to be “untouchable.” This can be achieved through rapid movement, difficult terrain, thick bunker walls, or by seeking cover in a cave. Or through keeping a critical corporate LAN separated from the Internet.

> Look at the Battle of Gettysburg in the American Civil War.
> Look at the Battle of the Bulge in World War II. Look at Leyte,
> Agincourt, and almost any piece of military history.

Very little war history is written about successful passive defenses. There simply isn’t anything interesting to write about if you can’t reach, or harm, the enemy! Nor are there many interesting news articles written about corporate networks that aren’t attacked.

The interesting stuff, the pivotal points in history, typically comes as a result of counterattacks. Counterattacks play a crucial strategic role, and they are the only means we can employ to turn the situation around. They are also incredibly costly, so we would want to employ such tactics under favorable circumstances; that is, we would like to have such confrontations take place when we are ready for them.

However, in order to be able to stay alive long enough to accomplish such feats, most of us would rely on passive defenses. We carry our gas masks, we move in the dark and we encrypt our communications. We also keep those firewalls configured to reveal the minimum amount of information about our networks.

> Even in the animal kingdom, teeth and claws are a better
> defense than a hard shell or fast legs.

If you looked at the number of animals that (successfully) rely on not being seen, heard or caught by predators, vs. the number that attempt to kill their attackers, I think you might be surprised.

But is there anything military history can teach Network Security? Lots!

1. Fight only the battles you can win. Don’t expose parts of the network that you can’t protect. Don’t wage a war using flawed algorithms or inferior technology. (MPAA take note.)

2. Try to be invisible. It’s extremely hard to attack something you don’t know about.

3. Even if you think you are invisible, you aren’t. Always act as if you aren’t. Encrypt internal traffic.

4. Seconds count. Your ability to respond to a change in the situation can be crucial. Reconnaissance (or a network monitoring system) can give you the information you need to respond, but it’s up to you to act on it. If you are not comfortable with this, then consider paying someone who is.

5. Technology inevitably fails; prepare for it. Always wear belt *and* suspenders. It’s a little-known fact, but your firewall was actually made by the lowest bidder, and the software completed at 3:30 AM by a caffeine-crazed intern.

6. Friendly fire hurts. Both you and the enemy are human. Live with it. People in both camps will do things that are radically unexpected. Is your technology prepared for an attack (malicious or accidental) from the inside?

From: Daniel Cvrcek <dcvrcek>
Subject: Military History and Computer Security

Reading your article made me think about some issues related to computer crime. I am from the Czech Republic, where the Personal Information Security Act has recently been passed. I suppose that it is a well-written law, although some mistakes have been made. What I want to mention is the existence of a team of nine persons who are allowed to visit and inspect the facilities of any entity processing personal information. They do not need a court order or a search warrant, as the police do.

Some people do not like it, saying that a police state is returning. I do not agree with that statement, and believe that this is part of a general movement necessary to prosecute computer crime in the future. I mean that without special teams with very strong authority, able to act very fast, it will be very hard to suppress computer crime performed by (semi-)professionals.

I see the reverse side of this also. It is impossible to fully oversee such teams, and a great deal of trust must be placed in them. But is there another way? It seems to me that active defense and counterattacks follow the same idea—fast response to attacks, automatic when possible, and strong confidence in the instruments and people doing the work. Maybe there will be certified software for the purpose in several years.

From: Anonymous
Subject: FBI Tactics

> Impressive investigative work by the FBI. This is the kind of
> thing I like to see the FBI doing, rather than mucking about
> with surveillance tools like Carnivore.
> <…>
> <…>

Impressive? Hmmm… it sounded awfully close to the U.S. Government applying domestic laws extra-nationally. How would you like Russian laws applied to your conduct while in the U.S.?

I also recall that the U.S. Government described the holding of the EP3 personnel as “kidnapping”. It would be interesting to hear how they’d describe U.S. nationals being knowingly and deliberately lured and then arrested by a foreign government, as happened in this case. Sure sounds like kidnapping to me.

From: Wouter Slegers <wouter>
Subject: The Dutch Government and Key Escrow

> The Dutch government is forcing trusted third parties
> to use key escrow.
> <>

This is not correct. The leaked internal draft of the TTP chamber referred to states that the law enforcement agencies should have the same access to encryption managed by TTPs as with any other provider of telecommunication infrastructure. Under the new Dutch telecommunications law, any provider of general telecommunication infrastructure (telephone companies, ISP et al), when served with a wiretap order, must provide the traffic of the specified customer as readable as possible. This includes providing the encryption keys used for that traffic iff the provider has access to them (or providing the plain text communication). A TTP storing or providing keys directly used for encryption is no different.

However this is *only* relevant for TTPs that hold the keys needed to decrypt your traffic. In most setups TTPs simply do not hold these keys and can therefore not be forced to give them. And no, this does not mean the TTPs are forced to implement key escrow if they didn’t already provide it.

From: Ken Ayer <kayer>
Subject: Common Criteria

There is a discussion underway amongst those trying to work with the Common Criteria as to whether it can be applied to an entire system and whether it can be applied to a process (such as personalizing a smart card). That discussion has not yet reached an agreed upon conclusion. However, there is considerable agreement on its ability to usefully test the security of a component of a system, such as a smart card. Knowing the security of a smart card that is part of a system is a very useful step, just as knowing how strong a padlock is helps in securing a building. A strong padlock alone doesn’t mean that the building is secure, but it should mean that a thief is going to have to expend a knowable amount of effort to defeat it. Knowing that, we can turn our attention to the door itself—a great padlock on a half-inch plywood door doesn’t make much sense, but neither does a weak padlock on a bullet-proof door. The Common Criteria at least provides a possible way of rating the security of parts of a system.

Some kind of testing system is needed if you’re going to compare padlocks or smart cards, which is also a task we need to do. Every vendor (whether of a chip, card, lab or consulting service) says it’s the best, but we need a way to compare these claims. The Common Criteria provides a way to compare products and services against a common standard. The Protection Profile mechanism provides a way for users to specify their requirements and is open to refinement by users. Users can write new components (requirements) in to their Profile, which gives this system more flexibility than has been available in the past.

What Visa has done with the Common Criteria is to start a dialogue on how to clearly express security requirements, how to show that a product meets those requirements, and how to test whether the product does in fact meet them. Other payment systems, vendors, associations and government bodies have joined the dialogue, which has not been a Visa only effort for the past two years. We are making progress, though there remains work to be done. There is nothing that prevents anyone else from using the Common Criteria in the same way for their own needs. It doesn’t offer easy solutions, but it does offer a framework that can be tailored to many different needs. The only question is whether people are willing to make the effort it takes to make it work. The alternative is to continue with everyone claiming that they’re the best and only solution and no way to compare or sort out the claims. That doesn’t give better security, it just reduces it to who can shout the loudest.

From: rrobles2
Subject: Common Criteria

You mentioned a few things about the Common Criteria that don’t fully reflect its process. While it’s true that the Common Criteria offers evaluation based on Protection Profiles, these evaluations don’t necessarily need to claim compliance with any Protection Profile. An evaluation can also be based solely on a Security Target document, which is specific to the evaluated product.

You also stated that these evaluations “won’t tell you how to configure your CheckPoint firewall, or what security settings to run on Windows 2000.” This is not really true. Under the Common Criteria, the Security Target, which contains a definition of the Target of Evaluation, is publicly released. The Security Target document also lists the key security features examined in that evaluation.

The Common Criteria is basically governed by specific Schemes in each country; in the U.S., that is the National Information Assurance Partnership (NIAP). Requests are passed to product developers by the procurement offices. These developers then contact a certified laboratory to conduct the evaluation and coordinate the acquisition of a certificate upon successful completion of the evaluation.

The Common Criteria process is a very young process and if it is to have any longevity it will require that people be fully aware of its scope. I’m not really sure if you were aware of these things about the Common Criteria.

From: Paul Kocher <paul>
Subject: Digital Content Protection

I enjoyed reading the analysis of copy protection in the May 15, 2001 Crypto-Gram. While I agree that many existing copy protection systems are badly flawed, I don’t think that anti-piracy efforts or copy protection are fundamentally doomed for technical reasons.

Even though content can always be pirated with enough effort, this doesn’t make content protection a failed concept. Pirates have many inherent disadvantages. For example, they can’t build infrastructure or large organizations without being sued or prosecuted. Similarly, their customers generally don’t trust them because they are criminals.

Furthermore, copy protection systems don’t need to last forever to be worthwhile. The value of most content falls rapidly after it is released. Anti-piracy systems that are effective for even short periods of time can protect most of the business value of content, provided that they can be renewed so that new content is not vulnerable to old attacks. In general, anti-piracy technologies can be effective if they degrade the user experience of pirates’ customers or reduce piracy to manageable levels.

While it is futile to try to totally eliminate risks, it is possible to manage them. The fact that CSS and other copy protection systems are badly designed and insecure doesn’t mean that systems with secure revocation, tracing of pirated content, risk management of output ports, and good renewability won’t prove effective. For this reason, I disagree with the assertion that “unrestricted distribution is a natural law of digital content”—I don’t see piracy as the result of a natural law any more than stealing, cheating, and eavesdropping are.

From: Mitch Wagner <mwagner>
Subject: Digital Content Protection

Let’s accept for the sake of discussion that copyright is unenforceable for digital media. I’m not convinced that’s the case, but let’s assume it to be true for the moment.

How else do we ensure that novelists get paid if they can’t sell copies of their novels? If we rely on advertising, corporate sponsorship, or patronage, then the advertisers, corporate sponsors and wealthy patrons, who now indirectly control what gets published because they own the distribution mechanisms, will be able to directly control it. Subscription won’t work in the copyright-free future, because for a subscription to work, the distributors need to control the distribution mechanism, which won’t be true in our hypothetical digital future. Paying writers for public appearances, or other incidentals, isn’t paying them to write—it’s paying them to stand up in front of a room and talk INSTEAD OF writing.

The neat thing about copyright as it stands today is that, imperfect as it is, it is a means of paying people to create by paying for the creations. Yes, copyright is now the subject of grotesque abuses by big companies, and from a societal perspective these abuses are almost as bad as no copyright at all. But abolishing copyright would be worse.

From: David Gibson <david>
Subject: Digital Content Protection

As well as the fundamental problems with “digital content protection” there are also likely to be some more superficial but nonetheless significant problems, which may well bring down such systems before the fundamental problems do.

In particular, many of these schemes are exceedingly complicated and are likely to increase the complexity of the hardware and firmware implementing them by at least an order of magnitude. Those who have written hardware device drivers know that even for relatively simple devices such as network cards, the firmware (and even the hardware) often contains numerous significant flaws. Mostly these are worked around in the drivers, but at the cost of considerable extra complexity there. Moreover, these schemes, unlike most firmware, are designed to prevent software or drivers from doing things, which will limit the ability to make workarounds in software. The chances of there being widespread non-buggy implementations of these vastly more complex content protection schemes are negligible. This has two effects:

First, some of the flaws in some of the implementations are likely to compromise the system’s security. This will first of all allow people to bypass the scheme on the flawed device, and second it is likely to greatly assist in deciphering and comprehensively breaking the system even for non-buggy implementations. This is pretty much what occurred with CSS: one DVD player manufacturer’s oversight in obscuring the decryption code greatly expedited the creation of DeCSS.

Second, devices using these schemes will be just plain flaky, which in turn means they are likely to face considerable resistance in the marketplace, even from users who have no interest in bypassing the content protection system.

From: Richard Straub <richard.straub>
Subject: Digital Content Protection

Preventing unauthorized people from copying or distributing intellectual property is a natural law of this world, whether the product is digital or physical. Unauthorized copying is simply illegal in most if not all countries. That is a fact. Whether enforcement is an impossible task is another issue. The entertainment industry is indeed trying to protect its property. That is natural and its right, and not against natural law. The industry is looking to technology to help. I agree that technology alone is certainly not sufficient, and doomed to fail.

No security system is 100% sure. We all agree. Even the knowledge of how to crack a safe can be distributed to average users and make them "professionals," similar to software cracking. Skills can be copied.

It is not unusual for an industry to protect or control an end-user device. There are many existing examples: the energy meters in your home, the taxation systems built into European trucks for customs, the "black boxes" built into airplanes, the phones in Europe (in the monopolistic days), etc. So what's wrong with controlling the display device at home, if it protects your property? It might not be the way of the future, I agree.

Legal protection is an additional tool and a solution to the problem of protecting property—whether intellectual or natural property. This will not change. International legislation is certainly the key. In some countries the distribution of alcohol is under strict control—in some it is not.

I strongly agree that new business models in combination with technology and legal protection AND a thorough understanding what users want and how they want it will be the way to go for protecting the content owners property and assure them a revenue stream. Without revenue, Hollywood will no longer be able to produce movies. Those who will figure out how to leverage what you call “natural law” will make the money. Totally agreed.

Bottom line: The digital world is not so different than the existing world for natural products. Protection of property IS a natural law.

A way to look at things differently, or at a paradigm shift, in an example: In Amsterdam, the tram system (street cars) works as follows. The tram rolls into the tram station. You enter the tram at the back door; the front doors are exit only. Entering the back door, you are faced with a ticket officer, selling you the ticket for your ride. This way it is basically guaranteed that there is no free ride; everybody entering pays. You could also call this the principle of "we believe everybody is inherently trying to hitch a free ride or tends to cheat."

This system was in use in Zurich, probably more than ten years ago. Today, the tram system in Zurich is based on the principle "the majority does not cheat or does not break the law." So, the tram rolls into the station. You can board at any door. There's no ticket officer on the tram. No, it is not a free system. You are supposed to purchase the ticket at a ticket dispenser at the tram station BEFORE you enter the tram. Are there people cheating? Of course. How do you prevent the majority from cheating? By imposing a high fine.

This works pretty well to the benefit of all parties: flexibility and ease of doing business for the customers (apart from the feeling "they trust me"), reduced cost (personnel), and bigger throughput (no jam at the back door). Does this principle work everywhere? Not necessarily. Culture is another element to consider. Some cultures see cheating as a sport, and some as a strict crime.

From: Marco Rooney <m.j.Rooney>
Subject: Semantic Attacks using HTTP address

Interesting story about the fake BBC page. I am currently using the Opera browser. When I tried to access the page, it asked me if I realized that I was not contacting the BBC page, but another page with the BBC address as the username. It explicitly warned me that this method is used to mislead people. Very nice feature to prevent these uncomfortable situations. It should be standard on any browser, really.
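The trick Opera catches here exploits the userinfo field of a URL: everything between "http://" and an "@" sign is treated as a username, so a URL like http://www.bbc.co.uk@.../ actually points at whatever host follows the "@". A minimal sketch of the kind of check a browser could apply (the hostnames here are hypothetical examples, not from the original letter) might look like:

```python
from urllib.parse import urlsplit

def has_userinfo_trick(url: str) -> bool:
    """Flag URLs whose network location contains an '@' sign,
    i.e. a 'username' that may be masquerading as a hostname."""
    netloc = urlsplit(url).netloc
    return "@" in netloc

# A URL dressed up to look like a BBC address, but really
# pointing at 10.0.0.1, is flagged; a genuine one is not.
print(has_userinfo_trick("http://www.bbc.co.uk@10.0.0.1/fake"))  # True
print(has_userinfo_trick("http://news.bbc.co.uk/"))              # False
```

A browser applying this heuristic would then warn the user, as Opera did, rather than silently connecting to the host after the "@".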

From: bryk
Subject: A Cyber UL

I wish to present a short counter-argument to your article about a cyber Underwriters Laboratories. You focused on a UL for computer networks, in part as a response to the newly formed Center for Internet Security. While I tend to agree with many of the points you make in the article, you didn't explore any scenarios in which a cyber-UL would make sense.

It seems to me that industry (and government) is in need of some mechanism (not necessarily a cyber-UL) that satisfies three key needs that organizations have: 1) A set of best practices, 2) A means for charting an improvement path, and 3) A measurement capability to assess/evaluate progress/status on the improvement path.

A number of efforts have tried (and IMHO, failed) to satisfy all three needs (e.g., ISO 17799, SSE-CMM). Some industries have "rolled their own" mechanisms with varying degrees of success. Others have focused on a single need and done much better (e.g., the SANS Top 10 Security Vulnerabilities is a reasonable tool). Your oft-repeated argument that security technology (e.g., Check Point firewall settings) changes too fast to allow standards is counter to your "Security is a process, not a product" mantra in your S&L book.

Industry needs a process-based standard that does a good job of addressing the needs above. It is easy to criticize poor standards or poor types of standards, but what is needed is thinking about what kinds of standards could be useful (and then how to go about creating them). Once a satisfactory mechanism to satisfy all three needs is available, the process of information security will transition from an ad hoc, reactive field to a more systematic field. Risks will still abound, but industry resources attacking the problem will be spent more efficiently.

CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.

To subscribe, visit <> or send a blank message to To unsubscribe, visit <>. Back issues are available on <>.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of “Secrets and Lies” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He served on the board of the International Association for Cryptologic Research, EPIC, and VTW. He is a frequent writer and lecturer on computer security and cryptography.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 2000 companies world-wide.

