Crypto-Gram

June 15, 2004

by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
schneier@schneier.com
<http://www.schneier.com>
<http://www.counterpane.com>

A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.

Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com.

Crypto-Gram also has an RSS feed at <http://www.schneier.com/crypto-gram-rss.xml>.


In this issue:
      Breaking Iranian Codes
      Biometric IDs for Airport Employees
      Crypto-Gram Reprints
      Microsoft and SP2
      News
      Cell Phone Jamming and Terrorist Attacks
      Photographing Subways and Terrorist Attacks
      Counterpane News
      The Witty Worm
      Comments from Readers


Breaking Iranian Codes

Ahmed Chalabi is accused of informing the Iranians that the U.S. had broken Iran’s intelligence codes. What exactly did the U.S. break? How could the Iranians verify Chalabi’s claim, and what might they do about it?

This is an attempt to answer some of those questions.

Every country has secrets. In the U.S., the National Security Agency has the job of protecting our secrets while trying to learn the secrets of other countries. (Actually, the CIA has the job of learning other countries’ secrets in general, while the NSA has the job of eavesdropping on other countries’ electronic communications.)

To protect its secrets, Iranian intelligence—like the intelligence services of all countries—communicates in code. These aren’t pencil-and-paper codes, but software-based encryption machines. The Iranians probably didn’t build their own, but bought them from a company like the Swiss-owned Crypto AG. Some encryption machines protect telephone calls, others protect fax and Telex messages, and still others protect computer communications.

As ordinary citizens without serious security clearances, we don’t know which machines’ codes the NSA compromised, nor do we know how. It’s possible that the U.S. broke the mathematical encryption algorithms that the Iranians used, as the British and Poles did with the German codes during World War II. It’s also possible that the NSA installed a “back door” into the Iranian machines. This is basically a deliberately placed flaw in the encryption that allows someone who knows about it to read the messages.
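
To make the back-door idea concrete, here is a toy sketch in Python. It is my own illustration, not any real machine’s design: the per-message “random” header secretly encodes the session key XORed with a master secret, so the traffic looks normal to everyone except an eavesdropper who holds that master secret.

import hashlib
import os

# Master secret known only to the eavesdropper (illustrative value).
MASTER = hashlib.sha256(b"eavesdropper's master secret").digest()[:16]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(key, n):
    # Hash-counter keystream; a stand-in for the machine's real cipher.
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(plaintext):
    session_key = os.urandom(16)
    # The back door: the "random" header is really the session key
    # XORed with MASTER, so it still passes every randomness test.
    header = xor(session_key, MASTER)
    return header + xor(plaintext, keystream(session_key, len(plaintext)))

def backdoor_decrypt(ciphertext):
    # Anyone holding MASTER recovers the session key from the header.
    session_key = xor(ciphertext[:16], MASTER)
    body = ciphertext[16:]
    return xor(body, keystream(session_key, len(body)))

message = b"attack at dawn"
assert backdoor_decrypt(encrypt(message)) == message

Legitimate users see nothing wrong: the header is indistinguishable from random bytes unless you know MASTER.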

There are other possibilities: the NSA might have had someone inside Iranian intelligence who gave them the encryption settings required to read the messages. John Walker sold the Soviets this kind of information about U.S. naval codes for years during the 1980s. Or the Iranians could have had sloppy procedures that allowed the NSA to break the encryption.

Of course, the NSA has to intercept the coded messages in order to decrypt them, but they have a worldwide array of listening posts that can do just that. Most communications travel through the air—radio, microwave, etc.—and can be easily intercepted. Communications via buried cable are much harder to intercept, and require someone inside Iran to physically tap into the cable. But the point of using an encryption machine is to allow messages to be sent over insecure and interceptible channels, so it is very probable that the NSA had a steady stream of Iranian intelligence messages to read.

Whatever the methodology, this would be an enormous intelligence coup for the NSA. It was also a secret in itself. If the Iranians ever learned that the NSA was reading their messages, they would stop using the broken encryption machines, and the NSA’s source of Iranian secrets would dry up. The secret that the NSA could read the Iranian secrets was more important than any specific Iranian secrets that the NSA could read.

The result was that the U.S. would often learn secrets it couldn’t act upon, as acting would give the secret away. During World War II, the Allies went to great lengths to make sure the Germans never realized that their codes were broken. The Allies would learn about U-boat positions, but wouldn’t bomb a U-boat until they had spotted it by some other means…otherwise the Nazis might have gotten suspicious.

There’s a story about Winston Churchill and the bombing of Coventry: supposedly he knew the city would be bombed but could not warn its citizens. The story is apocryphal, but is a good indication of the extreme measures countries take to protect the secret that they can read an enemy’s secrets.

And there are many stories of slip-ups. In 1986, after the bombing of a Berlin disco, then-President Reagan said that he had irrefutable evidence that Qaddafi was behind the attack. Libyan intelligence realized that their diplomatic codes were broken, and changed them. The result was an enormous setback for U.S. intelligence, all because of a slip of the tongue.

Iranian intelligence supposedly tried to test Chalabi’s claim by sending a message about an Iranian weapons cache. If the U.S. acted on this information, then the Iranians would know that their codes were broken. The U.S. didn’t act, which shows they’re very smart about this. Maybe they knew the Iranians suspected, or maybe they were waiting to manufacture a plausible fictitious reason for knowing about the weapons cache.

So now the NSA’s secret is out. The Iranians have undoubtedly changed their encryption machines, and the NSA has lost its source of Iranian secrets. But little else is known. Who told Chalabi? Only a few people would know this important U.S. secret, and the snitch is certainly guilty of treason. Maybe Chalabi never knew, and never told the Iranians. Maybe the Iranians figured it out some other way, and they are pretending that Chalabi told them in order to protect some other intelligence source of theirs.

During the 1950s, the Americans dug under East Berlin in order to eavesdrop on a communications cable. They received all sorts of intelligence until the East Germans discovered the tunnel. However, the Soviets knew about the operation from the beginning, because they had a spy in the British intelligence organization. But they couldn’t stop the digging, because that would expose George Blake as their spy.

If the Iranians knew that the U.S. knew, why didn’t they pretend not to know and feed the U.S. false information? Or maybe they’ve been doing that for years, and the U.S. finally figured out that the Iranians knew. Maybe the U.S. knew that the Iranians knew, and is using the fact to discredit Chalabi.

The really weird twist to this story is that the U.S. has already been accused of doing that to Iran. In 1992, Iran arrested Hans Buehler, a Crypto AG employee, on suspicion that Crypto AG had installed back doors in the encryption machines it sold to Iran—at the request of the NSA. He proclaimed his innocence through repeated interrogations, and was finally released nine months later in 1993 when Crypto AG paid a million dollars for his freedom—then promptly fired him and billed him for the release money. At this point Buehler started asking inconvenient questions about the relationship between Crypto AG and the NSA.

So maybe Chalabi’s information is from 1992, and the Iranians changed their encryption machines a decade ago.

Or maybe the NSA never broke the Iranian intelligence code, and this is all one huge bluff.

In this shadowy world of cat-and-mouse, it’s hard to be sure of anything.

Hans Buehler’s story:
<http://www.aci.net/kalliste/speccoll.htm>


Biometric IDs for Airport Employees

I’ve written many words about ID cards and biometrics: about how they don’t work and don’t improve security. It’s nice to finally write something about a biometric ID that actually does work.

Some members of Congress are pushing the TSA—the guys who handle airport security—to develop biometric IDs for the one million transportation workers at airports, seaports, and rail yards.

This is the proper way to use a biometric ID. The strong suit of biometrics is authentication: is this person who he says he is? Issuing ID cards to people who require access to these sensitive areas is smart, and using biometrics to make those IDs harder to forge is smarter. There’s no broad surveillance of the population; there are no civil liberties or privacy concerns.
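
A minimal sketch of that distinction, with invented parameters: verification compares a live scan against the single template stored on the claimed worker’s own card (1:1), rather than searching a database of everyone (1:N).

import random

TEMPLATE_BITS = 256   # toy biometric template size
THRESHOLD = 32        # max Hamming distance to accept a match

def enroll():
    # Capture a worker's biometric as a toy 256-bit template.
    return random.getrandbits(TEMPLATE_BITS)

def noisy_scan(template, flips=10):
    # A fresh scan of the same person: the template plus sensor noise.
    noise = 0
    for _ in range(flips):
        noise |= 1 << random.randrange(TEMPLATE_BITS)
    return template ^ noise

def authenticate(card_template, live_scan):
    # 1:1 verification -- compare only against the template on the
    # claimed worker's own ID card, not a database of everyone.
    return bin(card_template ^ live_scan).count("1") <= THRESHOLD

alice = enroll()
assert authenticate(alice, noisy_scan(alice))   # genuine worker passes
assert not authenticate(alice, enroll())        # impostor is rejected

Because the comparison happens between the card and the reader, no central database of everyone’s biometrics is needed.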

And transportation employees are a weak link in airplane security. We’re spending billions on passenger screening programs like CAPPS-II, but none of these measures will do any good if terrorists can just go around the systems. Current TSA policy is that airport workers can access secure areas of airports with no screening beyond a rudimentary background check. That includes the thousands of people who work for the stores and restaurants in airport terminals as well as the army of workers who clean and maintain aircraft, load baggage, and provide food service. Closing this massive security hole is a good idea.

All of this has to be balanced against cost, however. Issuing one million IDs, and installing what will probably be tens of thousands of ID readers, isn’t going to be cheap. But it would certainly give us more security, dollar for dollar, than yet another passenger screening system.

Unfortunately, politicians tend to prefer security systems that affect broad swaths of the population. They like security that’s visible; it demonstrates that they’re serious about security and is more likely to get them votes. A security system for transportation workers, one that is largely hidden from view, is likely to garner less support than a more public system.

Let’s hope U.S. lawmakers do the right thing regardless.

<http://www.cnn.com/2004/TRAVEL/06/09/…>


Crypto-Gram Reprints

Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found at <http://www.schneier.com/crypto-gram.html>. Here is a selection of articles that appeared in this calendar month in other years.

The Risks Of Cyberterrorism:
<http://www.schneier.com/crypto-gram-0306.html#1>

Fixing Intelligence Failures:
<http://www.schneier.com/crypto-gram-0206.html#1>

Honeypots and the Honeynet Project:
<http://www.schneier.com/crypto-gram-0106.html#1>

Microsoft SOAP:
<http://www.schneier.com/crypto-gram-0006.html#SOAP>

The Data Encryption Standard (DES):
<http://www.schneier.com/crypto-gram-0006.html#DES>

The internationalization of cryptography policy:
<http://www.schneier.com/crypto-gram-9906.html#policy>
and products:
<http://www.schneier.com/crypto-gram-9906.html#products>

The new breeds of viruses, worms, and other malware:
<http://www.schneier.com/crypto-gram-9906.html#viruses>

Timing attacks, power analysis, and other “side-channel” attacks against cryptosystems:
<http://www.schneier.com/crypto-gram-9806.html#side>


Microsoft and SP2

The security of your computer and your network depends on two things: what you do to secure your computer and network, and what everyone else does to secure their computers and networks. It’s not enough for you to maintain a secure network. If everybody else doesn’t maintain their security, we’re all more vulnerable to attack. When there are lots of insecure computers connected to the Internet, worms spread faster and more extensively, distributed denial-of-service attacks are easier to launch, and spammers have more platforms from which to send e-mail. The more insecure the average computer on the Internet is, the more insecure your computer is.

It’s like malaria: everyone is safer when we all work together to drain the swamps and increase the level of hygiene in our community.

This is the backdrop against which to understand Microsoft’s Windows XP security upgrade: Service Pack 2. SP2 is a major security upgrade. It includes features like Windows Firewall, an enhanced personal firewall that is turned on by default, and a better automatic patching feature, along with a bunch of smaller security improvements. It makes Windows XP more secure.

In early May, press stories reported that Microsoft would make this upgrade available to all XP users, both licensed and unlicensed. To me, this was a very smart move on Microsoft’s part. Think about all the ways it benefits Microsoft. One, its licensed users are more secure. Two, its licensed users are happier. Three, worms that attack Microsoft products are less virulent, which means Microsoft doesn’t look as bad in the press. Microsoft wins, Microsoft’s customers win, the Internet wins. It’s the kind of marketing move that businessmen write best-selling books about.

Sadly, the reports were wrong. Soon after, Microsoft said the initial comments were mistaken, and that SP2 would not run on pirated copies of XP. Those copies would not be upgradeable, and would remain insecure. Only legal copies of the software could be secured.

This is the wrong decision, for all the same reasons that the opposite decision was the correct one.

Of course, Microsoft is within its rights to deny service to those who have pirated its products. It makes sense for them to make sure performance or feature upgrades do not run on pirated software. They want to deny people who haven’t paid for Microsoft products the benefit of them, and entice them to become licensed users. But security upgrades are different. Microsoft is harming its licensed users by denying security to its unlicensed users.

This decision, more than anything else Microsoft has said or done in the last few years, proves to me that security is not the first priority of the company. Here was a chance to do the right thing: to put security ahead of profits. Here was a chance to look good in the press, and improve security for all their users worldwide. Microsoft claims that improving security is the most important thing, but their actions prove otherwise.

SP2 is an important security upgrade to Windows XP, and I hope it is widely installed among licensed XP users. I also hope it is quickly pirated, so unlicensed XP users can also install it. In order for me to remain secure on the Internet, I need everyone to become more secure. And the more people who install SP2, the more we all benefit.

Original report:
<http://computertimes.asia1.com.sg/news/story/…>

Microsoft’s revised position:
<http://zdnet.com.com/2100-1105_2-5209896.html>
<http://www.theregister.co.uk/2004/05/11/…>

Details on SP2:
<http://www.mcpmag.com/columns/article.asp?…>

A similar idea:
<http://www.securityfocus.com/printable/columnists/243>

This essay originally appeared in Network World:
<http://www.nwfusion.com/columnists/2004/…>


News

Good story of social engineering used for real-world theft:
<http://lineman.net/node/view/270>

One person’s experience trying to secure Windows. One interesting point: after he does a clean install, he doesn’t have time to download all the security patches before his computer is infected by malware. Worth reading.
<http://www.techuser.net/index.php?id=47>

A good analysis of the risks of hacking electronic voting machines:
<http://www.cs.duke.edu/~justin/voting/PrezNader.html>

Avi Rubin has proposed a very interesting challenge for the security of electronic voting machines.
<http://avirubin.com/vote/ita.challenge.pdf>

And Barbara Simons has an excellent rebuttal to the League of Women Voters’ position on electronic voting machines:
<http://www.leagueissues.org/lwvqa.html>

It’s a story of a failed attempt to manufacture a Kerry sex scandal, but the interesting security angle is the concrete example of a politically motivated hacker, possibly a member of the press: “More alarmingly, my Hotmail account had been broken into, and I couldn’t access my e-mail. Random people in my in-box whom I hadn’t spoken to in months suddenly started getting calls from reporters. My father called to tell me someone had tried the same thing with his account, but that his security software had intercepted them and tracked them back to a rogue computer address in Washington, D.C.”
<http://www.newyorkmetro.com/nymag/features/…>

On the list of terrible ideas: music protected so that you need a valid fingerprint to play it.
<http://www.theregister.co.uk/2004/06/04/biometric_drm/>

Sky marshals are easy to spot on airplanes.
<http://www.sfgate.com/cgi-bin/article.cgi?file=/c/a/…>

How the identity problem makes computer security so primitive:
<http://comment.silicon.com/0,39024711,39120567,00.htm>

An article on passwords and password safety, including this neat bit: “For additional security, she then pulls out a card that has 50 scratch-off codes. Jubran uses the codes, one by one, each time she logs on or performs a transaction. Her bank, Nordea PLC, automatically sends a new card when she’s about to run out.”
<http://www.wired.com/news/infostructure/…>
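
The scratch-card scheme is easy to sketch: the bank issues a list of single-use codes and crosses each one off as it is redeemed. A minimal illustration, with an invented code format and replenishment rule:

import secrets

def issue_card(n=50):
    # 50 scratch-off codes, each 8 hex digits (format is an assumption).
    return [secrets.token_hex(4) for _ in range(n)]

class Account:
    def __init__(self):
        self.codes = issue_card()

    def redeem(self, code):
        if code in self.codes:
            self.codes.remove(code)    # single use: cross it off
            if len(self.codes) <= 5:   # running low: mail a new card
                self.codes += issue_card()
            return True
        return False

acct = Account()
first = acct.codes[0]
assert acct.redeem(first)       # each code works exactly once
assert not acct.redeem(first)   # a replayed code fails

The win over a static password is that anything a keylogger captures has already been used, and is worthless for replay.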

Figuring out where the illegal bioweapons laboratories are by analyzing their published academic papers:
<http://www.nature.com/nsu/040531/040531-1.html>

Fictional character from computer game almost causes national terrorist alert:
<http://www.usnews.com/usnews/issue/040517/whispers/…>
<http://games.slashdot.org/games/04/05/10/…>

Spammers use fake PGP-signed messages to get through spam filters:
<http://smh.com.au/articles/2004/06/01/…>
<http://www.math.org.il/PGP-JoeJob.txt>
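
A sketch of why the trick works, assuming a filter that gives a flat score bonus to anything that merely looks PGP-signed (the rule and the numbers are invented for illustration):

SPAM_WORDS = {"viagra", "mortgage", "winner"}

def naive_score(message):
    # Higher score = more spammy; 3 or more gets the message junked.
    score = sum(2 for word in SPAM_WORDS if word in message.lower())
    if "-----BEGIN PGP SIGNED MESSAGE-----" in message:
        score -= 10   # flawed rule: the header costs nothing to forge
    return score

spam = "You are a WINNER! Cheap viagra and mortgage refinancing."
forged = "-----BEGIN PGP SIGNED MESSAGE-----\n" + spam
print(naive_score(spam), naive_score(forged))   # 6 vs. -4

The robust rule is to actually verify the signature; and even a valid signature only authenticates the sender, it doesn’t make the mail wanted.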

Interesting article on the risks of browser hijack, specifically the risks of being framed for a crime:
<http://www.theregister.co.uk/2004/05/13/…>
<http://www.wired.com/news/infostructure/…>

Fascinating article about nuclear security. Robert McNamara, then the U.S. Secretary of Defense, added a security layer to the Minuteman missile launch procedure by protecting the missiles with an 8-digit “Permissive Action Link” code. But the Strategic Air Command, fearing that the retrieval and entry of these codes might be an impediment to speedy launching of the missiles, quietly decreed that the code should always be 00000000.
<http://www.cdi.org/blair/permissive-action-links.cfm>

Story of a logic bomb from the Cold War, one that caused a natural gas explosion in Siberia.
<http://www.thenation.com/outrage/index.mhtml?…>
<http://www.fcw.com/fcw/articles/2004/0426/…>

Comparison of Indian and Diebold electronic voting machines:
<http://techaos.blogspot.com/2004/05/…>

U.S. fake ID study “found in al Qaeda cave”:
<http://www.theregister.co.uk/2004/05/20/…>

Anecdote about Kinko’s internet terminals: “My sister happened to be at a function with the CEO of Kinko’s. He told her that after 9-11 (terrorists allegedly used Kinko’s as well as library terminals), they told the FBI that they could monitor all of Kinko’s terminals. Said they were proud of this.”

Historians are rebuilding the WWII codebreaking machine Colossus:
<http://www.codesandciphers.org.uk/lorenz/rebuild.htm>


Cell Phone Jamming and Terrorist Attacks

Here’s an idea that’s so amazingly stupid that I can’t even believe it’s being seriously discussed: the LA police are considering jamming all cell phones in the event of a terrorist attack.

The idea is that because cell phones were used to blow up train bombs in Spain, they should be jammed the next time a terrorist attack occurs.

Let’s think about this in terms of trade-offs. What are the odds that this will do any good whatsoever in thwarting a terrorist attack? Negligible. What are the odds that this will make response coordination harder, hamper rescue efforts, and generally increase panic after a terrorist attack? Pretty good.

Let’s not do the terrorists’ job for them. Let’s leave the infrastructure that can help us respond to a terrorist attack, whatever form it may take, in place.

<http://www.theinquirer.net/?article=15959>


Photographing Subways and Terrorist Attacks

Meanwhile, back in New York City, some transit officials are proposing banning photography in the subways “for security purposes.” Even worse, the New York Times reports that other stupid rule changes are in the works, such as banning walking between cars even when the train is stopped at a station.

This is ridiculous. It is security theater. It inconveniences photographers and train aficionados, and does nothing to prevent terrorism. Even worse, it reinforces the culture of fear that plays directly into the terrorists’ hands.

Doesn’t anyone else remember, back during the Cold War, when we used to laugh at the Soviets for barring photography of bridges, dams, trains, and other items of “strategic importance”? It made no sense as a security countermeasure then, and it makes no sense as one now.

<http://www.msnbc.msn.com/id/5030104/>
<http://www.usatoday.com/tech/news/techpolicy/…>
<http://www.straphangers.org/photoban/>

The MTA is accepting comments on its proposal to ban photographs, film, and video in the subway and bus system.
<http://www.mta.info/nyct/rules/proposed.htm>


Counterpane News

Conversation between Bruce Sterling and Schneier on technology and national security:
<http://www.randomhouse.com/delrey/catalog/…>

Another “Beyond Fear” review:
<http://www.securitymanagement.com/library/001598.html>

Counterpane wins “Red Herring 100” award:
<http://www.counterpane.com/pr-20040519.html>

Case study: Regence Group discusses Counterpane monitoring:
<http://nwc.securitypipeline.com/howto/…>

Another article about Counterpane and monitoring:
<http://www.processor.com/editorial/article.asp?…>

Counterpane announced Managed Security Services suite for small and mid-sized businesses:
<http://www.counterpane.com/pr-20040520.html>

Watch the video webinar with Gartner and Counterpane:
<http://www.itworld.com/itwebcast/counterpane_msm/>


The Witty Worm

If press coverage is any guide, then the Witty worm wasn’t all that successful. Blaster, SQL Slammer, Nimda, even Sasser made bigger headlines. Witty only infected about 12,000 machines, almost none of them owned by home users. It didn’t seem like a big deal.

But Witty was a big deal. It represented some scary malware firsts, and is likely a harbinger of worms to come. IT professionals need to understand Witty and what it did.

Witty was the first worm to target a particular set of security products—in this case ISS’s BlackICE and RealSecure. It only infected and destroyed computers that had particular versions of this software running.

Witty was wildly successful. Twelve thousand machines was the entire vulnerable and exposed population, and Witty infected them all—worldwide—in 45 minutes. It’s the first worm to saturate a small target population quickly; previous worms that targeted small populations, such as Scalper and Slapper, spread glacially slowly.

Witty was speedily written. Security company eEye discovered the vulnerability in ISS’s BlackICE/RealSecure products on March 8, and ISS released a patched version on March 9. eEye published a high-level description of the vulnerability on March 18. On the evening of March 19, about 36 hours after eEye’s public disclosure, the Witty worm was released into the wild.

Witty was very well written. It was less than 700 bytes long. It used a well-designed random-number generator to pick its targets, avoiding many of the problems that plagued previous worms. It spread by sending itself to random IP addresses with random destination ports, a trick that made it easier to sneak through firewalls. It was—and this is a very big deal—bug-free. This strongly implies that the worm was tested before release.

Witty was released cleverly, through a bot network of about 100 infected machines. This technique has been talked about before, but Witty marks the first time we’ve seen a worm do it in the wild. This, along with the clever way it spread, helped Witty infect every available host in 45 minutes.
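
The 45-minute figure is consistent with simple epidemic arithmetic. Here is a back-of-the-envelope model of random-address scanning (my own illustration, not code from the worm; the per-host scan rate is an assumed round number, since Witty’s real rate varied with each host’s bandwidth):

ADDRESS_SPACE = 2 ** 32   # IPv4 addresses, probed uniformly at random
VULNERABLE = 12_000       # exposed BlackICE/RealSecure population
SEED = 100                # pre-infected bot network
SCANS_PER_SEC = 1_200     # assumed average probe rate per infected host

infected = float(SEED)
seconds = 0
while infected < 0.99 * VULNERABLE:
    # Chance that one probe hits a still-uninfected vulnerable host:
    p_hit = (VULNERABLE - infected) / ADDRESS_SPACE
    infected += infected * SCANS_PER_SEC * p_hit
    seconds += 1

print(f"99% of the vulnerable population infected in ~{seconds / 60:.0f} minutes")

With these numbers the model saturates within a few minutes of the observed 45. The bot-network seed matters: starting from a single host would have added more than 20 minutes of slow early growth.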

Witty was exceptionally nasty. It was the first widespread worm that destroyed the hosts it infected. And it did so cleverly. Its malicious payload, erasing data on random accessible drives in random 64K chunks, caused immediate damage without significantly slowing the worm’s spread.

What do we make of all this? Clearly the worm writer is an intelligent and experienced programmer; Witty is the first worm to combine this level of skill with this level of malice. Either he had advance inside knowledge of the vulnerability—it is unlikely that he reverse-engineered it from the ISS patch—or he worked very quickly. Maybe he had the worm written, and just dropped the vulnerability in at the last minute. In any case, he seems to have deliberately targeted ISS. If his goal had been maximum spread, he could have waited for a more general vulnerability—or series of vulnerabilities—to use. The one he chose was optimized to inflict maximum damage on a specific set of targets. Was this an attack against ISS, or against a particular user of ISS products? We don’t know.

Witty represents a new chapter in malware. If it had used common Windows vulnerabilities to spread, it would have been the most damaging worm we have seen yet. Worm writers learn from each other, and we have to assume that other worm writers have seen the disassembled code and will reuse it in future worms. Even worse, Witty’s author is still unknown and at large—and we have to assume that he’s going to do this kind of thing again.

<http://www.icsi.berkeley.edu/~nweaver/login_witty.txt>
<http://www.securityfocus.com/printable/columnists/232>

This essay originally appeared in Computerworld:
<http://www.computerworld.com/securitytopics/…>


Comments from Readers

From: “Norman Bowley” <nbowley e-counsel.ca>
Subject: RE: CRYPTO-GRAM, May 15, 2004

A “Lacey” situation was considered a dozen years ago by the Supreme Court of Canada in the James Henry Wise case. While it was a squeaker (4 to 3) in allowing the evidence obtained through the tracking device, even the majority said it was right at the limit. The dissent of La Forest, however, is eerie and prophetic, “The long-term consequences of admitting evidence obtained in such circumstances on the integrity of our justice system outweigh the harm done by this accused being acquitted. This is not a case where the police are monitoring the roads for the purpose of regulating or observing what goes on there. It is a case of tracking the movements of an individual. There is an important difference between courting the risk that our activities may be observed by other persons and the risk that agents of the state, in the absence of prior authorization, will track our every move… The grave threat to individual privacy posed by surreptitious electronic tracking of one’s movement is such as to require prior judicial authorization. The issuance of a search warrant will ordinarily call for an objective showing of reasonable and probable cause, and this should generally be required of those seeking to employ electronic tracking devices in the pursuit of an individual.”

The decision can be found at <http://www.lexum.umontreal.ca/csc-scc/en/pub/1992/…>

From: “Brian Gladman” <brg gladman.plus.com>
Subject: WinZip Encryption

The view that the moral to be learnt from the reported failures in WinZip’s AES-based encryption is that ‘cryptography is hard’ could be taken to imply that these failures resulted from mistakes that were made in the security design used. In respect of one relatively minor issue I believe that this may be true.

But by far the most significant weaknesses that have been discovered were known about during the security design process and were left in place because of the need for backward compatibility. This suggests to me a different moral (again not new): adding security to an existing design as an afterthought is unlikely to be successful.

From: odlyzko dtc.umn.edu (Andrew Odlyzko)
Subject: “only ticketed passengers are allowed through security”

Two of the potential airport security developments you advocate are somewhat inconsistent. Having “undercover security officers … roaming [airports],” which you approve of, is most effective if “only ticketed passengers are allowed through security,” which you suggest should be phased out. The restriction to ticketed passengers serves not only to shorten the lines at security checkpoints, but also reduces the crowds inside, and makes the jobs of the undercover security officers easier.

From: Christopher Bardin <christopher_b85281 yahoo.com>
Subject: How to turn a disposable camera into a stun gun

I’ve been repairing cameras for over 15 years, so I’m probably better qualified to comment on the article than your average reader. While I don’t repair disposable cameras—nobody does—I have taken them apart to see what is in them. And there are several glaring mistakes in the web page to which your article linked.

First, I have never seen a camera with a built-in flash that had a storage capacitor rated at more than 350 volts. Anyone who has unexpectedly completed a circuit of 350 volts through a body part might argue with me, but I find the difference between 350 and 600 volts to be quite noticeable—though 350 volts certainly cannot be ignored.

Second, having established the considerable hazard of 350 volts, it is important to know that simply removing the battery from the camera won’t discharge the flash storage capacitor. Cameras with built-in flashes do not have discharge resistors across the flash storage capacitor because it wouldn’t make sense. The discharge resistor would have to be a high value (at least 10 megohms) to maximize the life of the battery, and also physically large because of the necessary voltage rating. Not a cheap component. Since space is at a premium and cost is always a concern, the decision is always to leave it out.
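
Rough numbers support this. Assuming a typical 120 microfarad photoflash capacitor (the capacitance is my assumption; the 350-volt and 10-megohm figures are from the letter above):

V = 350.0    # volts on the flash storage capacitor
C = 120e-6   # farads (assumed typical photoflash value)
R = 10e6     # ohms, the hypothetical bleed resistor

tau = R * C                # RC time constant of the bleed path
bleed_current = V / R      # standing drain while the cap is charged
bleed_power = V * V / R    # heat the resistor must dissipate

print(f"time constant: {tau:.0f} s (~{tau / 60:.0f} min per 63% voltage drop)")
print(f"standing drain: {bleed_current * 1e6:.0f} microamps")
print(f"dissipation: {bleed_power * 1e3:.1f} mW")

Even a resistor that large would leave the capacitor hazardous for many minutes after the battery is removed, while continuously loading the flash circuit the rest of the time, which is exactly the trade-off described above.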

From: Dan DeMaggio <dmag umich.edu>
Subject: Step 1: Admit you have a problem

I love your Crypto-Gram and your thoughtful analysis. But I must take you to task for linking to Tim Mullen’s Security Focus article about Walter Mossberg (and implying that you agree with it).

Tim says “The solution is for the end user to start caring.” But that will never happen. Only computer enthusiasts care about computers. Only car enthusiasts care about cars. Only llama enthusiasts care about llamas. The vast majority of people in the world will never care about any of them.

Let me tell you about three products I’ve bought:

– I bought a car. The locks are not much of a deterrent, but they have kept the car perfectly secure (even in Detroit) for more than 10 years now. I take it in for a 10-minute oil change every three months (like it says to do in the owner’s manual). When it breaks down (twice in 10 years), I make a phone call and have it fixed. To me, the car is merely a means to an end. I do not care about my car.

– I bought a house. I expect the locks will keep my house reasonably secure. The complex equipment in the basement may break every few years, but a simple repairman visit will fix the problem. I care about my house more than my car, but not by much. I would not have bought my house if I expected it to be a high-maintenance source of problems.

– I got my wife a computer with Windows on it. Within minutes of plugging it in, it started getting spam pop-ups. If I mistyped a domain name, I would get a site that did so many pop-ups and re-spawns that I had to reboot the computer. Keeping up with patches would take hours per month. Even though I’m a techie, I refuse to babysit that computer. If it becomes infected, I guess I’ll just wipe and re-install.

The first two examples are “whole products”. (See Geoffrey A. Moore’s “Crossing The Chasm”.) Almost everything I was going to need came bundled. Those things that weren’t bundled were things that I knew about, things that were cheap (relative to the product price), and things that do not require much time or thought.

The third product is not a whole product. I refuse to hunt down all the services I need to turn off (but I did get a firewall). I refuse to waste my time downloading multi-megabyte patches and waiting for the computer to reboot multiple times. I refuse to pay $100 to protect a $500 computer, especially because no AV software protects from all new exploits. (I know because I regularly get new e-mail viruses marked “certified virus free” by AV vendors.)

I refuse to do these things because I know they don’t have to be done (and the public will never do them anyway). Linux doesn’t require any of that. I know Linux isn’t a whole product either (yet), but it’s easier to add documentation and support to Linux than security to Windows. If I were really paranoid about security, I’d (easily) migrate to OpenBSD. They’ve had one remote hole in the default install in the last eight years, unlike Microsoft’s seven exploits in one day.

Walter says “It’s time somebody [shouldered the whole burden of protecting PCs].” People want computers to be as low-maintenance as a car. Microsoft created this problem because, as a monopoly, it’s not profitable to fix bugs (fixes won’t generate more sales) or to make things secure (ditto). Yes, Tim, it is “wishful thinking” to expect the problem to be solved for free. But it is even more wishful thinking to expect the public to care about computers.


CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.

To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to crypto-gram-subscribe@chaparraltree.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.

Comments on CRYPTO-GRAM should be sent to schneier@schneier.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.

Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.

CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.

Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.
