October 15, 2001
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org.
Copyright (c) 2001 by Counterpane Internet Security, Inc.
In this issue:
- Cyberterrorism and Cyberhooliganism
- War on Terrorism
- Crypto-Gram Reprints
- SANS Top 20
- Counterpane Internet Security News
- Dangers of Port 80
- Comments from Readers
I've seen a lot of news articles about "cyberterrorism." Some have described the Nimda worm as a form of cyberterrorism. These articles admonish readers to keep their virus detectors and firewalls up-to-date to combat cyberterrorism, because we're going to see more Web defacements and viruses as parts of cyberterrorist attacks.
Let's get real here. To be sure, you should keep all your security products up-to-date. But what it's protecting against, and what we're seeing, is cyberhooliganism: the cyber equivalent of phoning in phony bomb threats and throwing dead rats at people in turbans. It's horrible, it's despicable, it causes genuine damage, and it ought to be stopped.
But don't dignify it with a fancy name, and don't minimize the real terrorist dangers. Real cyberterrorism will come when somebody pulls the plug on the Internet or the phone system, irrevocably destroys computers and data, or disrupts critical systems resulting in loss of life.
You protect against this kind of stuff by having good (and distant) backups, by making sure you won't go out of business if the Internet goes away for a week, and by increasing the redundancy and reliability of your Internet connection and of the Internet as a whole.
And while you're checking your backups (Is all your data really on those tapes? Where are they, anyway? How would you restore them if your computers had no operating system, or if you had no computers anymore?), figuring out how redundant your network connections really are, updating your virus checker and your firewall, how about checking out your disaster planning in general?
When was the last time you took a fire evacuation drill seriously?
This essay was written with Elizabeth Zwicky.
I am writing this in the midst of much speculation about anthrax-laden letters being mailed in the United States. At this point, we don't know if this is 1) a bizarre unrelated event, 2) a terrorist attack gone bad, or 3) field tests for a terrorist attack to come. I am hoping for the first, as far-fetched as it sounds, and I am worried about the third.
A few minutes of speculation should be enough to convince anyone that we cannot make the United States, let alone the world, safe from terrorism. It doesn't matter what Draconian counterterrorism legislation we enact, how many civil liberties we sacrifice, or where we post armed guards. We cannot stop terrorism within a country. We cannot block it at its borders. We have always been at risk, and we always will be.
The only way to deal with terrorism is to eradicate it at the source: to root out and destroy terrorists and terrorist organizations. In this sense, President Bush's campaign against terrorism is the correct course of action. But to be successful, a counterterrorism campaign needs two components: law enforcement and politics. Merely arresting and convicting existing terrorists, without addressing the geopolitical climate that created those terrorists in the first place, will not solve the problem. This is the lesson that the British learned in Northern Ireland, and the lesson that the Israelis have not yet learned in their own country. To use Bush's "War on Terrorism" metaphor--as flawed as it is--this corresponds to winning the war and winning the peace.
We need to do both.
Much has been written about the various U.S. antiterrorism bills, both in terms of their effectiveness and their effects on civil liberties. The ACLU has an excellent comparison chart of the different bills before Congress:
Commentary from CDT:
Commentary in the guise of parody:
Additional commentary on this situation and its security implications:
A few days before the World Trade Center Attacks, Sen. Hollings' office released a draft bill that represents the next salvo in the digital copyright wars. Called the SSSCA (Security Systems Standards and Certification Act), it makes it a crime to build or sell any kind of computer equipment that "does not include and utilize certified security technologies" approved by the U.S. government.
Before going into how ridiculous this requirement really is, let's talk about the situation that is leading the entertainment industry to these desperate measures.
Digital files can be copied. Nothing anyone can say or do can change that. If you have a bucket of bits, you can easily create an identical bucket of bits and give it to me. You still have the bits, and now I have the bits too. I have explained this in detail previously.
Software copy protection does not work. It doesn't work to prevent software piracy. It does not work to prevent copying of digital music, videos, etc. I have also explained this previously.
Copy prevention is easier, but still not foolproof, if you can extend the prevention mechanism into the hardware. If there is a software decoder that decrypts a digital movie when a user pays for it, he can always write a tool to extract the digital video stream after it has been decrypted. But if the decryption happens in the speakers and monitor, this is a lot harder. This general rule explains why it is easier to hack a software video player than a DVD machine. It's always possible to capture the content from the output device -- re-record the audio from the speakers, for example -- but it won't be a perfect digital copy.
The SSSCA attempts to push copy prevention to the output devices. It makes it illegal to sell computers without industry-approved copy prevention. It actually makes free and open operating systems (like Linux) illegal if they refuse to implement copy protection. It limits fair use, and basically puts the computer industry under the control of the entertainment industry.
I have long argued that the entertainment industry doesn't want people to have computers. Computers give users too much capability, too much flexibility, too much freedom. The entertainment industry wants users to sit back and consume things. They are trying to turn a computer into an Internet Entertainment Platform, along the lines of a television or VCR. This bill is a large step in that direction. The entertainment industry will use this bill to further erode fair use, free expression, and security research.
Those who think I am being alarmist only need to look at the effects of the DMCA. The entertainment industry (and the software companies that supply it) has pushed that law as far as it can. It used the law to threaten a computer-science professor in an effort to get a piece of research squelched. It has used the law to arrest a foreign programmer visiting the United States. It has used the law to prevent publication of a magazine article. It has used the law to bully the computer industry and spread fear, uncertainty, and doubt among researchers and companies. If you don't think they'll use this new law to change the way the computer industry operates, you're not paying attention to history.
One of the side-effects of September 11th is that Congress isn't worrying too much about anything else. The SSSCA seems to have been shelved for now. It's more important to be vigilant, though, as some might use the nation's distraction to sneak the bill through the legislature.
Future History from Discover magazine:
News about SSSCA:
I considered not writing about this at all, because there's little real "news" here. Nimda is another self-propagating worm, along the lines of Code Red. It only infects computers running Windows and one of Outlook, Outlook Express, Internet Explorer, or IIS. Nimda's spreading mechanism exploits several vulnerabilities; it combines the infection tricks of Code Red and SirCam, and incorporates some clever new ideas. And it's the first worm that combines server and workstation attacks, making it very difficult to eradicate.
Cleaning it up was a big deal for many businesses, made worse by the general stress levels following the September 11th terrorist attacks. (Nimda was released exactly one week, almost to the minute, after the World Trade Center attacks.)
What I found interesting is that the most-often-suggested recovery mechanisms didn't always work. Reinstalling the operating system doesn't always delete all files, and sometimes the worm can remain on the disk and come back to life. To be really sure you've deleted Microsoft worms and viruses, you've got to reformat the disk before restoring from backup. This problem shows up whenever a Windows box gets infected with anything, but is particularly acute with Nimda because it touches so many parts of the operating system.
Other than those tidbits, I didn't see much new in Nimda. If there's any lesson here, it's that the people who write these things learn from their predecessors. On the plus side, the security community learned too, and responded faster to the threat. Even so, expect the next self-propagating worms to be even nastier.
Counterpane's security alert, with detailed clean-up instructions (three different options) and defenses:
Steganography: Truths and Fictions:
Memo to the Amateur Cipher Designer:
So, You Want to be a Cryptographer:
Key Length and Security:
NSA on Security:
Fixing every computer vulnerability is impossible. Even installing every available security patch is unreasonable to expect. Realizing this, last year SANS issued their "Top 10" list of security vulnerabilities. If you can't fix everything, they suggested, at least fix these. Even if you plan on fixing everything, fix these first.
This is great stuff, and I am pleased that SANS recently updated their list. It's now a "Top 20," divided among general vulnerabilities, Windows vulnerabilities, and UNIX vulnerabilities. I urge system administrators to use the list to prioritize their security activities.
I know this isn't easy. Changing system configurations and validating software patches, especially in today's complex network and applications environments, is a risky and time-consuming process. This is one of the reasons why so many people who "know better" don't always update their system. But even if you don't install all the patches you should, remember that real-time monitoring immediately improves your security substantially. If Counterpane is monitoring you, we can notify you when attackers look for these problems.
SANS spells this out in vulnerability G6: "Non-existent or incomplete logging." According to the SANS document: "New vulnerabilities are discovered every week, and there are very few ways to defend yourself against an attacker using a new vulnerability.... You cannot detect an attack if you do not know what is occurring on your network. Logs provide the details of what is occurring, what systems are being attacked, and what systems have been compromised."
That's the point of Counterpane. The SANS document says: "Without logs, you have little chance of discovering what the attackers did." Take the next step: if you monitor those logs in real time, 24 hours a day and seven days a week, you'll find out what the hacker IS DOING. This is what Counterpane does, and why so many companies use us as an integral part of their security.
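The kind of signature matching a log monitor performs can be sketched in a few lines. This is an illustrative toy, not Counterpane's actual system; the log format is simplified, though "default.ida", "cmd.exe", and "root.exe" really are the well-known probe strings left by Code Red and Nimda.

```python
# Sketch: flag web-server log lines matching known worm probe signatures.
# The signature list and log format are illustrative, not any product's rules.

WORM_SIGNATURES = ("default.ida", "cmd.exe", "root.exe")

def flag_suspicious(log_lines):
    """Return (line_number, line) pairs whose request matches a known worm probe."""
    return [(i, line)
            for i, line in enumerate(log_lines, start=1)
            if any(sig in line for sig in WORM_SIGNATURES)]
```

A real monitoring operation does this continuously, across many devices' logs at once, and hands the hits to a human analyst.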
If you're already a Counterpane monitoring customer, you can immediately protect yourself from some of the SANS Top 20 vulnerabilities by increasing the number of devices we monitor. In addition to the firewalls, routers, and intrusion detection systems that you already monitor through Counterpane, consider adding authentication systems (RADIUS, TACACS, or SecurID servers), Windows domain controllers, enterprise backup servers, and network monitoring servers to your monitoring infrastructure, by pointing their log data to the Counterpane Sentry. The more we monitor, the better job we do at catching intruders. And there's no additional cost.
If you've deployed intrusion detection systems and antivirus code, be sure that your detection signatures are up-to-date and tuned to your environment. If you've written custom signatures for your IDS, informing Counterpane about them will enable us to monitor them more effectively.
If you're not a Counterpane customer, you're on your own. Do the best you can.
Companies fear cyberterrorism:
Another semantic attack. The Yahoo News Web site was hacked, and a story changed.
Reasonably clever computer security program:
Home DSL users with bad security are getting booted off their ISPs:
Yet another CD-protection scheme doomed to failure:
Remember that Enigma machine stolen from Bletchley Park last year? They finally arrested someone, but there's still no word about the missing rotors.
New Zealand is considering DMCA-like legislation:
In the wake of the terrorist attacks, the NSA has closed its cryptography museum:
There's a survey that indicates that over 150,000 Microsoft IIS Web sites have been taken down, most likely as a result of Code Red or Nimda.
It's no wonder Gartner has advised people not to use Microsoft IIS, until it is completely rewritten, because of security problems:
The best quote is the following understatement: "...securely using Internet-exposed IIS Web servers has a high cost of ownership."
Probably in reaction to this, Microsoft says it has a new initiative to improve security. Most of this sounds like the typical PR nonsense you'd expect from Microsoft: lots of talk and no action. But if they actually ship products with secure default installations, this is a big win.
Another opinion piece suggests that Microsoft Outlook should be banned.
Zero Knowledge is discontinuing their anonymity service.
Did anybody actually think Microsoft's Passport is secure?
Even after Microsoft fixes these specific problems, does anybody actually think it will be secure?
Macro-virus protection in the Microsoft Office product line (two parts):
A year-old article with related information: Understanding Macro Viruses
Some additional details have emerged in the Scarfo case.
For those who are not following, Scarfo is a suspected Mafia member who was using encrypted e-mail to protect his conversations. The FBI bugged his computer to recover his private key and read his encrypted e-mail. Citing all sorts of weird security reasons, the FBI tried to avoid presenting details about how their bugging worked during the court proceedings. According to this new document, it seems likely that the FBI's keystroke logger is a piece of software (and not hardware). And it looks like they thought about the Title III wiretap rules, and built this keystroke logger to not run afoul of them.
The U.S. has a new Office of Cyberspace Security, announced on Monday and reporting to the head of the newly founded Office of Homeland Security. Richard Clarke, who has served as counterterrorism chief at the White House for more than a decade, heads the new office. But since the Office of Homeland Security doesn't have any real power to do anything, I wonder how effective the Office of Cyberspace Security will really be. Pity, as the U.S. could use some good cyberspace security.
Schneier is speaking at the following conferences:
Open Group Conference in Amsterdam on 10/22:
Red Herring NDA 2001 in Dana Point, CA on 10/29:
E-CFO Conference in San Francisco, CA on 10/30:
Red Herring's Portfolio Fairfax in Tyson's Corner, VA on 11/5:
ITEC in Kansas City, MO on 11/7:
Counterpane is hosting Schneier talks in the following cities: Houston, Dallas, Minneapolis, and St. Louis. If you are interested in attending, please contact Patti Spelman at <email@example.com>.
Gartner has written a report on Counterpane. Normally these reports are confidential, and are only available by paid subscription. But this one is published on the ZDNet Web site.
Another analyst. "Forrester expects: A robust security business. Every organization will now start to take security more seriously. Companies like IBM, RSA Security, and Counterpane Internet Security will have more business than they can handle."
Interview with Bruce Schneier in PC World:
Article about Bruce Schneier's talk in London:
Carolyn Turbyfill, Counterpane's director of software, was interviewed by ecomSecurity.com:
In the months that the new self-propagating worms -- Code Red, Nimda, etc. -- have traveled the Net, they've wreaked havoc among users of Microsoft's Internet Information Server. Aside from the damage these worms bring, the Code Red genus highlights a much more pernicious problem: the vulnerability of embedded devices with IP addresses, particularly those with built-in Web servers.
The Code Red-style worms propagate by going through self-generated lists of IP addresses, and contacting each address's port 80 (the standard HTTP port). If a server answers, Code Red sends an HTTP request that forces a buffer overflow on unpatched IIS servers, compromising the entire computer. Nimda tries multiple well-known avenues of attack, with similar results.
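The scanning half of that propagation loop is alarmingly simple. As a sketch (deliberately stopping at the connect -- no request is sent -- and with the worms' biased address generators simplified to a uniform one):

```python
import random
import socket

def random_target():
    """Pick a pseudo-random IPv4 address. Code Red-style worms use a
    (biased) generator along these lines to choose hosts to probe."""
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def answers_on_port_80(host, timeout=0.5):
    """Return True if anything accepts a TCP connection on port 80.
    A worm would follow a successful connect with its attack request;
    this benign sketch stops at the connect."""
    try:
        with socket.create_connection((host, 80), timeout=timeout):
            return True
    except OSError:
        return False
```

Anything that answers -- an IIS server, a DSL router's management page, a print server -- gets the worm's next packet, which is why devices that were never the target still go down.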
Aside from successfully infecting vulnerable servers, these worms inadvertently affect other devices that listen on port 80. Cisco has admitted that some of its DSL routers are susceptible to denial-of-service attacks; when affected routers' embedded Web servers are contacted by one of these worms, the router goes down. HP print servers and 3Com LANmodems seem to be similarly affected; other network infrastructure hardware likely suffered, too.
HTTP has become the computer lingua franca of the Internet. Since Web browsers are effectively ubiquitous, many hardware and software companies can't resist making their products' functions visible -- and often controllable -- from any Web browser. Indeed, trends indicate that all future devices on the Net will be listening on port 80. This increasing reliance on network-accessible gadgetry will return to haunt us. Other worms will cause more damage; Code Red is only a harbinger.
Sony cryptically announced in April that it would endow all future products with IP addresses -- a technically implausible claim, but nonetheless a clear statement of intent. Car vendors are experimenting with cars that can be wirelessly interrogated and controlled from a Web browser. The possibilities for nearly untraceable shenanigans perpetrated by the script kiddie next door, after he works out your car's password, are endless. This problem won't be solved by encrypting the Web traffic between car and browser, either.
The rise of HTTP as a communications common denominator comes from ease of use, for programmer and customer alike. All customers need is a Web browser and the device's IP address, and they're set. Creating a lightweight server is trivial for developers, especially since both in- and outbound HTTP data is text. Even more attractive, HTTP traffic is usually allowed through firewalls and other network traffic barriers. Numerous non-HTTP protocols are tunneled via HTTP in order to ease their passage.
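Just how trivial a lightweight server is to write is worth seeing. The following sketch -- a hypothetical "embedded device" status page, not any vendor's firmware -- accepts one connection, answers one request, and exits; real embedded servers are often barely more elaborate:

```python
import socket

def serve_once(port=8080):
    """A bare-bones embedded-style web server: accept one connection,
    answer one request with a canned status page, and exit.
    Illustrative sketch only."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("127.0.0.1", port))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            conn.recv(1024)  # read (and blithely ignore) the request
            body = "<html><body>Device status: OK</body></html>"
            response = ("HTTP/1.0 200 OK\r\n"
                        "Content-Type: text/html\r\n"
                        f"Content-Length: {len(body)}\r\n"
                        "\r\n" + body)
            conn.sendall(response.encode("ascii"))
```

Note what's missing: no validation of the request at all. That's exactly the shortcut that turns convenience into vulnerability.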
HTTP isn't the miscreant. The problem is created by the companies that embed network servers into products without making them sufficiently robust. Bulletproof design and implementation of software -- especially network software -- in embedded devices is no longer an engineering luxury. Customer expectation of reliability for turnkey gadgets is higher than that for PC-based systems. The successful infiltration of the Code Red worms well after the alarm was sounded is proof that getting it right the first time has become imperative.
Given the ease of implementation and small code size of a lightweight Web server, it's particularly disturbing that such software isn't engineered with greater care. Common errors that cause vulnerabilities -- buffer overflows, poor handling of unexpected types and amounts of data -- are well understood. Unfortunately, features still seem to be valued more highly among manufacturers than reliability. Until that changes, Code Red and its ilk will continue to be serious problems.
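The difference between careless and careful input handling can be made concrete. The request format and size limit below are illustrative; in C, the fragile version's unchecked assumptions are precisely where a buffer overflow would live:

```python
def parse_request_fragile(raw):
    """Naive parser: assumes well-formed 'METHOD /path HTTP/x.y' input.
    Malformed input raises an unhandled exception -- the Python analogue
    of the unchecked assumption behind a C buffer overflow."""
    method, path, version = raw.split(" ")
    return method, path

MAX_REQUEST = 4096  # illustrative bound

def parse_request_defensive(raw):
    """Defensive parser: bound the input size and validate its shape
    before trusting any field; reject anything unexpected."""
    if len(raw) > MAX_REQUEST:
        return None
    parts = raw.split(" ")
    if len(parts) != 3 or not parts[2].startswith("HTTP/"):
        return None
    return parts[0], parts[1]
```

The defensive version costs a handful of lines and a moment's thought -- which is the point: these errors persist because reliability is undervalued, not because it is hard.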
Like sheep, companies and customers have been led along the path of least resistance by the duplicitous guide called convenience. HTTP is easy: easy to implement, easy to use, and easy to co-opt. With a little diligence and forethought, it is also easy to secure, as are other means of remote network access. HTTP wasn't originally designed to be all things to all applications, but its simplicity has made it an understandable favorite. But with this simplicity also comes the responsibility on the part of its implementers to make sure it's not abused.
This article was written with Stephan Somogyi, and appeared in a slightly different form in Inside Risks 135, Communications of the ACM, Vol. 44, No. 10.
And on ZDNet:
Code Red affected Cisco routers:
Sony to assign IP addresses to all products:
From: Edward W. Felten <felten@cs.princeton.edu>
Subject: Accessibility of Airport Departure Gates
I generally agree with your observations about the apparent ineffectiveness of many of the new airport security measures. But at least one of the new rules -- the one allowing only ticketed passengers into the gate area -- does have a plausible purpose. The purpose is simply to reduce the number of people who pass through the security checkpoint, thereby allowing the inspectors to spend more time on each person passing through.
Of course, this rule could have the opposite effect, if the cost of checking whether each person has a ticket exceeds the savings due to the reduced number of people being inspected. Which way the balance comes out depends on what fraction of the people who want to pass through the checkpoint have tickets.
The cost of checking tickets could be reduced by using the honor system, simply asking nonticketed people to please stay outside the checkpoint; or by checking at random and imposing a penalty on people who try to enter without tickets. The goal is not to keep any particular person out, but just to reduce the number of law-abiding people who pass through.
The United terminal at Newark has had this rule for several years. The system was fairly porous -- ticket-counter agents would give an unticketed person a "gate pass" if they were willing to wait in line and could give a plausible reason why they needed to go to the gate area. The crowds of people waiting outside the checkpoint showed that there was indeed a reduction in the number of people passing through the checkpoint.
There is another, more iffy, rationale for the ticketed-passengers-only rule. This one says that the goal is to reduce the density of people in the gate area. This makes the gate area less attractive as a target, and increases the chance that unusual events in the gate area (e.g. unattended bags) will be noticed. The drawback is that the rule increases the density of people elsewhere in the airport. It's pretty hard to tell whether this is a net gain or a net loss.
From: John Sullivan <john.sullivan@thermoteknix.co.uk>
Subject: Re: New Microsoft Root Certificate Program
You quoted Microsoft:
> New Microsoft Root Certificate Program
> [...] Any new roots accepted by Microsoft are available to Windows XP
> clients through Windows Update. [...]
> When a user [encounters an unknown root cert] certificate chain
> verification software checks the appropriate Windows Update location and
> downloads the necessary root certificate.
The point you raise is valid and important -- the first two points that struck me, however, are: i) the new roots are made available over the Web rather than on CD-ROM, and ii) the new roots are installed at the same time as they are used, rather than being pre-installed potentially quite some time before being used.
I installed most of my software from physical media, with a set of root certs pre-installed. For updates, service packs, etc., even if these are downloaded over the Web they are signed by a chain which goes back to something on that pre-installed list. It's far from foolproof and I have no knowledge of what goes on in this huge piece of closed software, but overall I have a certain degree of trust in it.
Now, it is still possible to forge CD-ROMs. But I'm not *too* worried about that compared to the risk of hijacking a Web site. Someone could have already done so at the precise point I downloaded the service packs. The difference is that I currently have a static signed archive which I could in theory compare against other people's downloads to make sure it's not changing over time. Windows Update makes this less likely.
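The cross-checking the letter describes -- comparing a static downloaded archive against other people's copies -- amounts to a hash comparison. A minimal sketch, using SHA-256 purely as an illustrative choice of hash:

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Hash a downloaded archive so independently fetched copies can be
    compared out of band. If two people's copies of the 'same' service
    pack hash differently, at least one download was tampered with."""
    return hashlib.sha256(data).hexdigest()
```

Equal fingerprints mean equal bytes (up to the hash's collision resistance) -- a check that on-demand, install-at-use distribution gives you no stable artifact to perform.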
On the second point, if someone were to want to distribute some code, say, that needed to be signed by a root cert they had control of, if certs are expected to be pre-installed then they have to plan cert distribution well in advance of code distribution, and even then they can only hope for a percentage of machines to have the new cert by the time the code hits them. This is both less effective for them, and more likely to generate alarming messages on their targets and expose them faster. If certs are downloaded on demand, they can start code distribution at the same time as they hijack the cert distribution site. Less effort, faster results, and less chance of the target machines' owners noticing.
It is possible that some insider could have sneaked bogus new root certs into a service pack or even the original software I have already installed. I have no doubt that the Microsoft code signing and Windows Update sites are reasonably protected from direct internal abuse (someone walking up to the webmaster with a floppy and saying "Hey, can you just distribute this code to the Windows world for me?" won't happen). But I doubt a committed internal attacker with check-in privileges would find it too hard to insert a new code section into an updated SHELL32.DLL to add their own root cert. Again, I don't think that's especially likely to happen. I think malicious outsiders are more likely, and I see Microsoft's changes as seriously weakening their protection against that.
From: Buck Hodges <ewhodges@yahoo.com>
Subject: Liability and Software
You state in your Crypto-Gram article on Code Red: "If software companies were held liable for systematic problems in its products, just like other industries (remember Firestone tires), we'd see a whole lot less of this kind of thing." I disagree that existing product liability can be applied to software with respect to being liable for exploitable defects. Firestone is held liable for tire defects that may result in injury or death when the product is used correctly according to the instructions provided by Firestone. Is intentionally exploiting a software defect that would have no effect when the software is used properly the same thing? I don't believe that it is.
The concept of holding software companies responsible for viruses and worms being able to exploit systems is not analogous to typical product liabilities due to defects and failures. Using the Firestone tire example you mention, Firestone is being held liable for a defect that has catastrophic results when the product is used properly according to the manufacturer's instructions. Firestone is not being held liable for the results of owners or others intentionally trying to exploit a defect to cause injury or death. Obviously, if people took actions to exploit the defect, many more injuries and deaths would have resulted.
Furthermore, if authors of software were held liable for defects in software, free software could only be developed outside of this country. You may intend only for rich companies like Microsoft to be held liable, but the liability would be applied to all software authors since large companies use free software, such as Apache and Linux, on their systems. If Microsoft is negligent when they deliver software with at least one exploitable buffer overrun, is not Red Hat also negligent for delivering software (some that they developed and most that they did not) with the same exploit? Are not the Apache developers also negligent for delivering software with the same exploit? Even though the user may have downloaded and installed the software for free, the authors would still have liability. *Firestone would still be liable even if they had provided the tires and installation free of charge.*
Software defects can be exploited on a scale far larger than defects in physical products. Using the Firestone tire problems again, the defect or defects in those tires, possibly exacerbated by the Ford Explorer design, affected an extremely small number of owners relative to the total number of owners. Were the Firestone tire defect like a software defect, most Ford Explorer owners with those tires would have been involved in some sort of wreck and injured or killed unless they replaced their tires immediately when the problem was announced. Then Firestone would not be facing lawsuits from a relatively small number of owners but rather lawsuits by nearly every owner. The software defect in Microsoft's IIS web server that is exploited by Code Red would never otherwise be noticed by IIS owners. In other words, unlike the Firestone tire defect, the software defect would not have caused any damage or harm to owners had someone not sought to find and exploit the defect.
Would software companies be able to remove themselves from liability by effectively saying, "This product is dangerous if not used properly according to the directions provided by this manual"? That is the case with dangerous tools, such as lawn mowers, chainsaws, and automobiles. The manuals for lawn mowers warn that placing hands or any other object under a running mower will result in injury or death. Having said that in the manual, and probably having it printed again on the mowing deck, an owner cannot sue the manufacturer when the lawn mower does not automatically shut off prior to injury when he places a hand under a running lawn mower. Also, the owner cannot sue the manufacturer if his neighbor forces the owner's hand under the running mower; that would be a crime committed by the neighbor. Would Microsoft or another software company similarly be removed from liability if the manual stated that the product is dangerous if not used properly? Is intentionally overrunning a buffer, by either the owner or his Internet neighbor, a proper use of the software according to the manufacturer?
One more example is with regard to door locks. Most people realize that nearly all locks can be picked. Furthermore, information on how to do it is readily available on the Internet. I don't, however, remember reading a warning about lock picking vulnerabilities in the printed material that came with the last door lock that I bought. If someone picks a door lock and steals items from the house, should the manufacturer of the lock be held liable? A vulnerability in the product was intentionally exploited in order to gain access to the house. However, I don't believe that lock manufacturers are routinely sued for this defect. Instead, we prosecute the criminal or criminals that committed the crime.
Lawyers would absolutely love having the ability to bring massive class action lawsuits against Microsoft, IBM, Adobe, Apple, and others (even Red Hat) to hold them liable for the damages caused by flaws exploited by worms, viruses, and other malicious code. They would make a fortune, and we the consumers would be paying the legal bills through higher software prices.
Many of the defects (true defects -- not just operator ignorance) exploited by worms and viruses would not have any effect on the software product if the product were used properly. The abuse of defects is what differentiates this from traditional product liability. We should instead prosecute the criminals that intentionally cause mayhem and destruction.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide.