November 15, 2005
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0511.html>. These same essays appear in the "Schneier on Security" blog: <http://www.schneier.com/blog>. An RSS feed is available.
In this issue:
In 2004, when the U.S. State Department first started talking about embedding RFID chips in passports, the outcry from privacy advocates was huge. When the State Department issued its draft regulation in February, it got 2,335 comments, 98.5% negative. In response, the final State Department regulations, issued last month, contain two features that attempt to address security and privacy concerns. But one serious problem remains.
Before I describe the problem, some context on the surrounding controversy may be helpful. RFID chips are passive, and broadcast information to any reader that queries the chip. So critics, myself included, were worried that the new passports would reveal your identity without your consent or even your knowledge. Thieves could collect the personal data of people as they walk down a street, criminals could scan passports looking for Westerners to kidnap or rob, and terrorists could rig bombs to explode only when four Americans are nearby. The police could use the chips to conduct surveillance on an individual; stores could use the technology to identify customers without their knowledge.
RFID privacy problems are larger than passports and identity cards. The RFID industry envisions these chips embedded everywhere: in the items we buy, for example. But even a chip that only contains a unique serial number could be used for surveillance. And it's easy to link the serial number with an identity -- when you buy the item using a credit card, for example -- and from then on it can identify you. Data brokers like ChoicePoint will certainly maintain databases of RFID numbers and associated people; they'd do a disservice to their stockholders if they didn't.
The State Department downplayed these risks by insisting that the RFID chips only work at short distances. In fact, last month's publication claims: "The proximity chip technology utilized in the electronic passport is designed to be read with chip readers at ports of entry only when the document is placed within inches of such readers." The problem is that this confuses three distinct distances: the designed range at which the chip is specified to be read, the maximum range at which the chip could be read, and the eavesdropping range -- the maximum range at which the chip's communications with a legitimate reader could be intercepted with specialized equipment. The first is indeed inches, but the second was demonstrated earlier this year to be 69 feet. The third is significantly longer.
And remember, technology always gets better -- it never gets worse. It's simply folly to believe that these ranges won't get longer over time.
To its credit, the State Department listened to the criticism. As a result, RFID passports will now include a thin radio shield in their covers, protecting the chips when the passports are closed. Although some have derided this as a tinfoil hat for passports, the fact is the measure will prevent the documents from being snooped when closed.
However, anyone who travels knows that passports are used for more than border crossings. You often have to show your passport at hotels and airports, and while changing money. More and more it's an identity card; new Italian regulations require foreigners to show their passports when using an Internet cafe.
Because of this, the State Department added a second, and more-important, feature: access control. The data on the chip will be encrypted, and the key is printed on the passport. A customs officer swipes the passport through an optical reader to get the key, and then the RFID reader uses the key to communicate with the RFID chip.
This means that the passport holder can control who gets access to the information on the chip, and someone cannot skim information from the passport without first opening it up and reading the information inside. This also means that a third party can't eavesdrop on the communication between the card and the reader, because it's encrypted.
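The idea behind this access-control scheme can be sketched in a few lines of code. This is a simplified illustration, not the actual ICAO protocol: the function names, the key-derivation hash, and the toy XOR stream cipher are all my inventions, chosen only to show why a reader must optically scan the open passport before it can talk to the chip.

```python
import hashlib
import hmac

def derive_key(passport_number: str, birth_date: str, expiry_date: str) -> bytes:
    """Derive a chip-access key from data printed inside the passport.
    (A simplification: the real scheme derives keys from the
    machine-readable zone, but the principle is the same.)"""
    printed_data = (passport_number + birth_date + expiry_date).encode()
    return hashlib.sha256(printed_data).digest()

def encrypt(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher, for illustration only. Because XOR is its
    own inverse, the same function both encrypts and decrypts."""
    keystream = b""
    counter = 0
    while len(keystream) < len(data):
        keystream += hmac.new(key, counter.to_bytes(4, "big"), hashlib.sha256).digest()
        counter += 1
    return bytes(d ^ k for d, k in zip(data, keystream))

plaintext = b"SMITH<<JOHN ... biographic data"

# The chip protects its data under the key derived from the printed page.
chip_key = derive_key("123456789", "750101", "101130")
ciphertext = encrypt(chip_key, plaintext)

# A reader that optically scanned the open passport derives the same key:
reader_key = derive_key("123456789", "750101", "101130")
assert encrypt(reader_key, ciphertext) == plaintext

# A skimmer that never saw the printed page has no key and recovers noise:
wrong_key = derive_key("000000000", "000101", "000101")
assert encrypt(wrong_key, ciphertext) != plaintext
```

The essential property is that the key never travels over the radio link: it comes from the printed page, so the passport holder's act of opening the document and handing it over is what grants access.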
By any measure, these features are exemplary, and should serve as a role model for any RFID identity-document applications. Unfortunately, there's still a problem.
RFID chips, including the ones specified for U.S. passports, can still be uniquely identified by their radio behavior. Specifically, these chips have a unique identification number used for collision avoidance. It's how the chips avoid communications problems if you put a bagful of them next to a reader. This is something buried deep within the chip, and has nothing to do with the data or application on the chip.
Chip manufacturers don't like to talk about collision IDs or how they work, but researchers have shown how to uniquely identify RFID chips by querying them and watching how they behave. And since these queries access a lower level of the chip than the passport application, an access-control mechanism doesn't help.
To fix this, the State Department needs to require that the chips used in passports implement a collision-avoidance system not based on unique serial numbers. The RFID spec -- ISO 14443A is its name -- allows for a random system, but I don't believe any manufacturer implements it this way.
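A toy simulation shows why a fixed collision-avoidance ID undoes the access control. Everything here is schematic -- the class names and the 32-bit ID size are illustrative assumptions, not the ISO 14443A wire format -- but it captures the distinction between a factory-burned ID and the randomized variant the spec permits.

```python
import random

class FixedUIDChip:
    """Chip with a factory-burned anti-collision ID (trackable)."""
    def __init__(self, uid: int):
        self.uid = uid
    def anticollision_id(self) -> int:
        return self.uid  # same value every time the chip powers up

class RandomUIDChip:
    """Chip that generates a fresh anti-collision ID on each activation
    (permitted by the spec, but apparently not implemented in practice)."""
    def anticollision_id(self) -> int:
        return random.getrandbits(32)

def sightings(chip, n_readers: int):
    """IDs that a network of surveillance readers would observe,
    without ever needing the passport's access-control key."""
    return [chip.anticollision_id() for _ in range(n_readers)]

fixed = FixedUIDChip(uid=0xDEADBEEF)
# Every reader sees the same ID, so sightings link into one movement track:
assert len(set(sightings(fixed, 10))) == 1

randomized = RandomUIDChip()
# Each activation yields a new ID; sightings can't be linked this way:
assert len(set(sightings(randomized, 10))) > 1
```

Note that the tracking in the first case works below the application layer: the encrypted passport data is never read, so the access-control fix is simply irrelevant to this attack.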
Adding chips to passports can inarguably be good for security. Initial chips will only contain the information printed on the passport, but this system has always envisioned adding digital biometrics like photographs or even fingerprints, which will make passports harder to forge, and stolen passports harder to use.
But the State Department's contention that it needs an RFID chip -- that smartcard-like contact chips won't work -- is much less convincing. Even with all this security, RFID should be the design choice of last resort.
The State Department has done a great job addressing specific security and privacy concerns, but its lack of technical skills is hurting it. The collision-avoidance ID issue is just one example of where, apparently, the State Department didn't have enough of the expertise it needed to do this right.
Of course it can fix the problem, but the real issue is how many other problems like this are lurking in the details of its design? We don't know, and I doubt the State Department knows, either. The only way to vet its design, and to convince us that RFID is necessary, would be to open it up to public scrutiny.
The State Department's plan to issue RFID passports by October 2006 is both precipitous and risky. It made a mistake designing this behind closed doors. There needs to be some pretty serious quality assurance and testing before deploying this system, and this includes careful security evaluations by independent security experts. Right now the State Department has no intention of doing that; it's already committed to a scheme before knowing if it even works or if it protects privacy.
This essay previously appeared on Wired.com:
At a security conference last month, Howard Schmidt, the former White House cybersecurity adviser, took the bold step of arguing that software developers should be held personally accountable for the security of the code they write.
He's on the right track, but he's made a dangerous mistake. It's the software manufacturers that should be held liable, not the individual programmers. Getting this one right will result in more-secure software for everyone; getting it wrong will simply result in a lot of messy lawsuits.
To understand the difference, it's necessary to understand the basic economic incentives of companies, and how businesses are affected by liabilities. In a capitalist society, businesses are profit-making ventures, and they make decisions based on both short- and long-term profitability. They try to balance the costs of more-secure software -- extra developers, fewer features, longer time to market -- against the costs of insecure software: expense to patch, occasional bad press, potential loss of sales.
The result is what you see all around you: lousy software. Companies find that it's cheaper to weather the occasional press storm, spend money on PR campaigns touting good security, and fix public problems after the fact than to design security right from the beginning.
The problem with this analysis is that most of the costs of insecure software fall on the users. In economics, this is known as an externality: an effect of a decision not borne by the decision maker.
Normally, you would expect users to respond by favoring secure products over insecure products -- after all, they're making their own buying decisions based on the same capitalist model. But that's not generally possible. In some cases, software monopolies limit the available product choice; in other cases, the "lock-in effect" created by proprietary file formats or existing infrastructure or compatibility requirements makes it harder to switch; and in still other cases, none of the competing companies have made security a differentiating characteristic. In all cases, it's hard for an average buyer to distinguish a truly secure product from an insecure product with a "boy, are we secure" marketing campaign.
The end result is that insecure software is common. But because users, not software manufacturers, pay the price, nothing improves. Making software manufacturers liable fixes this externality.
Watch the mechanism work. If end users can sue software manufacturers for product defects, then the cost of those defects to the software manufacturers rises. Manufacturers are now paying the true economic cost for poor software, and not just a piece of it. So when they're balancing the cost of making their software secure versus the cost of leaving their software insecure, there are more costs on the latter side. This will provide an incentive for them to make their software more secure.
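With made-up numbers (purely illustrative, not from any real vendor), the incentive shift is easy to see:

```python
# Hypothetical costs for one software release, all figures invented.
cost_to_secure = 10_000_000            # extra developers, fewer features, delay
vendor_cost_of_insecurity = 4_000_000  # patching, bad press, lost sales
user_losses = 9_000_000                # the externality: borne by users today

# Without liability, the vendor compares only its own costs and ships insecure:
assert cost_to_secure > vendor_cost_of_insecurity

# With liability, the users' losses land back on the vendor's ledger,
# and securing the product becomes the cheaper option:
assert cost_to_secure < vendor_cost_of_insecurity + user_losses
```

The numbers are arbitrary; the point is structural. Liability doesn't change the total cost of insecurity, it changes whose balance sheet it appears on -- and therefore who has an incentive to reduce it.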
To be sure, making software more secure will cost money, and manufacturers will have to pass those costs on to users in the form of higher prices. But users are already paying extra costs for insecure software: costs of third-party security products, costs of consultants and security-services companies, direct and indirect costs of losses. Making software manufacturers liable moves those costs around, and as a byproduct causes the quality of software to improve.
This is why Schmidt's idea won't work. He wants individual software developers to be liable, and not the corporations. This will certainly give pissed-off users someone to sue, but it won't reduce the externality and it won't result in more-secure software.
Computer security isn't a technological problem -- it's an economic problem. Socialists might imagine that companies will improve software security out of the goodness of their hearts, but capitalists know that it needs to be in companies' economic best interest. We'll have fewer vulnerabilities when the entities that have the capability to reduce those vulnerabilities have the economic incentive to do so. And this is why solutions like liability and regulation work.
SlashDot thread on Schmidt's concerns:
Dan Farber has a good commentary on my essay.
This essay originally appeared on Wired.com:
There has been some confusion about this in the comments -- both on Wired and on my blog -- that somehow this means software vendors will be expected to achieve perfection, and that they will be 100% liable for anything short of that. Clearly that's ridiculous, and that's not the way liability works. But equally ridiculous is the notion that software vendors should be 0% liable for defects. Somewhere in the middle there is a reasonable amount of liability, and that's what I want the courts to figure out.
Howard Schmidt writes: "It is unfortunate that my comments were reported inaccurately; at least Dan Farber has been trying to correct the inaccurate reports with his blog. I do not support PERSONAL LIABILITY for the developers NOR do I support liability against vendors. Vendors are nothing more then people (employees included) and anything against them hurts the very people who need to be given better tools, training and support."
Howard wrote this essay on the topic, to explain what he really thinks. He is against software liabilities.
Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Why Election Technology is Hard:
Electronic Voting Machines:
The Security of Checks and Balances
Security Information Management Systems (SIMS):
Technology and Counterterrorism:
The Trojan Defense
Why Digital Signatures are Not Signatures
Programming Satan's Computer: Why Computers Are Insecure
Elliptic Curve Public-Key Cryptography
The Future of Fraud: Three reasons why electronic commerce is different
Software Copy Protection: Why copy protection does not work
A company called Metacharge has rolled out an e-commerce security service in the United Kingdom. For about $2 per name, website operators can verify their customers against the UK Electoral Roll, the British Telecom directory, and a mortality database.
That's not cheap, and the company is mainly targeting customers in high-risk industries, such as online gaming. But the economics behind this system are interesting to examine. They illustrate externalities associated with fraud and identity theft, and why leaving matters to the companies won't fix the problem.
The mortality database is interesting. According to Metacharge, "the fastest growing form of identity theft is not phishing; it is taking the identities of dead people and using them to get credit."
For a website, the economics are straightforward. It costs $2 to verify that a customer is alive. If the probability the customer is actually dead (and therefore fraudulent) times the average losses due to this dead customer is more than $2, this service makes sense. If it is less, then the service doesn't. For example, if dead customers are one in ten thousand, and they cost $15,000 each, then the service is not worth it. If they cost $25,000 each, or if they occur twice as often, then it is worth it.
Imagine now that there is a similar service that identifies identity fraud among living people. The same economic analysis would also hold. But in this case, there's an externality: there is an additional cost of fraud borne by the victim and not by the website. So if fraud using the identity of living customers occurs at a rate of one in ten thousand, and each one costs $15,000 to the website and another $10,000 to the victim, the website will conclude that the service is not worthwhile, even though paying for it is cheaper overall. This is why legislation is needed: to raise the cost of fraud to the websites.
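The break-even arithmetic in the last two paragraphs can be made explicit. The figures are the hypothetical ones from the text:

```python
def expected_fraud_loss(incidents_per_10k: float, cost_per_incident: float) -> float:
    """Expected fraud loss per customer checked."""
    return incidents_per_10k * cost_per_incident / 10_000

CHECK_COST = 2.00  # the per-name verification fee

# Dead-customer fraud: one in ten thousand, $15,000 per incident.
assert expected_fraud_loss(1, 15_000) == 1.50  # under $2: not worth checking
assert expected_fraud_loss(1, 25_000) == 2.50  # costlier incidents: worth it
assert expected_fraud_loss(2, 15_000) == 3.00  # twice as frequent: worth it

# Living-victim fraud: the website counts only its own $15,000 share...
site_loss = expected_fraud_loss(1, 15_000)   # $1.50 -- so the site skips the check
# ...but the victim loses another $10,000, so the true total is $25,000:
total_loss = expected_fraud_loss(1, 25_000)  # $2.50 -- worth it overall
assert site_loss < CHECK_COST < total_loss
```

That last inequality is the externality in one line: the check is rational for society but irrational for the website, which is exactly the gap legislation is meant to close.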
There's another economic trade-off. Websites have two basic opportunities to verify customers using services such as these. The first is when they sign up the customer, and the second is after some kind of non-payment. Most of the damages to the customer occur after the non-payment is referred to a credit bureau, so it would make sense to perform some extra identification checks at that point. It would certainly be cheaper to the website, as far fewer checks would be paid for. But because this second opportunity comes after the website has suffered its losses, it has no real incentive to take advantage of it. Again, economics drives security.
I've repeatedly said that two-factor authentication won't stop phishing, because the attackers will simply modify their techniques to get around it. Here's an example where that has happened:
"Scandinavian bank Nordea was forced to shut down part of its Web banking service for 12 hours last week following a phishing attack that specifically targeted its paper-based one-time password security system.
"According to press reports, the scam targeted customers that access the Nordea Sweden Web banking site using a paper-based single-use password security system.
"A blog posting by Finnish security firm F-Secure says recipients of the spam e-mail were directed to bogus Web sites but were also asked to enter their account details along with the next password on their list of one-time passwords issued to them by the bank on a 'scratch sheet.'"
From F-Secure's blog:
"The fake mails were explaining that Nordea is introducing new security measures, which can be accessed at www.nordea-se.com or www.nordea-bank.net (fake sites hosted in South Korea).
"The fake sites looked fairly real. They were asking the user for his personal number, access code and the next available scratch code. Regardless of what you entered, the site would complain about the scratch code and asked you to try the next one. In reality the bad boys were trying to collect several scratch codes for their own use."
Two-factor authentication won't stop identity theft, because identity theft is not an authentication problem. It's a transaction-security problem. I've written about that already. Solutions need to address the transactions directly, and my guess is that they'll be a combination of things. Some transactions will become more cumbersome. It will definitely be more cumbersome to get a new credit card. Back-end systems will be put in place to identify fraudulent transaction patterns. Look at credit card security; that's where you're going to find ideas for solutions to this problem.
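The difference between authenticating the session and securing the transaction can be sketched in code. This toy model is mine, not any bank's actual protocol: the scratch codes, the shared key, and the function names are all invented for illustration.

```python
import hashlib
import hmac

# --- One-time passwords: authenticate the session, not the transaction ---
scratch_codes = {"4711", "0815", "2342"}  # customer's paper one-time passwords

def bank_accepts_otp(otp: str, transaction: str) -> bool:
    """The bank checks the code, but nothing binds it to the transaction."""
    return otp in scratch_codes

# A phisher tricks the customer into typing a code on a fake site...
phished_code = "4711"
# ...and replays it in real time, attached to a transaction the customer never saw:
assert bank_accepts_otp(phished_code, "pay $5,000 to attacker")

# --- Transaction authentication: secure the transaction itself ---
customer_key = b"secret shared between the customer's device and the bank"

def sign_transaction(key: bytes, transaction: str) -> str:
    """MAC computed over the transaction details themselves."""
    return hmac.new(key, transaction.encode(), hashlib.sha256).hexdigest()

def bank_accepts_signed(transaction: str, tag: str) -> bool:
    return hmac.compare_digest(tag, sign_transaction(customer_key, transaction))

# The customer signs the transaction they actually intended:
tag = sign_transaction(customer_key, "pay $50 to electric company")
assert bank_accepts_signed("pay $50 to electric company", tag)
# A phisher who captures the tag can't redirect it to a different transaction:
assert not bank_accepts_signed("pay $5,000 to attacker", tag)
```

In the first scheme, anything the customer can type, the phisher can relay; in the second, what the customer authorizes is cryptographically tied to what the bank executes.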
Unfortunately, until financial institutions are liable for all the losses associated with identity theft, and not just their direct losses, we're not going to see a lot of these solutions. I've written about this before as well.
We got them for credit cards because Congress mandated that the banks were liable for all but the first $50 of fraudulent transactions.
Here's a related story. The Bank of New Zealand suspended Internet banking because of phishing concerns. Now there's a company that is taking the threat seriously.
Here's a phishing scam using SMS messages and a telephone call.
Passport required to use the Internet in Italy. Why? Terrorism, of course.
Many color laser printers embed secret information in every page they print -- information that can be used to identify you. Here, the EFF has cracked the code of the Xerox DocuColor series of printers.
The UK has used terrorism laws to stifle free speech: <http://www.schneier.com/blog/archives/2005/10/...>
The police are asking for access to private webcams.
This is just a lovely essay. Very subtle.
A weird and amusing tour of U.S. government security awareness posters.
"Study Reveals Pittsburgh Unprepared for Full-Scale Zombie Attack"
An absolutely amazing story about phantom ATM withdrawals and British banking from the early 1990s; it has just become public. Read how a very brittle security system, coupled with banks using the legal system to avoid fixing the problem, resulted in lots of innocent people losing money to phantom withdrawals. Read how lucky everyone was that the catastrophic security problem was never discovered by criminals.
This is an interesting (six-month-old) story about a supermarket loyalty program. Person 1 loses a valuable watch in a supermarket. Person 2 finds it and, instead of returning it as required by law, keeps it. Two years later, he brings it in for repair. The repairman checks the serial number against a lost/stolen database. Person 2 doesn't admit he found the watch, but instead claims that he bought it in some sort of used watch store. The police check the loyalty-program records from the supermarket and find that Person 2 was in the supermarket within hours of when Person 1 said he lost the watch.
FBI abuses of the USA Patriot Act:
One of the sillier movie-plot threats I've seen recently: terrorists playing bingo in Kentucky.
Interesting commentary on digital identification cards.
Movie-plot threats aren't limited to terrorism. Bird flu is the current movie-plot threat in the medical world. People are convinced that they have the disease when they don't.
New technology allows eavesdropping through walls.
Missouri wants to track people's movements through their cell phones.
There's a new Australian anti-terrorism law in the works. It includes such things as 14-day secret detention without arrest by security services, shoot-to-kill "on suspicion" powers for police, and imprisonment and fines for revealing an individual has been the subject of an investigation. The news reports are pretty bad.
Here's a security threat I'll bet you never even considered before -- convicted felons with large dogs: "The Contra Costa County board of supervisors [in California] unanimously supported on Tuesday prohibiting convicted felons from owning any dog that is aggressive or weighs more than 20 pounds, making it all but certain the proposal will become law when it formally comes before the board for approval Nov. 15." These are not felons in jail. These are felons who have been released from jail after serving their time. They're allowed to re-enter society, but letting them own a large dog would be just too much of a risk to the community?
Judges have blocked police from using cell phones as tracking devices. Good news for once.
NIST hosted a cryptographic hash workshop to talk about what to do in the wake of the recent cryptanalytic attacks on SHA-1. Lots of interesting discussion; no conclusions except that more research and discussion is required. I liveblogged the event.
Authenticating people by their typing patterns:
I've often said that security discussions are rarely about security. Here's a story that illustrates that. A New Jersey mother doesn't like her child's school bus stopping at McDonald's on Friday mornings. Apparently unable to come up with a cogent argument against these stops (which seems odd to me, honestly, as I can think of several), she invokes movie-plot security threats: "Tyler wants the stops to, well, stop before a student is hit by someone speeding into the drive-thru or before a robbery occurs and her son and other students are inside."
Here's an interesting paper on Oracle's password hashing algorithm.
This fascinating research paper discusses the vulnerabilities of the U.S. Navy Fleet Broadcast System in the 1980s. If you remember, John Walker and cohorts handed the Soviets the secrets that allowed them to eavesdrop on this system.
MIT is setting up a 24/7 wireless tracking network:
I think this instantaneous data grabbing system is a harbinger of the future.
Microsoft calls for a national privacy law. No, really. Microsoft is doing better about privacy recently. Certainly the devil is in the details, but this is a good start.
Really good blog posting on national security letters, increasingly used by the FBI to collect personal information without any judicial oversight.
Now this is a surprise. Richard Clarke advised New York City to perform those pointless subway searches. Reading the New York Times article, it seems that his goal wasn't to deter terrorism but simply to move it from the New York City subways to another target -- perhaps the Boston subways.
A team at the German Federal Agency for Information Technology Security has factored a 193-digit number. (Note that this is not a record; a 200-digit number was factored in May. But there's a cash prize associated with this one.)
Sniffing passwords is both easy and fun:
Military use for silly string: to find trip wires.
My comments on a Business Week article on fraudulent stock transactions over the Internet:
Here's an Illinois bill that could make it illegal to own magnetic strip readers:
The Sky Posse is an organization, not affiliated with anyone official, of people who vow to fight back in the event of an airplane hijacking. Members wear cool-looking pins, made to resemble a Western sheriff's badge, with the slogan "Ready to Roll." Kind of silly, but probably harmless.
A report that the CIA slipped software bugs to the Soviets in the 1980s:
Hans Bethe was one of the first nuclear scientists, a member of the Manhattan Project, and a political activist. In a recent article about him, there's a great quote: "Sometimes insistence on 100% security actually impairs our security, while the bold decision -- though at the time it seems to involve some risk -- will give us more security in the long run."
Metadata in Microsoft Office:
There's a new report from Sandia National Laboratories (written with Lawrence Berkeley National Laboratory) titled "Guidelines to Improve Airport Preparedness Against Chemical and Biological Terrorism." It's classified, but there's an unclassified version available.
The NSA has a site for kids. Crypto Cat, Decipher Dog, and friends.
Mark Russinovich found a rootkit on his system: <http://www.sysinternals.com/blog/2005/10/...>. After much analysis, he discovered that the rootkit had been installed as part of the DRM software that shipped with a CD he bought. The package cannot be uninstalled. Even worse, it actively cloaks itself from process listings and the file system. So he posted about it on his blog, and ended up creating a firestorm -- and a soap opera.
Removing the rootkit kills Windows:
Washington Post news story:
Sony lies about their rootkit: "This Service Pack removes the cloaking technology component that has been recently discussed in a number of articles published regarding the XCP Technology used on SONY BMG content protected CDs. This component is not malicious and does not compromise security. However to alleviate any concerns that users may have about the program posing potential security vulnerabilities, this update has been released to enable users to remove this component from their computers." But their update does not remove the rootkit; it just removes the cloaking.
And you can use the rootkit to avoid World of Warcraft spyware.
F-Secure makes a good point about how this sort of thing can seriously affect the reliability of Windows:
Declan McCullagh has a good essay on the topic. There will be lawsuits.
The Italian police are getting involved.
Here's a Trojan that uses Sony's rootkit to hide.
Microsoft will update its security tools to detect and remove the rootkit. That makes a lot of sense. If Windows crashes because of this -- and others of this ilk -- Microsoft will be blamed.
The Copyright Office of the U.S. Library of Congress is conducting its required regular review of the anti-circumvention provisions of the Digital Millennium Copyright Act. Comments can be submitted over the Internet, and are due December 1st.
Schneier is delivering the keynote at the InfoSecurity Conference in New York on December 8th.
Counterpane has published an analysis of the attack trends we see in the course of security monitoring. We're watching something like 500 networks in 35 different countries, so it's pretty interesting.
The manufacturer of Tasers, Taser International, Inc., is selling a Taser Cam. The device mounts on the Taser and records -- both audio and video -- whenever the weapon is turned on, regardless of whether the weapon is fired.
It's the same idea as having cameras record all police interrogations, or all police-car stops. It helps protect the populace against police abuse, and helps protect the police from accusations of abuse.
This is where cameras do good: when they lessen a power imbalance. Imagine if they were continuously recording the actions of elected officials -- when they were acting in their official capacity, that is.
Of course, cameras are only as useful as their data. If critical recordings are "lost," then there's no accountability. And the system is pretty kludgy: the recording has to be downloaded to a computer over a USB cable.
How soon before the cameras simply upload their recordings, in real time, to some trusted vault somewhere?
A simply horrible lead sentence in a Manila Times story: "If you see a man aged 17 to 35, wearing a ball cap, carrying a backpack, clutching a cellular phone and acting uneasily, chances are he is a terrorist."
Let's see: approximately 4.5 million people use the New York City subway every day. Assume that the above profile fits 1% of them; that's 45,000 people. Does that mean that there are 45,000 terrorists riding the New York City subways every single day? Seems unlikely.
The rest of the article gets better, but still....
If you'll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.
Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly -- less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob's effects varied greatly from organization to organization: some networks were brought to their knees, while others didn't even notice.
The worm started spreading on Sunday, 14 August. Honestly, it wasn't much of a big deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it's much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn't think it was worth all the press coverage.
By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other -- stealing "owned" computers back and forth. If your network was infected, it was a mess.
Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm's creation was not a hacker, but rather a criminal looking to profit.
The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they're increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.
What could you have done beforehand to protect yourself against Zotob and its kin? "Install the patch" is the obvious answer, but it's not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install -- at least Microsoft Windows system patches -- large corporate networks can't. Far too often, patches cause other things to break.
It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.
Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?
Given that it's impossible to know what's coming beforehand, how you respond to an actual worm largely determines your defense's effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it's impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.
The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don't think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.
This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane is the world's leading protector of networked information - the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of Counterpane Internet Security, Inc.
Copyright (c) 2005 by Bruce Schneier.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.