April 15, 2003
by Bruce Schneier
Founder and CTO
Counterpane Internet Security, Inc.
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org.
Copyright (c) 2003 by Counterpane Internet Security, Inc.
In this issue:
- Automated Denial-of-Service Attack Using the U.S. Post Office
- The Doghouse: EverSeal Solutions
- Crypto-Gram Reprints
- Counterpane News
- Security Notes from All Over: Baseball
- National Crime Information Center (NCIC) Database Accuracy
In December 2002, the notorious “spam king” Alan Ralsky gave an interview. Aside from his usual comments that antagonized spam-hating e-mail users, he mentioned his new home in West Bloomfield, Michigan. The interview was posted on Slashdot, and some enterprising reader found his address in some database. Egging each other on, the Slashdot readership subscribed him to thousands of catalogs, mailing lists, information requests, etc. The results were devastating: within weeks he was getting hundreds of pounds of junk mail per day and was unable to find his real mail amongst the deluge.
Ironic, definitely. But more interesting is the related paper by security researchers Simon Byers, Avi Rubin, and Dave Kormann, who have demonstrated how to automate this attack.
If you type the following search string into Google — “request catalog name address city state zip” — you’ll get links to over 250,000 (the exact number varies) Web forms where you can type in your information and receive a catalog in the mail. Or, if you follow where this is going, you can type in the information of anyone you want. If you’re a little bit clever with Perl (or any other scripting language), you can write a script that will automatically harvest the pages and fill in someone’s information on all 250,000 forms. You’ll have to do some parsing of the forms, but it’s not too difficult. (There are actually a few more problems to solve. For example, the search engines normally don’t return more than 1,000 actual hits per query.) When you’re done, voila! It’s Slashdot’s attack, fully automated and dutifully executed by the U.S. Postal Service.
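The form-parsing step the paper describes is the only part that takes any real work, and even that is routine. A minimal sketch in Python (stdlib only; the search-engine harvesting and form-submission steps are deliberately omitted, and the sample form is made up for illustration):

```python
# Sketch of parsing a catalog-request page: given its HTML, pull out each
# form's action URL and the names of its text inputs. Python stdlib only.
from html.parser import HTMLParser

class FormFieldParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.forms = []        # list of (action URL, [text-field names])
        self._current = None   # form currently being parsed, if any

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "form":
            self._current = (attrs.get("action", ""), [])
            self.forms.append(self._current)
        elif tag == "input" and self._current is not None:
            # Only collect fields a script would need to fill in.
            if attrs.get("type", "text") in ("text", "email"):
                self._current[1].append(attrs.get("name", ""))

    def handle_endtag(self, tag):
        if tag == "form":
            self._current = None

# Hypothetical catalog-request form, for illustration only.
parser = FormFieldParser()
parser.feed('<form action="/catalog"><input name="name">'
            '<input name="zip"><input type="submit"></form>')
# parser.forms now holds the action URL and fillable field names.
```

From there, mapping harvested field names like "name" and "zip" onto a victim's information and POSTing the result is a few more lines in any scripting language, which is the paper's point: the whole attack automates trivially.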
If this were just a nasty way to harass people you don’t like, it wouldn’t be worth writing about. What’s interesting about this attack is that it exploits the boundary between cyberspace and the real world. The reason spamming normally doesn’t work with physical mail is that sending a piece of mail costs money, and it’s just too expensive to bury someone’s house in mail. Subscribing someone to magazines and signing them up for embarrassing catalogs is an old trick, but it has limitations because it’s physically difficult to do it on a large scale. But this attack exploits the automation properties of the Internet, the Web availability of catalog request forms, and the paper world of the Post Office and catalog mailings. All the pieces are required for the attack to work.
And there’s no easy defense. Companies want to make it easy for someone to request a catalog. If the attacker used an anonymous connection to launch his attack — one of the zillions of open wireless networks would be a good choice — I don’t see how he would ever get caught. Even worse, it could take years for the victim to get his name off all of the mailing lists — if he ever could.
Individual catalog companies can protect themselves by adding a human test to their sign-up form. The idea is to add a step that a person can easily do, but a machine can’t. The most common technique is to produce a text image that OCR technology can’t understand but the human eye can, and to require that the text be typed into the form. These have been popping up on Web sites to prevent automatic registration; I’ve seen them on Yahoo and PayPal, for example.
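A minimal sketch of what the server side of such a human test might look like, assuming a stateless design where the challenge text is bound to an HMAC-signed token (the image-distortion step is elided, and this is an illustration of the idea, not any site's actual implementation):

```python
# Sketch of a stateless "human test": the server issues a random challenge
# plus an HMAC token over it, then checks the user's typed answer against
# the token. In practice the challenge text would be rendered as a
# distorted image that OCR can't read; here we just return the string.
import hmac, hashlib, secrets, string

SERVER_KEY = secrets.token_bytes(32)  # kept secret on the server

def issue_challenge():
    """Return (challenge text, signed token) for a new sign-up form."""
    text = "".join(secrets.choice(string.ascii_uppercase) for _ in range(6))
    token = hmac.new(SERVER_KEY, text.encode(), hashlib.sha256).hexdigest()
    return text, token

def verify(answer, token):
    """Check the user's typed answer against the token from the form."""
    expected = hmac.new(SERVER_KEY, answer.upper().encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token)
```

Because the token is keyed to the server's secret, a script can't forge a valid answer without actually reading the image, which is exactly the step machines are bad at.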
If everyone used this sort of thing, the attack wouldn’t work. But the economics of the situation means that this won’t happen. The attack works in aggregate; each individual catalog mailer only participates to a small degree. There would have to be a lot of fraud for it to be worth the money for a single catalog mailer to install the countermeasure. (Making it illegal to send a catalog to someone who didn’t request it could change the economics.)
Attacks like this abound. They arise when an old physical process is moved onto the Internet, and is then automated in some unanticipated way. They’re emergent properties of the systems. And they’re going to become more prevalent in the years ahead.
It’s a one-time pad, which is reason enough to doghouse these guys. But they have this truly beautiful quote on their “How it Works” page: “Now you might think that because there are only some 72 commonly used letters, numbers and punctuation marks, where the upper bit of a byte is always a ‘0’, that the attacker’s job is easier and he can guess some of them. That is why we scramble your data with the DES encryptions before we OTP encrypt it. The DES operation scrambles the data on a bit basis as well as a byte basis, leaving all number bits in question.”
Crypto-Gram is currently in its sixth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
How to Think About Security:
Is 1024 Bits Enough?
Liability and Security
Natural Advantages of Defense: What Military History Can Teach Network Security, Part 1
Cryptography: The Importance of Not Being Different:
Threats Against Smart Cards:
Attacking Certificates with Computer Viruses:
From a news article on the arrest of al Qaeda operational planner Khalid Shaikh Mohammed: “Much of the information on Mohammed’s laptop computer was protected by an encryption code that CIA analysts cracked easily, U.S. intelligence officials said. The analysts said the code was surprisingly simple.” More likely is that the key was stored in some temporary file on the disk somewhere, or fell to a dictionary attack. But maybe these guys use home-grown cryptography.
Actual problems with anonymous computerized voting:
Users don’t trust Microsoft security, but they still use Microsoft products:
Interesting paper, “Strike and Counterstrike: The Law on Automated Intrusions and Striking Back.”
Really interesting paper, “The Myth of Security at Canada’s Airports.”
The origins of that fake news story about a virus-infected printer being smuggled into Iraq during the First Gulf War.
Saudi terrorist sympathizers learn computer security at American universities. “After studying in Texas and Indiana, al-Hussayen began the University of Idaho’s doctoral program in computer science in 1999, with a specialty in computer security and intrusion techniques, according to the indictment.”
Analyzing the trade-offs of security gained and freedoms lost:
Interesting paper on how to use memory errors to attack a virtual computer. The attack exploits the fact that a “time of compilation” check is not necessarily valid at “time of use.”
There are several massive networks of compromised machines, one consisting of around 140,000 computers. The machines have had bots placed on them; the bots establish communication with Internet Relay Chat (IRC) servers to receive commands. Given that it takes hundreds of networked computers to take down a major Internet site in a denial-of-service attack, these networks could do significant damage.
New way to steal passwords. A Discover credit card customer receives an e-mail telling him that his account is on hold due to inactivity, and that in order to reactivate his account, he must log in to a phony Web site. The information collected includes plenty of data that would enable identity theft: Social Security number, mother’s maiden name, account number, and passwords. Similar scams have targeted PayPal and eBay customers.
Survey says that two-thirds of all security breaches are the result of human error. The survey seems really sloppy, but I believe the results.
President Bush signed an executive order allowing details of the Internet to be classified for security purposes.
Interesting report on spam and how to avoid it
A proposed bill to extend the DMCA that could potentially make firewalls and other security devices illegal.
Vendor tests of face recognition systems. “Typically, the watch list task is more difficult than the identification or verification tasks alone. Figure 8 shows detection and identification rates for varying watch list sizes at a false alarm rate of 1%. For the best system using a watch list of 25 people, the detection and identification rate is 77%. Increasing the size of the watch list to 3,000 people decreases the detection and identification rate to 56%.”
Risks of wiretapping today:
New security flag for IPv4. This has profound implications for Internet security, and is likely to be deployed world-wide within months.
A nice article that captures the spirit of the Computers, Freedom, and Privacy conference in New York earlier this month:
Man sent to jail for selling mod chips for the Xbox
Bag matching and U.S. airlines: why isn’t it happening?
Richard Clarke no longer works for the Bush Administration, so he can speak his mind about cybersecurity in the U.S. government.
Bruce Schneier gave the keynote address at the Computers, Freedom, and Privacy conference in New York (in April). You can listen to it here:
Schneier is speaking at the RSA Conference in San Francisco. He is speaking on “Security Proxies and Agenda” on Wednesday, April 16, at 9:00 AM and on “How to Think About Security” on Thursday, April 17, at 10:00 AM. He is also chairing the Cryptographer’s Panel on April 14.
A couple of weeks ago I was listening to a baseball game on the radio. The announcer was talking about the new antiterrorism security countermeasures at the ballpark. One of them, he said, was that people are not allowed to bring bottles and cans into the park with them.
This is, of course, ridiculous. The prohibition against bringing outside drinks into the park has nothing to do with terrorism. The park wants people to buy drinks from its concession stands, at inflated prices, and to not be able to undercut those prices by bringing in drinks from outside.
This is an example of a non-security agenda co-opting a security countermeasure, and it happens a lot. Airlines were in favor of the photo ID requirement not because of some vague threat of terrorism, but because it killed the practice of reselling nonrefundable tickets. Hotels make a copy of your driver’s license not because of security, but because they want your information for their marketing database.
Security decisions are always about more than security. When trying to evaluate a particular decision, always pay attention to the non-security agendas of the people involved.
Last month the U.S. Justice Department administratively discharged the FBI of its statutory duty to ensure the accuracy and completeness of the National Crime Information Center (NCIC) database. This database is enormous. It contains over 39 million criminal records. It contains information on wanted persons, missing persons, and gang members, as well as records of stolen cars, boats, and other property. Over 80,000 law enforcement agencies have access to this database. On average, 2.8 million transactions are processed each day.
The Privacy Act of 1974 requires the FBI to make reasonable efforts to ensure the accuracy and completeness of the records in this database. Last month, the Justice Department exempted the system from the law’s accuracy requirements.
This isn’t just bad social practice, it’s bad security. A database with more errors is much less useful than a database with fewer errors, and an error-filled security database is much more likely to target innocents than it is to let the guilty go free.
To see this, let’s walk through an example. Assume a simple database — name and a single code indicating “innocent” or “guilty.” When a policeman encounters someone, he looks that person up in the database, and then arrests him if the database says “guilty.”
Example 1: Assume the database is 100% accurate. If that is the case, there won’t be any false arrests because of bad data. It works perfectly.
Example 2: Assume a 0.0001% error rate: one error in a million. (An error is defined as a person having an “innocent” code when he is guilty, or a “guilty” code when he is innocent.) Furthermore, assume that one in 10,000 people are guilty. In this case, for every 100 guilty people the database correctly identifies, it will mistakenly identify one innocent person as guilty (because of an error). And the number of guilty people erroneously listed as innocent is tiny: one in a million.
Example 3: Assume a 1% error rate — one in a hundred — and the same one in 10,000 ratio of guilty people. The results are very different. For every 100 guilty people the database correctly identifies, it will mistakenly identify 10,000 innocent people as guilty. The number of guilty people erroneously listed as innocent is larger, but still very small: one in 100.
The differences between examples 2 and 3 are striking. In example 2, one person is erroneously arrested for every 100 people correctly arrested. In example 3, one person is correctly arrested for every 100 people erroneously arrested. The increase in error rate makes the database all but useless as a system for deciding whom to arrest. And this is despite the fact that, in both cases, almost no guilty people get away because of a database error.
The reason for this phenomenon is that the number of guilty people is a very small percentage of the population. If one in ten people were guilty, then a 0.0001% error rate would mistakenly arrest one innocent for every 100,000 guilty, and a 1% error rate would arrest approximately one innocent for every ten guilty. And if the number of guilty people is even less than one in ten thousand, then the problem of arresting innocents magnifies even more as the database accumulates errors.
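The arithmetic in the examples above can be checked directly (a sketch; the one-million-person population is arbitrary, since only the ratios matter):

```python
# Work through the false-positive arithmetic per one million database records.
def arrest_stats(error_rate, guilty_fraction, population=1_000_000):
    guilty = population * guilty_fraction
    innocent = population - guilty
    true_arrests = guilty * (1 - error_rate)   # guilty, correctly flagged
    false_arrests = innocent * error_rate      # innocent, flagged "guilty"
    missed = guilty * error_rate               # guilty, flagged "innocent"
    return true_arrests, false_arrests, missed

# Example 2: one-in-a-million error rate, one in 10,000 guilty.
t2, f2, m2 = arrest_stats(1e-6, 1e-4)   # ~100 true arrests, ~1 false arrest

# Example 3: 1% error rate, same one in 10,000 guilty.
t3, f3, m3 = arrest_stats(1e-2, 1e-4)   # ~99 true arrests, 9,999 false arrests
```

The ratio of false arrests to true arrests flips from roughly 1:100 to roughly 100:1 as the error rate moves from one in a million to one in a hundred, which is the whole argument in two function calls.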
Now, this is a simple example, and the NCIC database has far more complex data and tries to make more complex correlations. And I am assuming that the error rate for false positives is the same as the error rate for false negatives, and that there aren’t any data dependencies that complicate the analysis. But even with these complications, the problems are still the same. Because there are so few terrorists (for example) amongst the general population, an error-filled database is far more likely to identify innocent people as terrorists than it is to catch actual terrorists.
This kind of thing is already happening. There are 13 million people on the FBI’s terrorist watch list. That’s ridiculous; it’s simply inconceivable that a number of people equal to 4.5% of the population of the United States are terrorists. There are far more innocents on that list than there are guilty people not on that list. And these innocents are regularly harassed by police trying to do their job. In any case, a watch list with 13 million people on it is basically useless. How many resources can anyone afford to spend watching about one-twentieth of the population, anyway?
That 13-million-person list feels a whole lot like CYA on the part of the FBI. Adding someone to the list probably has no cost and, in fact, may be one criterion for how your performance is evaluated at the FBI. Removing someone from the list probably takes considerable courage, since someone is going to have to take the fall when “the warnings were ignored” and “they failed to connect the dots.” Best to leave that risky stuff to other people, and to keep innocent people on the list forever.
Many argue that this kind of thing is bad social policy. I argue that it is bad security as well.
What you can do: sign this petition online.
13 million people on terrorist watch list:
What happens to innocents on the government’s “no fly” list:
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of “Secrets and Lies” and “Applied Cryptography,” and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide.