November 15, 2003
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com.
Nathaniel Heatwole is a student at Guilford College. Several times between February 7 and September 15 he tested airline security. First he smuggled box cutters, clay simulating plastic explosives, and bleach simulating bomb-making chemicals through security. Then he hid these things in the lavatories of airplanes, along with notes. And finally, he sent an e-mail to the Transportation Security Administration (TSA) titled "Information Regarding 6 Recent Security Breaches."
The problem is that the TSA never asked him to test their security.
For years, computer networks have been plagued with hackers breaking into systems. These people are not breaking into systems for profit. They don't commit fraud. They don't commit theft. They're breaking into systems out of intellectual curiosity. They're breaking into systems for fun. They're breaking into systems to see if they can.
A traditional and common defense by hackers is that they're breaking into systems in order to test their security. The idea is that the only way to learn about computer and network security is to attack systems. Never mind that these hackers don't own the systems they're breaking into; that's the excuse.
The Department of Homeland Security and the Transportation Security Administration have been attacked by their first hacker. This wasn't a terrorist; he wasn't out to take over planes. This wasn't even a criminal; he didn't try to extort money. He was a hacker, plain and simple. He wanted to test the efficacy of the security screeners. He wanted to demonstrate that the security measures were, in his eyes, inadequate. He wanted to hack airport security.
Point 1: This is extraordinarily silly. Every traveler I know has stories of knives being missed by airport security. No one who flies regularly thinks that the TSA is doing a good job of keeping sharp objects off airplanes. Even worse, no one who flies regularly thinks that keeping sharp objects off airplanes makes us all safer. Most of what the TSA does is security theater -- window dressing. It keeps up appearances, and maybe (hopefully) makes the terrorists a little less sure they can smuggle their weapons aboard airplanes. Probably not.
Point 2: This is, and should be treated as, a crime. "I was only testing security" is not a valid defense. For years, we in the computer security field have been hearing that excuse. The argument goes that because the hacker didn't intend harm -- he just broke into the system and looked around -- it wasn't a real crime. Here's a thought experiment for you. Imagine you return home and find the following note attached to your refrigerator: "I was testing the security of back doors in the neighborhood and found yours unlocked. I just looked around. I didn't take anything. You should fix your lock." Do you feel violated? Of course you do.
Point 3: While it is a crime, it isn't a terribly serious crime. Heatwole's stunt was embarrassing, and cost a whole lot of money to investigate and clean up. It could have disrupted the travel schedules of lots of people. But he's not a terrorist. He didn't do this to feed security information to al Qaeda. His actions didn't endanger anyone's lives. There's a tendency to want to throw the book at him because he embarrassed important government officials, but that's not a good enough reason. We need to discourage this behavior, but the punishment needs to fit the crime. Treat Heatwole as a criminal, but not a serious criminal.
Welcome to our world, Department of Homeland Security. Welcome, TSA. We've been fighting these sorts of people for years. You're going to have better luck prosecuting them, but don't let your anger get in the way of reason.
A version of this essay appeared in IEEE Security & Privacy.
Another box cutter was found on an airplane. No one knows who planted this one.
We all know that the new airline security procedures are silly. Baggage screeners taking away pocket knives and box cutters doesn't improve airline security, even after 9/11.
People who think otherwise don't understand what allowed the terrorists to take over four planes two years ago. It wasn't a small knife. It wasn't a box cutter. The critical weapon that the terrorists had was surprise. With surprise they could have taken the planes over with their bare hands. Without surprise they couldn't have taken the planes over, even if they had guns.
And surprise has been confiscated on all flights since 9/11. It doesn't matter what weapons any potential new hijackers have; the passengers will no longer allow them to take over airplanes. I don't believe that airplane hijacking is a thing of the past, but when the next plane gets taken over it will be because a group of hijackers figured out a clever new weapon that we haven't thought of, and not because they snuck some small pointy objects through security.
Crypto-Gram is currently in its sixth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Why Digital Signatures are Not Signatures
Programming Satan's Computer: Why Computers Are Insecure
Elliptic Curve Public-Key Cryptography
The Future of Fraud: Three reasons why electronic commerce is different
Software Copy Protection: Why copy protection does not work
Paper: "How to Find Hidden Cameras" Really interesting reading.
Good essay on @Stake and the integrity of their actions in firing Dan Geer:
A California man's hacking conviction has been overturned on appeal. It's a win for the good guys, but it took far too long. And it's appalling that the conviction ever happened, considering that the man served over a year of jail time.
"Identity and Economics" presentation from DefCon:
A terrorism operations manual believed to be used by al Qaeda:
Turns out that many automobiles have master keys. Criminals are using them to steal cars.
Interesting article on casino security. (Be careful about believing details in articles like this. Most of these stories are planted by the casinos to convince the public how good security is at catching cheats. It's actually not nearly that good.)
Listening in on terrorist communications. The problem isn't data collection; it's data analysis:
The SANS Top 20 computer security vulnerabilities:
There seems to be no effect from California's security-breach disclosure law:
Excellent analysis of the security of Windows vs. Linux:
Turns out that many U.S. driver's license numbers aren't random at all, but contain embedded information about your name, etc. This is an interesting site about these numbers. Vital reading if you're planning on getting a fake ID.
Bruce Tognazzini on computer security interfaces:
Australia seems to be trying to do e-voting properly:
New risks to privacy. An outsourced medical transcription worker in Pakistan (through three levels of contractors) threatened to post confidential medical records on the net because she hadn't been paid.
Marcus Ranum has a new book: "The Myth of Homeland Security." Good companion volume to my own "Beyond Fear":
Interesting article on the mind of a suicide terrorist:
Someone deliberately inserted a back door into the Linux kernel. It was discovered and removed before release. This story shows both the security dangers and the benefits of open source software.
Robert Cringely on identity theft:
In a pretty clever PR move, Microsoft is offering a bounty for information leading to the arrest and conviction of malware writers.
Good article on cyberincident response planning:
Rebuttal to our monoculture paper. A good essay; these points are worth debating:
Fun with electronic voting. The MicroVote machines registered 144,000 votes from 19,000 registered voters. After much panicking and tracking down of the bug, the actual number of votes turned out to be 5,352. Or maybe not; you can't prove it one way or the other.
Bruce Schneier will speak at Comdex in Las Vegas this month. He's giving a talk about "Following the Money: Why Security Decisions are so Rarely About Security" on Tuesday, 18 November, at 2:00. He will also participate in panels on "How much security is enough?" (Mon. 11/17, 11:00) and "Where Hardware Security Meets Software Security: Weak Points and Real Attacks" (Tue. 11/18, 3:30).
Bruce Schneier will be interviewed on WGN Radio Chicago on 25 November 2003, from 9:00 PM - 11:00 PM. The show is called "Extension 720."
Counterpane has announced a partnership with Network Associates.
Counterpane will be exhibiting at the Inaugural European Forum On Cyber
My latest book continues to get great reviews. I'm especially pleased to see the book reviewed in non-computer publications like "The Economist." To those who have purchased the book already, thank you very much for helping make it a success. I hope that I am having some effect on the sorry state of security these days.
"[Schneier] is one of the world's leading experts on computer security, and arguably the most articulate.... Surprisingly entertaining, with many examples of security systems, both good and bad, drawn from the natural world, military history and other fields.... Beyond Fear deserves to be widely read."
The home page of this company says "lightyears beyond encryption." Actually, it's an anti-copying technology for music CDs. This technology is being used to protect the new CD by BMG soul artist Anthony Hamilton.
It's actually not worth fighting the pop-ups and the Flash and the annoying website to learn about how the system works or how you can purchase it. It turns out you can defeat this system by holding down the shift key when you insert a music CD into your computer. This disables autorun, so the SunnComm software never gets executed.
Unfortunately, SunnComm has some more tricks up its sleeve. They're suing John Halderman, the Princeton PhD student who first noticed this. That'll make the system secure again; of course it will.
Aaron Caffrey is a UK teenager accused of launching a distributed denial-of-service attack against an independent contractor for the Port of Houston, Texas. Last month he was acquitted on all charges in a UK court. Caffrey's defense was that while the attack did come from Caffrey's computer, it was the work of someone who had installed a Trojan horse program on the machine and altered his computer's log files.
I have read several opinions on this case. Some believe that the "Trojan defense" sets a dangerous precedent, and that computer criminals will claim it every time. I believe that it sets a very good precedent, and will force prosecutors to do more than show that a particular computer was involved in a crime.
The hardest part of computer security is the piece between the computer and the user. The hardest part of encryption is maintaining the security of the data when it's being entered into the keyboard and when it's being displayed on the screen. The hardest part of digital signatures is proving that the text signed is the same text that the user viewed. And the hardest part of computer forensics is knowing who is sitting in front of a particular computer at any time.
Just because a particular computer was involved in an attack doesn't mean that the computer's owner was involved. Maybe, as Aaron Caffrey alleged, the computer was being controlled by someone else. We know that many hackers control a series of computers in an attempt to disguise their tracks. Maybe, as is being alleged in another case, the computer was in a public space and someone else used it to commit the crime. Maybe the user was duped into pushing certain keys or clicking on certain mouse buttons, and had no idea what he was really doing.
Also in the U.K., two men accused of downloading child pornography convinced the court that a Trojan on their computer did it and not them.
This defense makes it harder for the prosecution, but that's not a bad thing. The barrier should be high to convict someone of a crime. If the prosecutor can prove that a particular computer was involved but can't prove that a particular person was involved, that sounds like insufficient evidence to convict. I want the prosecutor to be able to prove that the person committed the crime.
By allowing this defense we're permitting some guilty people to go free, but we're also protecting the innocent. I don't think society would be well-served by denying this defense and thus offering people a sure-fire way to frame someone for a computer crime.
From: Russell Nelson <nelson@crynwr.com>
> A New York detective was once asked whether pickpockets in
Do you mean this as evidence to bolster your point or to counter it? It seems to me that if he never arrested even one pickpocket in a tie, that would be very good evidence that pickpockets wearing ties escape arrest.
From: Troy Davis <troy@nack.net>
> A 19-year-old used a fake website to lure victims into downloading
Though online brokerage users are obvious targets, the grand finale is customers of more traditional banks who have intentionally enabled Internet access to one account for online trading. Enabling one account often automatically allows Internet transactions for all accounts held at the same bank, via the same single password.
As a result, full Internet access to all bank accounts is frighteningly common even among casual users. Does Joe Average need the ability to move $50,000 without a phone call, let alone a visit to the bank? Not only do I not need the ability, I don't want it.
Pros wouldn't bother with Trojan horses or the Internet. Select mid- or late-career professionals with substantial salaries, basic end-user technology experience, and verifiable factors associated with active Internet use (school-age kids, job requiring telecommuting).
Pick one of a dozen legitimate physical access methods: law firm cleaning crew, downtown condo maintenance staff. Retrieve the keystroke logger a few weeks later.
As you mention, the attacker would be out of the country or off the radar (open WAP), not only before anyone knew, but before the first transaction was initiated. The online equivalent of ATM transaction velocity limits -- three $300 withdrawals in one day and your card is denied -- is rarer than we'd like to believe.

From: Ton van der Putte <Ton.vanderPutte@atosorigin.com>
Last year in the June issue of CRYPTO-GRAM you made a reference to our article "Don't get your fingers burned". In the article we describe two methods to duplicate fingerprints. One method assumes co-operation (somebody "lends" his finger to make a duplicate), while in the other method a lifted latent fingerprint is duplicated by means of a photo/chemical process. With these dummy fingerprints we have been able to fool all fingerprint sensors we have tested in our lab and on exhibitions (about 20 different brands). I started with these experiments in the early nineties, so more than 10 years ago.
Last week we were invited by the BBC to come to London for an interview about duplicating fingerprints. The reason was that the British government intends to add biometrics to the new British identity card; one of the options is fingerprint biometrics. The programme, "Kenyon Confronts," aired on Wednesday, October 29th, and is (for a short period of time) available for online viewing at the BBC site.
Since my first experiments dated from ten years back, I decided to redo them. I knew it would be easier to duplicate fingerprints with all the materials and equipment available today, but the results amazed even me. To give you an idea: ten years ago, making a duplicate of a fingerprint with co-operation took me 2 to 3 hours, and for an optimum result I used materials used by dental technicians. Nowadays I use materials you can buy in a do-it-yourself shop, and the total material costs are about $10 (enough for about 20 dummy fingers).
The time it takes to make a perfect duplicate is about 15 minutes (with special material it can be reduced to less than 10 minutes). Making a duplicate of a lifted fingerprint took me several days in 1992, and I had to do a lot of experiments to find the right process. Now it takes me half an hour, and the material costs are $20 (also sufficient for about 20 duplicates); the only equipment you need is a digital camera and a UV lamp. Not only do I now make the duplicates in a fraction of the time, but the quality is also better.
The reason for writing you all this is the following. Although most fingerprint-sensor manufacturers still ignore the problem or claim to have solved it, some are willing to admit it, but argue that duplicating fingerprints is very difficult and expensive and can only be done by highly skilled professionals. In the first place, I don't think this is a very strong argument; in the second, I admit I am a professional, but today the average do-it-yourselfer can achieve perfect results with only limited means and skills.
So it is our opinion that, as long as the manufacturers of fingerprint equipment do not solve the live-detection problem (i.e., detect the difference between a live finger and a dummy), biometric fingerprint sensors should not be used in combination with identity cards, or in medium- to high-security applications. In fact, we believe that identity cards with fingerprint biometrics are actually weaker than cards without them. The following two examples illustrate this statement.
1. Suppose that, because of the fingerprint check, there is no longer visual identification by an official or a controller. When the fingerprint matches the template in the card, access is granted as long as the card is valid (not on the blacklist). In that case, someone whose own card is on the blacklist can buy a valid identity card with a matching dummy fingerprint (only 15 minutes' work) and still get access without anyone noticing.
2. Another example: suppose there still is visual identification, and the fingerprint is checked only in case of doubt -- the look-alike problem with identity cards. When the photo on the identity card and the person do not really match and the official asks for fingerprint verification, the positive result of the fingerprint scan will most likely prevail. That is, the "OK" from the technical fingerprint system will remove any (legitimate) doubt.
It is our opinion that especially the combination of identity cards and biometric fingerprint sensors results in risks of which not many people are aware.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.