April 15, 2007
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0704.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- Announcing: Second Annual Movie-Plot Threat Contest
- The U.S. Terrorist Database
- Bank Botches Two-Factor Authentication
- Schneier/BT Counterpane News
- U.S. Government Contractor Injects Malicious Software into Critical Military Computers
- The Doghouse: Brutuslib
- Comments from Readers
The first Movie-Plot Threat Contest asked you to invent a horrific and completely ridiculous, but plausible, terrorist plot. All the entries were worth reading, but Tom Grant won with his idea to crash an explosive-filled plane into the Grand Coulee Dam.
This year the contest is a little different. We all know that a good plot to blow up an airplane will cause the banning, or at least screening, of something innocuous. If you stop and think about it, it’s a stupid response. We screened for guns and bombs, so the terrorists used box cutters. We took away box cutters and small knives, so they hid explosives in their shoes. We started screening shoes, so they planned to use liquids. We now confiscate liquids (even though experts agree the plot was implausible)…and they’re going to do something else. We can’t win this game, so why are we playing?
Well, we are playing. And now you can, too. Your goal: invent a terrorist plot to hijack or blow up an airplane with a commonly carried item as a key component. The component should be so critical to the plot that the TSA will have no choice but to ban the item once the plot is uncovered. I want to see a plot horrific and ridiculous, but just plausible enough to take seriously.
Make the TSA ban wristwatches. Or laptop computers. Or polyester. Or zippers over three inches long. You get the idea.
Your entry will be judged on the common item that the TSA has no choice but to ban, as well as the cleverness of the plot. It has to be realistic; no science fiction, please. And the write-up is critical; last year, the best entries were the most entertaining to read.
As before, assume an attacker profile on the order of 9/11: 20 to 30 unskilled people, and about $500,000 with which to buy skills, equipment, etc.
Post your movie plots—and read other entries—on the blog post.
Judging will be by me, swayed by popular acclaim in the blog comments section. The prize will be an autographed copy of “Beyond Fear” (in both English and Japanese) and the adulation of your peers. And, if I can swing it—I couldn’t last year—a phone call with a real live movie producer.
Entries close at the end of the month: April 30.
The purpose of this contest is absurd humor, but I hope it also makes a point. Terrorism is a real threat, but we’re not any safer through security measures that require us to correctly guess what the terrorists are going to do next.
The implausibility of the liquids plot:
The U.S. government’s consolidated terrorist watch list is called the Terrorist Identities Datamart Environment (TIDE), and it’s huge. In 2003, there were 100,000 people on it; now there are 435,000. Of course there are problems:
“TIDE has also created concerns about secrecy, errors and privacy. The list marks the first time foreigners and U.S. citizens are combined in an intelligence database. The bar for inclusion is low, and once someone is on the list, it is virtually impossible to get off it. At any stage, the process can lead to ‘horror stories’ of mixed-up names and unconfirmed information, Travers acknowledged.”
Mostly the article tells you things you already know: the list is riddled with errors, and there’s no defined process for getting on or off the list. But the most surreal quote is at the end, from Rick Kopel, the center’s acting director:
“The center came in for ridicule last year when CBS’s ’60 Minutes’ noted that 14 of the 19 Sept. 11 hijackers were listed—five years after their deaths. Kopel defended the listings, saying that ‘we know for a fact that these people will use names that they believe we are not going to list because they’re out of circulation—either because they’re dead or incarcerated…. It’s not willy-nilly. Every name on the list, there’s a reason that it’s on there.'”
Get that? There’s someone who deliberately puts wrong names on the list because they think the terrorists might use aliases, and they want to catch them. Given that reasoning, wouldn’t you want to put the entire phone book on the list?
Time Magazine article (from last November) on why we get risk so wrong. Interesting stuff, and very similar to my essay on the psychology of security.
You can buy a fake U.S. Employment Authorization card for $200.
Here’s a real version. Notice how all the security features of the card are faked—very well.
Interesting article: Neil M. Richards & Daniel J. Solove, “Privacy’s Other Path: Recovering the Law of Confidentiality,” 96 Georgetown Law Journal, 2007.
“Terrorist bus drivers” is a movie-plot threat that taps into two of our fears: terrorists, and risks to our children.
Interesting story of a social-engineering diamond theft:
U.S. Patent Office spreads FUD about music downloads. “The report, which the patent office recently forwarded to the U.S. Department of Justice, states that peer-to-peer networks could manipulate sites so children violate copyright laws more frequently than adults. That could make children the target in most copyright lawsuits and, in turn, make those protecting their material appear antagonistic, according to the report.” What happened? Did someone in the entertainment industry bribe the PTO to write this?
Another great movie-plot threat: “The Personal Car Communicator (PCC) is your car key’s smart connection with your Volvo S80 applying the latest in two-way radio technology. When in range, you’ll always know the status of your car. Locked or unlocked. Alarm activated or not. If the alarm has been activated, the heartbeat sensor will also tell you if there is someone inside the car. The PCC also includes keyless entry and keyless drive.”
Ford press release explaining the technology:
The greater Manchester police want everyone to help them find terrorists:
This reminds me of TIPS, the ill-conceived U.S. program to have meter readers and the like—people who regularly enter people’s homes—report suspicious activity to the police. It’s just dumb; people will report each other because their food smells wrong, or they talk in a funny language. The system will be swamped with false alarms, which police will have to waste their time following up on. This sort of state-sponsored snitchery is something you’d expect out of the former East Germany or Soviet Union—not the UK.
For comparison’s sake, here’s a similar program that I actually liked.
Control your car from the Internet. Or, for more fun, hack into the system and control someone else’s car from the Internet.
Selling and reselling phone minutes: an interesting new variation of phone fraud:
A new threat I hadn’t thought of before: stealing data from disk drives in photocopiers:
Google has new privacy rules: it will store personal data for only two years, instead of forever. It’s a good change, but I’m not sure it’s enough. I’d really prefer it if Google deleted the information after a shorter time.
The ultimate movie-plot threat: killer asteroids. And there’s not enough money to track them.
Tom Kyte, Oracle database expert, relays a surreal story of a border crossing—and the incompetence of the border official—into the U.S. from Canada:
Interesting article about diplomatic immunity as a “get out of jail free” card in Germany.
“The Straight Dope” on diplomatic immunity:
American Express is patenting tracking people via RFID. I don’t know how serious AmEx is about this, but it certainly is a good illustration of the possibilities of the technology.
Dutch e-voting scandal:
Really good article about how we misplace the blame in personal identity thefts:
The British government issues 10,000 fraudulent passports in one year. These aren’t fake passports; they’re real ones mis-issued. They have RFID chips and any other anti-counterfeiting measure the British government includes. This is the kind of thing that demonstrates why attempts to make passports harder to forge are not the right way to spend security dollars.
This article is about Singapore’s vast data mining program. What’s troubling to me is that even though Congress pulled funding for the program, it was developed elsewhere and now may be sold back to the U.S.
Security measures in the UK’s new 20-pound note:
How to recover from identity theft: a U.S.-specific checklist:
Website of U.S. “right to privacy” law cases:
In the UK, parents are buying body armor for children. One type of risk we consistently overestimate is risks involving our children.
Great old article from The Onion: Al-Qaeda or Teens?
Interesting insights on teenagers and risk assessment, in an article on auto-asphyxiation:
The Royal Academy of Engineering (in the UK) has just published a report, “Dilemmas of Privacy and Surveillance: Challenges of Technological Change,” where they argue that security and privacy are not in opposition, and that we can have both if we’re sensible about it.
At least read the recommendations:
Mennonites are considering moving to a different state because they don’t want their photo taken for their driver’s licenses. Many (all?) states had religious exemptions to the photo requirement, but now fewer do. The most interesting paragraph to me is the last one, which says that Amish hunters in Pennsylvania are getting their non-Amish neighbors to buy guns for them, to get around a law requiring a photo-ID to purchase a gun.
This is an old article—from 2000—but the sidebar (at the end) describing how the electronics store Crazy Eddie committed massive financial fraud is fascinating.
Comment by Crazy Eddie’s former CFO:
Security-related April Fool’s jokes.
My favorite: Windows Transparency Information Disclosure
I don’t know if “Threat Alert” Jesus is specifically April Fool’s, but it’s pretty funny.
“2006 Operating System Vulnerability Study”: long, but interesting. At least read the closing.
A “red team” was able to sneak about 90% of simulated weapons through Denver airport security. I’m not sure which is more important—the news or the fact that no one is surprised.
“Papers, Please,” a great essay from 1990 by Bill Holm.
VBootkit bypasses Windows Vista’s code-signing mechanisms.
WEP (Wired Equivalent Privacy) was the original protocol for securing wireless networks. It’s known to be insecure and has been superseded by Wi-Fi Protected Access (WPA), but it’s still widely used. This paper shows how to break 104-bit WEP in less than 60 seconds, the best attack to date:
More info and a proof-of-concept implementation:
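The attack itself is in the paper; as background, here’s a minimal sketch (in Python, purely illustrative) of the construction it exploits. WEP seeds RC4 with a 3-byte per-packet IV prepended to the long-term key, and since the IV is transmitted in the clear, every frame hands an attacker the first bytes of a related-key RC4 keystream; with only 2^24 possible IVs, keystream reuse is guaranteed on a busy network.

```python
def rc4(key: bytes, length: int) -> bytes:
    """Generate `length` bytes of RC4 keystream for `key`."""
    # Key-scheduling algorithm (KSA)
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA)
    out = bytearray()
    i = j = 0
    for _ in range(length):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

def wep_keystream(iv: bytes, root_key: bytes, length: int) -> bytes:
    # WEP's per-packet RC4 key is simply IV || root key. The 3-byte IV is
    # sent in the clear, so all frames use closely related RC4 keys, and
    # two frames sharing an IV are encrypted with the identical keystream.
    return rc4(iv + root_key, length)
```

This related-key structure, combined with known plaintext from standard frame headers, is what the statistical key-recovery attacks build on; WPA avoids it by deriving a fresh, properly mixed key for every packet.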
Random-number humor from Dilbert:
And 17 is the most random number between 1 and 20.
This paper, from February’s “International Journal of Health Geographics,” analyzes the consequences of a nuclear attack on several American cities and points out that burn unit capacity nationwide is far too small to accommodate the victims. It also says that simply training people to evacuate crosswind, perpendicular to the wind direction, could greatly reduce deaths from fallout.
I’ve long said that emergency response is something we should be spending money on. This kind of analysis is both interesting and helpful.
Dept of Homeland Security wants DNSSEC keys:
The UK police are considering mandating the quality of commercial CCTV cameras to ensure that the images meet their evidence standards.
By law, every business has to check its customers against a list of “specially designated nationals,” and not do business with anyone on that list. Of course, the list is riddled with bad names and many innocents get caught up in the net. And many businesses decide that it’s easier to turn away potential customers whose name is on the list, creating—well—a shunned class. This is the same problem as the no-fly list, only in a larger context. And it’s no way to combat terrorism. Thankfully, many businesses don’t know to check this list, and people whose names are similar to suspected terrorists’ can still lead mostly normal lives. But the trend here is not good.
The Specially Designated Nationals (SDN) list:
The Onion is reporting on this as well:
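Screening against a name list is, in practice, fuzzy string matching, which is exactly where the false positives come from. Here’s a minimal sketch in Python; the watchlist entries and the similarity threshold are invented for illustration:

```python
import difflib

# Hypothetical watchlist entries, for illustration only.
WATCHLIST = ["HUSSEIN, SADDAM", "ABDUL RAHMAN, SHAYKH"]

def screen(customer: str, threshold: float = 0.8) -> list[str]:
    """Return watchlist entries whose similarity to `customer` meets `threshold`."""
    # Loose matching is deliberate (to catch spelling variants and aliases),
    # but it is also what sweeps up innocent people with merely similar names.
    name = customer.upper()
    return [entry for entry in WATCHLIST
            if difflib.SequenceMatcher(None, name, entry).ratio() >= threshold]
```

An innocent customer named “Hussein, Sadam” gets flagged against the first entry, and, as noted above, the business’s cheapest response is simply to turn him away.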
Interesting analysis that concludes that there aren’t many serious spammers out there:
German police want the right to hack computers:
Childhood safety vs. childhood health—another example of how we get the risks wrong:
Great blog comment:
Responses to many of the blog comments on the WEP paper, by one of its co-authors:
From their press release: “The computer was protected by two layers of security, a unique user-identifier and a multiple-character, alpha-numeric password.”
Um, hello? Having a username and a password—even if they’re both secret—does not count as two factors, two layers, or two of anything. You need to have two *different* authentication systems: a password and a biometric, a password and a token.
I wouldn’t trust the New Horizons Community Credit Union with my money.
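For contrast, here’s a minimal sketch of what a genuine second factor looks like: a one-time code from a hardware token, computed with the HOTP algorithm of RFC 4226, checked in addition to the password. The toy password hashing and function names are illustrative only; everything here is Python standard library.

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian 8-byte counter."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_login(stored_hash: bytes, salt: bytes, password: str,
                 token_secret: bytes, counter: int, submitted_code: str) -> bool:
    # Factor 1: something you know. (Use a real password hash in practice,
    # not a bare salted SHA-256 as in this sketch.)
    knows = hashlib.sha256(salt + password.encode()).digest() == stored_hash
    # Factor 2: something you have -- the token holding the HOTP secret.
    has = hmac.compare_digest(hotp(token_secret, counter), submitted_code)
    return knows and has
```

Stealing the password database alone no longer suffices; the attacker also needs the physical token (or its secret), which is the whole point of a second factor.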
Last month I accepted my EFF Pioneer Award. There is audio and video of my speech.
I am very pleased to receive the award, and am simply stunned by this quote from Cory Doctorow: “Technology could never achieve what the fundamental values of a democratic society can attain. We can change the world with the power of ideas. I defy you to read Bruce’s incredible essays and not have it change the way you think about the world.”
Bruce Schneier T-shirts (I had nothing to do with any of this):
Chapter 57 of “my surreal life.”
RU Sirius interviewed me for his podcast show.
eWeek named me #40 in the “Top 100 Most Influential People in IT.”
IT.com named me as one of the “59 Top Influencers in IT Security.”
News articles on Schneier from India:
Wired published an excerpt from my “Psychology of Security” essay:
Schneier is speaking at the Electronic Transactions Association Annual Meeting and Conference in Las Vegas on April 19.
Schneier is the tech guest of honor at Penguicon, in Troy, MI, April 20-22.
Schneier is speaking at InfoSec Europe, in London, on April 25.
Schneier is speaking at the Computers, Freedom, and Privacy conference in Montreal on May 4.
Schneier is speaking for the ACLU in Iowa City, IA, on May 5.
This is just a frightening story. Basically, a contractor with a top secret security clearance was able to inject malicious code and sabotage computers used to track Navy submarines.
Yeah, it was annoying to find and fix the problem, but hang on. How is it possible for a single disgruntled idiot to damage a multi-billion-dollar weapons system? Why aren’t there any security systems in place to prevent this? I’ll bet anything that there was absolutely no control or review over who put what code in where. I’ll bet that if this guy had been just a little bit cleverer, he could have done a whole lot more damage without ever getting caught.
One of the ways to deal with the problem of trusted individuals is by making sure they’re trustworthy. The clearance process is supposed to handle that. But given the enormous damage that a single person can do here, it makes a lot of sense to add a second security mechanism: limiting the degree to which each individual must be trusted. A decent system of code reviews, or change auditing, would go a long way to reduce the risk of this sort of thing.
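Even a rudimentary form of change auditing would do: hash every file in the deployed code tree into a manifest, and compare it against a baseline recorded at review time. A minimal sketch (the function names are mine):

```python
import hashlib
import os

def manifest(root: str) -> dict[str, str]:
    """Map every file under `root` to the SHA-256 digest of its contents."""
    digests = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                digests[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return digests

def audit(baseline: dict[str, str], current: dict[str, str]) -> list[str]:
    """List files added, removed, or modified since the reviewed baseline."""
    return sorted(p for p in set(baseline) | set(current)
                  if baseline.get(p) != current.get(p))
```

Sign the baseline manifest and store it out of reach of the people who can touch the code, and a lone insider can no longer slip in a change without tripping the audit.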
I’ll also bet you anything that Microsoft has more security around its critical code than the U.S. military does.
It’s 2007, and I can’t believe people are still using homebrewed encryption algorithms. This one looks pretty easy to break.
Last month, Marine General James Cartwright told the House Armed Services Committee that the best cyber defense is a good offense.
As reported in “Federal Computer Week,” Cartwright said: “History teaches us that a purely defensive posture poses significant risks,” and that if “we apply the principle of warfare to the cyberdomain, as we do to sea, air and land, we realize the defense of the nation is better served by capabilities enabling us to take the fight to our adversaries, when necessary, to deter actions detrimental to our interests.”
The general isn’t alone. In 2003, the entertainment industry tried to get a law passed giving them the right to attack any computer suspected of distributing copyrighted material. And there probably isn’t a sysadmin in the world who doesn’t want to strike back at computers that are blindly and repeatedly attacking their networks.
Of course, the general is correct. But his reasoning illustrates perfectly why peacetime and wartime are different, and why generals don’t make good police chiefs.
A cyber-security policy that condones both active deterrence and retaliation—without any judicial determination of wrongdoing—is attractive, but it’s wrongheaded, not least because it ignores the line between war, where those involved are permitted to determine when counterattack is required, and crime, where only impartial third parties (judges and juries) can impose punishment.
In warfare, the notion of counterattack is extremely powerful. Going after the enemy—its positions, its supply lines, its factories, its infrastructure—is an age-old military tactic. But in peacetime, we call it revenge, and consider it dangerous. Anyone accused of a crime deserves a fair trial. The accused has the right to defend himself, to face his accuser, to be represented by an attorney, and to be presumed innocent until proven guilty.
Both vigilante counterattacks and pre-emptive attacks fly in the face of these rights. They punish people who haven’t been found guilty. It’s the same whether it’s an angry lynch mob stringing up a suspect, the MPAA disabling the computer of someone it believes made an illegal copy of a movie, or a corporate security officer launching a denial-of-service attack against someone he believes is targeting his company over the net.
In all of these cases, the attacker could be wrong. This has been true for lynch mobs, and on the Internet it’s even harder to know who’s attacking you. Just because my computer looks like the source of an attack doesn’t mean it is. And even if it is, it might be a zombie controlled by yet another computer; I might be a victim, too. The goal of a government’s legal system is justice; the goal of a vigilante is expediency.
I understand the frustrations of General Cartwright, just as I do the frustrations of the entertainment industry, and the world’s sysadmins. Justice in cyberspace can be difficult. It can be hard to figure out who is attacking you, and it can take a long time to make them stop. It can be even harder to prove anything in court. The international nature of many attacks exacerbates the problems; more and more cybercriminals are jurisdiction shopping: attacking from countries with ineffective computer crime laws, easily bribable police forces, and no extradition treaties.
Revenge is appealingly straightforward, and treating the whole thing as a military problem is easier than working within the legal system.
But that doesn’t make it right. In 1789, the Declaration of the Rights of Man and of the Citizen declared: “No person shall be accused, arrested, or imprisoned except in the cases and according to the forms prescribed by law. Any one soliciting, transmitting, executing, or causing to be executed any arbitrary order shall be punished.”
I’m glad General Cartwright thinks about offensive cyberwar; it’s how generals are supposed to think. I even agree with Richard Clarke’s threat of military-style reaction in the event of a cyber-attack by a foreign country or a terrorist organization. But short of an act of war, we’re far safer with a legal system that respects our rights.
This essay originally appeared in Wired.
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2007 by Bruce Schneier.