September 15, 2007
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
You can read this issue on the web at <http://www.schneier.com/crypto-gram-0709.html>. These same essays appear in the “Schneier on Security” blog: <http://www.schneier.com/>. An RSS feed is available.
In this issue:
- First Responders
- Basketball Referees and Single Points of Failure
- Interview with National Intelligence Director Mike McConnell
- Home Users: A Public Health Problem?
- Vague Threat Prompts Overreaction
- Stupidest Terrorist Overreaction?
- Wholesale Automobile Surveillance Comes to New York City
- Schneier/BT Counterpane News
- U.S. Government Threatens Retaliation Against States That Reject REAL ID
- Computer Forensics Case Study
- Getting Free Food at a Fast-Food Drive-In
- Comments from Readers
I live in Minneapolis, so the collapse of the Interstate 35W bridge over the Mississippi River earlier this month hit close to home, and was covered in both my local and national news.
Much of the initial coverage consisted of human interest stories, centered on the victims of the disaster and the incredible bravery shown by first responders: the policemen, firefighters, EMTs, divers, National Guard soldiers, and even ordinary people, who all risked their lives to save others. (Just two weeks later, three rescue workers died in their almost-certainly-futile attempt to save six miners in Utah.)
Perhaps the most amazing aspect of these stories is that there’s nothing particularly amazing about them. No matter what the disaster—hurricane, earthquake, terrorist attack—the nation’s first responders get to the scene soon after.
Which is why it’s such a crime when these people can’t communicate with each other.
Historically, police departments, fire departments and EMTs have all had their own independent communications equipment, so when there’s a disaster that involves them all, they can’t communicate with each other. A 1996 government report said this about the *first* World Trade Center bombing in 1993: “Rescuing victims of the World Trade Center bombing, who were caught between floors, was hindered when police officers could not communicate with firefighters on the very next floor.”
And we all know that police and firefighters had the same problem on 9/11. You can read details in firefighter Dennis Smith’s book and 9/11 Commission testimony. The “9/11 Commission Report” discusses this as well: Chapter 9 talks about the first responders’ communications problems, and commission recommendations for improving emergency-response communications are included in Chapter 12 (pp. 396-397).
In some cities, this communication gap is beginning to close. Homeland Security money has flowed into communities around the country. And while some wasted it on measures like cameras, armed robots, and things having nothing to do with terrorism, others spent it on interoperable communications capabilities. Minnesota did that in 2004.
It worked. Hennepin County Sheriff Rich Stanek told the St. Paul Pioneer-Press that lives were saved by disaster planning that had been fine-tuned and improved with lessons learned from 9/11:
“‘We have a unified command system now where everyone—police, fire, the sheriff’s office, doctors, coroners, local and state and federal officials—operate under one voice,’ said Stanek, who is in charge of water recovery efforts at the collapse site.
“‘We all operate now under the 800 (megahertz radio frequency system), which was the biggest criticism after 9/11,’ Stanek said, ‘and to have 50 to 60 different agencies able to speak to each other was just fantastic.’”
Others weren’t so lucky. Louisiana’s first responders had catastrophic communications problems in 2005, after Hurricane Katrina. According to National Defense Magazine: “Police could not talk to firefighters and emergency medical teams. Helicopter and boat rescuers had to wave signs and follow one another to survivors. Sometimes, police and other first responders were out of touch with comrades a few blocks away. National Guard relay runners scurried about with scribbled messages as they did during the Civil War.”
A congressional report on preparedness and response to Katrina said much the same thing.
In 2004, the U.S. Conference of Mayors issued a report on communications interoperability. In 25% of the 192 cities surveyed, the police couldn’t communicate with the fire department. In 80% of cities, municipal authorities couldn’t communicate with the FBI, FEMA, and other federal agencies.
The source of the problem is a basic economic one, called the “collective action problem.” A collective action is one that needs the coordinated effort of several entities in order to succeed. The problem arises when each individual entity’s needs diverge from the collective needs, and there is no mechanism to ensure that those individual needs are sacrificed in favor of the collective need.
Jerry Brito of George Mason University shows how this applies to first-responder communications. Each of the nation’s 50,000 or so emergency-response organizations—local police department, local fire department, etc.—buys its own communications equipment. As you’d expect, they buy equipment as closely suited to their needs as they can. Ensuring interoperability with other organizations’ equipment benefits the common good, but sacrificing their unique needs for that compatibility may not be in the best immediate interest of any of those organizations. There’s no central directive to ensure interoperability, so there ends up being none.
This is an area where the federal government can step in and do good. Too much of the money spent on terrorism defense has been overly specific: effective only if the terrorists attack a particular target or use a particular tactic. Money spent on emergency response is different: It’s effective regardless of what the terrorists plan, and it’s also effective in the wake of natural or infrastructure disasters.
No particular disaster, whether intentional or accidental, is common enough to justify spending a lot of money on preparedness for a specific emergency. But spending money on preparedness in general will pay off again and again.
This essay originally appeared on Wired.com.
In comments, people pointed out that training and a lack of desire to communicate are bigger problems than technical issues. This is certainly true. Just giving first responders interoperable radios won’t automatically solve the problem; they need to want to talk to other groups as well.
Minneapolis rescue workers:
Utah rescue-worker deaths:
9/11 Commission Report:
Wasted security measures:
Minnesota and interoperable communications:
Sports referees are supposed to be fair and impartial. They’re not supposed to favor one team over another. And they’re most certainly not supposed to have a financial interest in the outcome of a game.
Tim Donaghy, a referee for the National Basketball Association, has been accused of both betting on basketball games and fixing games for the mob. He has confessed to far less—gambling in general, and selling inside information on players, referees, and coaches to a big-time professional gambler named James “Sheep” Battista. But the investigation continues, and the whole scandal is an enormous black eye for the sport. Fans like to think that the game is fair and that the winning team really is the winning team.
The details of the story are fascinating and well worth reading. But what interests me more are its general lessons about risk and audit.
What sorts of systems—IT, financial, NBA games, or whatever—are most at risk of being manipulated? The ones where the smallest change can have the greatest impact, and the ones where trusted insiders can make that change.
Of all major sports, basketball is the most vulnerable to manipulation. There are only five players on the court per team, fewer than in other professional team sports; thus, a single player can have a much greater effect on a basketball game than he can in the other sports. Star players like Michael Jordan, Kobe Bryant and LeBron James can carry an entire team on their shoulders. Even baseball great Alex Rodriguez can’t do that.
Because individual players matter so much, a single referee can affect a basketball game more than he can in any other sport. Referees call fouls, and contact occurs on nearly every play, any of which could be called as a foul. Most of these marginal calls, known as “touch fouls,” are ignored, but not always. The refs get to decide which ones to call.
Even more drastically, a ref can put a star player in foul trouble immediately—and cause the coach to bench him for long stretches of the game—if he wants the other side to win. He can set the pace of the game, low-scoring or high-scoring, based on how he calls fouls. He can decide to invalidate a basket by calling an offensive foul on the play, or give a team the potential for some extra points by calling a defensive foul. There’s no formal instant replay. There’s no second opinion. A ref’s word is law—there are only three of them—and a crooked ref has enormous power to control the game.
It’s not just that basketball referees are single points of failure, it’s that they’re both trusted insiders and single points of catastrophic failure.
These sorts of vulnerabilities exist in many systems. Consider what a terrorist-sympathizing Transportation Security Administration screener could do to airport security. Or what a criminal CFO could embezzle. Or what a dishonest computer repair technician could do to your computer or network. The same goes for a corrupt judge, police officer, customs inspector, border-control officer, food-safety inspector, and so on.
The best way to catch corrupt trusted insiders is through audit. The particular components of a system that have the greatest influence on the performance of that system need to be monitored and audited, even if the probability of compromise is low. It’s after the fact, but if the likelihood of detection is high and the penalties (fines, jail time, public disgrace) are severe, it’s a pretty strong deterrent. Of course, the counterattack is to target the auditing system. Hackers routinely try to erase audit logs that contain evidence of their intrusions.
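Auditing itself, then, has to be hardened against the auditors’ adversaries. One standard countermeasure—sketched here purely as an illustration, not anything the NBA or any particular product actually does—is to make the audit log tamper-evident by chaining each entry to a hash of the entire history before it, so erasing or editing an entry breaks verification of everything that follows:

```python
import hashlib

# Minimal hash-chained audit log. Each entry's digest commits to the
# previous digest, so the log only verifies if it is intact end-to-end.
def append(log, entry):
    prev = log[-1][1] if log else "0" * 64
    digest = hashlib.sha256((prev + entry).encode()).hexdigest()
    log.append((entry, digest))

def verify(log):
    prev = "0" * 64
    for entry, digest in log:
        if hashlib.sha256((prev + entry).encode()).hexdigest() != digest:
            return False
        prev = digest
    return True

log = []
for event in ["login: alice", "trade: 500 shares", "logout: alice"]:
    append(log, event)
print(verify(log))  # True
del log[1]          # an intruder erases the incriminating entry
print(verify(log))  # False: the gap is detectable
```

An attacker who fully controls the machine can still rewrite the whole chain, which is why systems that rely on this trick also ship their digests to a separate, write-once location.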
Even so, audit is the reason we want open source code reviews and verifiable paper trails in voting machines; otherwise, a single crooked programmer could single-handedly change an election. It’s also why the Securities and Exchange Commission closely monitors trades by brokers: They are in an ideal position to get away with insider trading. The NBA claims it monitors referees for patterns that might indicate abuse; there’s still no answer to why it didn’t detect Donaghy.
Most companies focus the bulk of their IT-security monitoring on external threats, but they should be paying more attention to internal threats. While a company may inherently trust its employees, those trusted employees have far greater power to affect corporate systems and are often single points of failure. And trusted employees can also be compromised by external elements, as Tim Donaghy was by Battista and possibly the Mafia.
All systems have trusted insiders. All systems have catastrophic points of failure. The key is recognizing them, and building monitoring and audit systems to secure them.
This is my 50th essay for Wired.com.
Mike McConnell, U.S. National Intelligence Director, gave an interesting interview to the El Paso Times.
I don’t think he’s ever been so candid before. For example, he admitted that the nation’s telcos assisted the NSA in their massive eavesdropping efforts. We already knew this, of course, but the government has steadfastly maintained that either confirming or denying this would compromise national security.
There are, of course, moments of surreality. He said that it takes 200 hours to prepare a FISA warrant. Ryan Singel calculated that since there were 2,167 such warrants in 2006, there must be “218 government employees with top secret clearances sitting in rooms, writing only FISA warrants.” Seems unlikely.
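Singel’s arithmetic is easy to reproduce. The figure below assumes a round 2,000-hour work year (my assumption, which lands within a percent or two of his number):

```python
# Back-of-the-envelope check of the FISA-warrant staffing claim.
warrants_2006 = 2167        # FISA warrants issued in 2006
hours_per_warrant = 200     # McConnell's preparation estimate
work_year_hours = 2000      # assumed full-time hours per year

total_hours = warrants_2006 * hours_per_warrant   # 433,400 hours
staff_needed = total_hours / work_year_hours
print(staff_needed)  # ~217 full-time employees, close to Singel's 218
```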
But most notable is this bit:
“Q. So you’re saying that the reporting and the debate in Congress means that some Americans are going to die?
“A. That’s what I mean. Because we have made it so public. We used to do these things very differently, but for whatever reason, you know, it’s a democratic process and sunshine’s a good thing. We need to have the debate.”
Ah, the politics of fear. I don’t care if it’s the terrorists or the politicians: refuse to be terrorized.
To the average home user, security is an intractable problem. Microsoft has made great strides improving the security of their operating system “out of the box,” but there is still a dizzying array of rules, options, and choices that users have to make. How should they configure their anti-virus program? What sort of backup regime should they employ? What are the best settings for their wireless network? And so on and so on and so on.
How is it possible that we in the computer industry have created such a shoddy product? How have we foisted on people a product that is so difficult to use securely, that requires so many add-on products?
It’s even worse than that. We have sold the average computer user a bill of goods. In our race for an ever-increasing market, we have convinced every person that he needs a computer. We have provided application after application—IM, peer-to-peer file sharing, eBay, Facebook—to make computers both useful and enjoyable to the home user. At the same time, we’ve made them so hard to maintain that only a trained sysadmin can do it.
And then we wonder why home users have such problems with their buggy systems, why they can’t seem to do even the simplest administrative tasks, and why their computers aren’t secure. They’re not secure because home users don’t know how to secure them.
At work, I have an entire IT department I can call on if I have a problem. They filter my net connection so that I don’t see spam, and most attacks are blocked before they even get to my computer. They tell me which updates to install on my system and when. And they’re available to help me recover if something untoward does happen to my system. Home users have none of this support. They’re on their own.
This problem isn’t simply going to go away as computers get smarter and users get savvier. The next generation of computers will be vulnerable to all sorts of different attacks, and the next generation of attack tools will fool users in all sorts of different ways. The security arms race isn’t going away any time soon, but it will be fought with ever more complex weapons.
This isn’t simply an academic problem; it’s a public health problem. In the hyper-connected world of the Internet, everyone’s security depends in part on everyone else’s. As long as there are insecure computers out there, hackers will use them to eavesdrop on network traffic, send spam, and attack other computers. We are all more secure if all those home computers attached to the Internet via DSL or cable modems are protected against attack. The only question is: what’s the best way to get there?
I wonder about those who say “educate the users.” Have they tried? Have they ever met an actual user? It’s unrealistic to expect home users to be responsible for their own security. They don’t have the expertise, and they’re not going to learn. And it’s not just user actions we need to worry about; these computers are insecure right out of the box.
The only possible way to solve this problem is to force the ISPs to become IT departments. There’s no reason why they can’t provide home users with the same level of support my IT department provides me with. There’s no reason why they can’t provide “clean pipe” service to the home. Yes, it will cost home users more. Yes, it will require changes in the law to make this mandatory. But what’s the alternative?
In 1991, Walter S. Mossberg debuted his “Personal Technology” column in The Wall Street Journal with the words: “Personal computers are just too hard to use, and it isn’t your fault.” Sixteen years later, the statement is still true—and doubly true when it comes to computer security.
If we want home users to be secure, we need to design computers and networks that are secure out of the box, without any work by the end users. There simply isn’t any other way.
This essay is the first half of a point/counterpoint with Marcus Ranum in the September issue of “Information Security.” You can read his reply here: http://www.ranum.com/security/computer_security/…
Some spam filters rejected the August issue of Crypto-Gram. If you didn’t receive it in e-mail, you can read the issue here:
A very techie forensic analysis of how a Linux server gets turned into a zombie:
On the ineffectiveness of security cameras in San Francisco public housing developments:
Pig Latin: code talking for the dumb:
In Ohio, you can—by law—get a list of voters in the order they voted, and a time-stamped list of actual votes. Put those two lists together, and you know who voted for whom.
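The attack is nothing more than a positional join of two individually innocuous public records. A toy sketch (all names and votes invented):

```python
# Two lists Ohio law makes public, each harmless on its own:
voters_in_order = ["Alice", "Bob", "Carol"]       # who voted, in order
votes_in_order = ["Issue 1: Yes",                 # time-stamped votes,
                  "Issue 1: No",                  # in the same order
                  "Issue 1: Yes"]

# Joining them by position deanonymizes every ballot:
ballot_by_voter = dict(zip(voters_in_order, votes_in_order))
print(ballot_by_voter["Bob"])  # Issue 1: No
```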
Security furniture: a “safe” bedside table:
Taser—yep, that’s the company’s name as well as the product’s name—is now selling a personal-use version of their product. It’s called the Taser C2, and it has an interesting embedded identification technology. Whenever the weapon is fired, it also sprays some serial-number bar-coded confetti, so a firing can be traced to a weapon and—presumably—the owner.
Another article about risk perception, and why we worry about the wrong things.
And a great graphic:
You won’t identify individual users, but you can test for the prevalence of drug use in a community by testing the sewage water. Presumably, if you take the sample far enough up the pipe, you can test groups of houses or even individual houses.
Here’s information on drug numbers in the Rhine. They estimated that, for a population of 38.5 million feeding wastewater into the Rhine down to Dusseldorf, cocaine use amounts to 11 metric tonnes per year. Street value: 1.64 billion Euros.
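Those two numbers imply a street price and a per-capita figure that are easy to sanity-check:

```python
# Implied figures from the Rhine wastewater study's numbers.
cocaine_tonnes = 11
street_value_eur = 1.64e9
population = 38.5e6

grams = cocaine_tonnes * 1_000_000          # 1 tonne = 1,000,000 g
print(round(street_value_eur / grams))      # ~149 euros per gram
print(round(grams * 1000 / population))     # ~286 mg per person per year
```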
This padlocked USB drive is a clever idea. Only five buttons, a maximum of ten digits for the PIN, and almost certainly a gazillion ways to get around the padlock function once you pry the case open—but definitely on the right track.
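Even before anyone pries the case open, the PIN space is modest. With five buttons and PINs of one to ten digits, the total keyspace works out to about twelve million combinations, small by cryptographic standards:

```python
# Keyspace of a 5-button lock accepting PINs of length 1 through 10.
buttons = 5
keyspace = sum(buttons ** length for length in range(1, 11))
print(keyspace)  # 12207030 -- about 12 million possible PINs
```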
Fusion centers are state-run, with funding help from the Department of Homeland Security. It’s all sort of ad hoc, but their purpose is to “fuse” federal, state, and local intelligence against terrorism. But—no surprise—they’re not doing much actual fusion, and they’re more commonly used for other purposes.
There has been much written about the new German hacker-tool law, which went into effect in August. Basically, the law is so flawed and so broad that no one can really comply with it, whether security researchers or ordinary software companies. If your software is used in a crime, you could be arrested.
Thieves stole a drug-sniffing dog in Mexico. I thought this was a clever attack by a drug lord, but then the dog was found in a park tied to a tree—so I don’t know what’s going on.
This is a must-read article about DCSNet (Digital Collection System Network), the FBI’s high-tech point-and-click domestic wiretapping network. The information is based on nearly 1,000 pages of documentation released under FOIA to the EFF.
Entering passwords through eye movement:
Australian porn filter cracked; the headline is all you need to know: “Teen cracks AU$84 million porn filter in 30 minutes.” (AU$84 million is $69.5 million U.S.; that’s real money.) Remember that the issue isn’t that one smart kid can circumvent the censorship software, it’s that one smart kid—maybe this one, maybe another one—can write a piece of shareware that allows *everyone* to circumvent the censorship software. It’s the same with DRM; technical measures just aren’t going to work.
Trends in physical security. Weird:
Uni-ball is using fear of check washing to sell pens. I admit that it’s a problem, but I don’t like the fear-mongering in the advertisement.
Do-it-yourself laser spy microphone:
Pentagon hacked by Chinese military. At least, that’s the story. Honestly, I don’t know what’s really going on.
NASA employees sue over invasive background checks:
http://hspd12jpl.org/ (Check out the “Forum” if you’re really interested.)
“Cyber crime toolkits” hit the news:
In one sense, there’s nothing new here. There have been rootkits and virus construction kits available on the Internet for years. The very definition of a “script kiddie” is someone who uses these tools without really understanding them. What is new is the market: these new tools aren’t for wannabe hackers, they’re for criminals. And with the new market comes a for-profit business model.
Police to monitor Indian cyber-cafes under the guise of preventing terrorism:
Terrorist plot foiled in Germany:
The more I read about this, the more obvious it is that intelligence and investigation are what caught these guys, not any wholesale eavesdropping or data-mining programs.
Cows get photo IDs in India:
I had been thinking about writing about the massive distributed-denial-of-service attack against the Estonian government last April. It’s been called the first cyberwar, although it is unclear that the Russian government was behind the attacks. And while I’ve written about cyberwar in general, I haven’t really addressed the Estonian attacks. Now I don’t have to. Kevin Poulsen has written an excellent article on both the reality and the hype surrounding the attacks on Estonia’s networks, commenting on a story in “Wired” magazine.
The APEC conference was a big deal in Australia, and the security was serious. They blocked off a major part of Sydney, implemented special APEC laws allowing extra search powers for the police, and even gave everyone in Sydney the day off—just to keep people away. But the Chasers, a TV comedy team, succeeded in driving a fake motorcade with Canadian flags right through all the security barriers and weren’t stopped until right outside President Bush’s hotel. Inside their motorcade was someone dressed up as Osama Bin Laden.
Stupid APEC security:
Great video from The Chasers on APEC and security, including some very funny footage about what normal people are willing to do and have done to them in the name of security.
Federal judge strikes down National-Security-Letter provision of Patriot Act. He immediately stayed his decision, pending appeal.
The no-fly list catches an actual terrorist! Well, maybe. Gerry Adams is stopped at the border:
Cory Doctorow has been writing a biweekly column for “The Guardian” on DRM and the entertainment industry. He’s written three so far, and they’re all here.
Lousy electronic-stamp security in Germany:
A 1621 cryptography book was up for auction a couple of days ago:
Interesting commentary on the relationship between lights and crime:
Four-year-old girl asked to remove her hoodie for vague “security” reasons:
The New England Patriots, one of the two or three best teams in the last five years, have been accused of stealing signals from the other team with a video camera.
I remember when the NFL changed the rules to allow a radio link from the quarterback’s helmet to the sidelines. A smart team could not only eavesdrop on the other team, but selectively jam the signal when it would be most critical. The rules said that if one team’s radio link didn’t work, the other team had to turn its radio off, but that’s a minor consideration if you know it’s coming.
The KeeLoq electronic car-door entry system has been successfully cryptanalyzed:
New research shows that the Chinese national firewall isn’t that effective:
New security cartoon site:
“Say No to Nightmares”: An original song by Tay Zonday:
It reads like a hoax: “The Police Department set up checkpoints yesterday in Lower Manhattan and increased security after learning of a vague threat of a radiological attack here.”
And: “The police learned about the threat through an item on the Web site debka.com—a site that Mr. Browne said was believed to have Israeli intelligence and military sources—that said that Qaeda operatives were planning to detonate a truck filled with radiological material in New York, Los Angeles or Miami. Officials say the Web site carries reports that are often wrong, but occasionally right.”
Occasionally right? Which U.S. terrorist attack did it predict?
Come on, people: refuse to be terrorized.
Is this the stupidest terrorism overreaction yet? “Two people who sprinkled flour in a parking lot to mark a trail for their offbeat running club inadvertently caused a bioterrorism scare and now face a felony charge.”
The competition is fierce, but I think we have a winner.
What bothers me most about the news coverage is that there isn’t even a suggestion that the authorities’ response might have been out of line.
“Mayoral spokeswoman Jessica Mayorga said the city plans to seek restitution from the Salchows, who are due in court Sept. 14.
“‘You see powder connected by arrows and chalk, you never know,’ she said. ‘It could be a terrorist, it could be something more serious. We’re thankful it wasn’t, but there were a lot of resources that went into figuring that out.’”
Translation: We screwed up, and we want someone to pay for our mistake.
New York is installing an automatic toll-collection system for cars in the busiest parts of the city. It’s called congestion pricing, and it promises to reduce both traffic and pollution.
The problem is that it keeps an audit log of which cars are driving where. London’s congestion pricing system is already being used for counterterrorism purposes—and now for regular crime as well. The E-ZPass automatic toll collection system, used in New York and other places, has been used in both criminal and civil trials: in one case to prove infidelity in divorce court.
There are good reasons for having this system, but I am worried about another wholesale surveillance tool.
BT Counterpane Launches Enhanced Managed Vulnerability Scan Services
A profile on Schneier was published in “City Pages” magazine.
Schneier is delivering the keynote at the 29th International Conference of Data Protection and Privacy Commissioners in Montreal on September 25, 2007.
Schneier is participating in an ACLU Colorado Real ID Town Hall in Denver on October 3, 2007.
Schneier is participating in an EPIC fundraiser/book signing in Washington DC on October 5, 2007.
Schneier is doing a book signing at the Gartner Symposium IT Expo in Orlando, FL on October 10, 2007.
Schneier is delivering the keynote at Telephony Live! in Dallas, TX on October 11, 2007.
Schneier is delivering the keynote at InfoSecurity Mexico in Mexico City on October 15, 2007.
REAL ID is the U.S. government plan to impose uniform regulations on state driver’s licenses. It’s a national ID card, in all but cosmetic form.
Most states hate it: 17 have passed legislation rejecting REAL ID, and many others have such legislation somewhere in process. Now it looks like the federal government is upping the ante, and threatening retaliation against those states that don’t implement REAL ID:
“The cards would be mandatory for all ‘federal purposes,’ which include boarding an airplane or walking into a federal building, nuclear facility or national park, Homeland Security Secretary Michael Chertoff told the National Conference of State Legislatures last week. Citizens in states that don’t comply with the new rules will have to use passports for federal purposes.”
This sounds tough, but it’s a lot of bluster. The states that have passed anti-REAL-ID legislation lean both Republican and Democrat. The federal government just can’t say that citizens of—for example—Georgia (which passed a bill in May authorizing the governor to delay implementation of REAL ID) can’t walk into a federal courthouse without a passport. Or can’t board an airplane without a passport—imagine the lobbying by Delta Airlines here. They just can’t.
This is a report on the presentation of computer forensic evidence in a UK trial.
There are three things that concern me here:
1. The computer was operated by a police officer prior to forensic examination.
2. The forensic examiner gave an opinion on what files constituted “radical Islamic politics.”
3. The presence of documents in the “Windows Options” folders was construed as evidence that someone wanted to hide those documents.
In general, computer forensics is rather ad hoc. Traditional rules of evidence are broken all the time. But this seems like a pretty egregious example.
It’s easy. Find a fast-food restaurant with two drive-through windows: one where you order and pay, and the other where you receive your food. This won’t work with the more common U.S. configuration: a microphone where you order, and a single window where you both pay and receive your food. The video demonstrates the attack at a McDonald’s in—I assume—France.
Wait until there is someone behind you and someone in front of you. Don’t order anything at the first window. Tell the clerk that you forgot your money and didn’t order anything. Then drive to the second window, and take the food that the person behind you ordered.
It’s a clever exploit. Basically, it’s a synchronization attack. By exploiting the limited information flow between the two windows, you can insert yourself into the pay-receive queue.
It’s relatively easy to fix. The restaurant could give the customer a numbered token upon ordering and paying, which he would redeem at the next window for his food. Or the second window could demand to see the receipt. Or the two windows could talk to each other more, maybe by putting information about the car and driver into the computer. But, of course, these security solutions reduce the system’s optimization.
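The numbered-token fix can be modeled in a few lines: window one binds payment to a token, and window two releases food only against that token, which closes the queue-insertion hole. (A toy sketch; all details invented.)

```python
import itertools

token_counter = itertools.count(1)
pending = {}  # token -> paid-for food

def window1_order_and_pay(food):
    """Take payment and issue a numbered token bound to the order."""
    token = next(token_counter)
    pending[token] = food
    return token

def window2_pickup(token):
    """Release food only against a valid, unredeemed token."""
    return pending.pop(token, None)

victim_token = window1_order_and_pay("burger")
print(window2_pickup(None))          # attacker has no token -> None
print(window2_pickup(victim_token))  # victim gets the burger
print(window2_pickup(victim_token))  # token already redeemed -> None
```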
So if not a lot of people do this, the vulnerability will remain open.
There are hundreds of comments—many of them interesting—on these topics on my blog. Search for the story you want to comment on, and join in.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>. Back issues are also available at that URL.
Please feel free to forward CRYPTO-GRAM, in whole or in part, to colleagues and friends who will find it valuable. Permission is also granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of BT Counterpane, and is a member of the Board of Directors of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
BT Counterpane is the world’s leading protector of networked information – the inventor of outsourced security monitoring and the foremost authority on effective mitigation of emerging IT threats. BT Counterpane protects networks for Fortune 1000 companies and governments world-wide. See <http://www.counterpane.com>.
Crypto-Gram is a personal newsletter. Opinions expressed are not necessarily those of BT or BT Counterpane.
Copyright (c) 2007 by Bruce Schneier.