CRYPTO-GRAM

December 15, 2003
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com.
In this issue:
- Blaster and the August 14th Blackout
- Counterpane News
- The Doghouse: Amit Yoran
- Crypto-Gram Reprints
- Quantum Cryptography
- Beyond Fear News
- Computerized and Electronic Voting
- Comments from Readers
Blaster and the August 14th Blackout

Did Blaster cause the August 14th blackout? The official analysis says “no,” but I’m not so sure.
According to the “Interim Report: Causes of the August 14th Blackout in the United States and Canada,” published in November and based on detailed research by a panel of government and industry officials, the blackout was caused by a series of failures.
The chain of events began at FirstEnergy, a power company in Ohio. There, a series of human and computer failures turned a small problem into a major one. And because critical alarm systems failed, workers at FirstEnergy did not know what was happening, and so could not stop the cascade.
This is where I think Blaster may have been involved. The report gives a specific timeline for the failures. At 14:14 EDT, the “alarm and logging software” at FirstEnergy’s control room failed. This alarm software “provided audible and visual indications when a significant piece of equipment changed from an acceptable to problematic condition.” Of course, no one knew that it failed.
Six minutes later, “several” remote control consoles failed. At 14:41, the primary server computer that hosted the alarm function failed. Its functions were passed to a backup computer, which failed at 14:54.
Doesn’t this sound like a computer worm wending its way through FirstEnergy’s operational computers?
According to the report, “…for over an hour no one in FE’s control room grasped that their computer systems were not operating properly, even though FE’s Information Technology support staff knew of the problems and were working to solve them…”
Doesn’t this sound like IT working to clean a worm out of its network?
This massive computer failure was critical to the cascading power failure. The report continues: “Power system operators rely heavily on audible and on-screen alarms, plus alarm logs, to reveal any significant changes in their system’s conditions. After 14:14 EDT on August 14, FE’s operators were working under a significant handicap without these tools. However, they were in further jeopardy because they did not know that they were operating without alarms, so that they did not realize that system conditions were changing.”
Other computer glitches are mentioned in the report. At the Midwest Independent Transmission System Operator, a regional agency that oversees power distribution, there’s something called a “state estimator.” It’s a computer used to determine whether the power grid is in trouble. This computer also failed, at 12:15. According to the report, a technician tried to repair it and forgot to turn it back on when he went to lunch.
The Blaster worm first appeared on August 11, and infected more than a million computers in the days following. It targeted a vulnerability in the Microsoft operating system. Infected computers, in turn, tried to infect other computers, and in this way the worm automatically spread from computer to computer and network to network. Although the worm didn’t perform any malicious actions on the computers it infected, its mere existence drained resources and often caused the host computer to crash. To remove the worm, a system administrator had to run a program that erased the malicious code; then the administrator had to patch the vulnerability so that the computer would not get re-infected.
According to research by Stuart Staniford, Blaster was a random-start sequential scanner that scanned at about 11 IPs/second; a given scanner would cover a Class B network in about 1 hour and 40 minutes. The FirstEnergy computer-failure times are fairly consistent with a series of computers, with addresses dotted around a Class B network, being compromised by a scan of that network, probably by an infected machine on the same network. (Note that it was not necessary for the FirstEnergy network to be on the Internet; Blaster infected many internal networks.)
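Staniford’s numbers are easy to check on the back of an envelope. This small sketch (my own illustration, not part of his research) just divides the size of a Class B address space by the observed scan rate:

```python
# Back-of-the-envelope check of the scan-time figure quoted above.
# The ~11 IPs/second rate comes from Staniford's research; everything
# else here is simple arithmetic.
CLASS_B_ADDRESSES = 2 ** 16   # a Class B network: 65,536 addresses
SCAN_RATE_PER_SEC = 11        # addresses probed per second

seconds = CLASS_B_ADDRESSES / SCAN_RATE_PER_SEC
minutes = seconds / 60
print(f"Time to sweep a Class B: {minutes:.0f} minutes")  # ~99 minutes
```

Ninety-nine minutes is almost exactly the “1 hour and 40 minutes” figure, and it fits comfortably inside the 14:14-to-14:54 window of FirstEnergy failures.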
The coincidence of the timing is too obvious to ignore. At 14:14 EDT, the Blaster Worm was dropping systems all across North America. The report doesn’t explain why so many computers–both primary and backup systems–at FirstEnergy were failing at around the same time, but Blaster is certainly a reasonable suspect.
Unfortunately, the report doesn’t directly address the Blaster worm and its effects on FirstEnergy’s computers. The closest I could find was this paragraph, on page 99: “Although there were a number of worms and viruses impacting the Internet and Internet connected systems and networks in North America before and during the outage, the SWG’s preliminary analysis provides no indication that worm/virus activity had a significant effect on the power generation and delivery systems. Further SWG analysis will test this finding.”
Why the tortured prose? The writers take pains to assure us that “the power generation and delivery systems” were not affected by Blaster. But what about the alarm systems? Clearly they were all affected by something, and all at the same time.
This wouldn’t be the first time a Windows epidemic swept through FirstEnergy. The company has admitted that it was hit by Slammer in January.
Let’s be fair. I don’t know that Blaster caused the blackout. The report doesn’t say that Blaster caused the blackout. Conventional wisdom is that Blaster did not cause the blackout. But it seems more and more likely that Blaster was one of the many causes of the blackout.
Regardless of the answer, there’s a very important moral here. As networked computers infiltrate more and more of our critical infrastructure, that infrastructure is vulnerable not only to attacks but also to sloppy software and sloppy operations. And these vulnerabilities are invariably not the obvious ones. The computers that directly control the power grid are well-protected. It’s the peripheral systems that are less protected and more likely to be vulnerable. And a direct attack is unlikely to cause our infrastructure to fail, because the connections are too complex and too obscure. It’s only by accident–Blaster affecting systems at just the wrong time, allowing a minor failure to become a major one–that these massive failures occur.
We’ve seen worms knock out 911 telephone service. We’ve seen worms disable ATMs. None of this was predictable beforehand, but all of it is preventable. I believe that this sort of thing will become even more common in the future.
A preliminary version of this essay appeared on news.com:
Interim Report: Causes of the August 14th Blackout in the United States and Canada
The relevant data is on pages 28-29 of the report.
How worms can infect internal networks:
Blackout not caused by worm:
News article on the report:
Geoff Shively talked about possible Blaster/blackout links just a few days after the blackout:
Counterpane News

Over four years ago I founded Counterpane Internet Security, Inc., to be the world’s leading provider of Managed Security Services, and we still are. Every day we defend hundreds of networks all over the world against insider and outsider attacks. We help companies meet compliance demands. We make the Internet safer. Recently we announced our Enterprise Protection Suite, which combines Managed Security Monitoring with Managed Vulnerability Scanning, fully outsourced Device Management, and Security Consulting services. Receive a 15% discount off your first year’s service here:
We had a great third quarter:
We are hiring:
Essay by Schneier on computer security and liability:
Interview with Schneier on computer security and monoculture:
Many people have asked me about this product: whether it is secure, whether it is worth buying, etc.
Honestly, I don’t know. I haven’t evaluated the product. I haven’t looked at the source code. I don’t know anything more about the product and its security than what I can read on their website.
At first read, though, it looks okay. It looks like they’ve thought about the security and the cryptography. The fact that they publish their source code is a good sign (but not any guarantee of security).
If I find any analyses, I will write about them in Crypto-Gram. Until then, this is no different from any other security product: you need to trust the vendor.
Raseac, another company that makes encrypted phones:
The Doghouse: Amit Yoran

Here’s a question: if you don’t think it’s possible to improve the security of computer code, what are you doing in the computer security industry?
“Amit Yoran, the new head of the Department of Homeland Security’s national cybersecurity division, said the administration is assessing the impact of various regulatory proposals. One of them calls for companies to report, through the Securities and Exchange Commission, their preparedness for attacks on their computer networks. Mr. Yoran, formerly a vice president of Symantec Corp., said the department is considering other measures, though it leans toward private-sector approaches.
“‘For example, should we hold software vendors accountable for the security of their code or for flaws in their code?’ Mr. Yoran asked in an interview. ‘In concept, that may make sense. But in practice, do they have the capability, the tools to produce more secure code?’”
The sheer idiocy of this quote amazes me. Does he really think that writing more secure code is too hard for companies to manage? Does he really think that companies are doing absolutely the best they possibly can?
I can handle blatant pandering to industry, but this is just too stupid to ignore.
Crypto-Gram Reprints

Crypto-Gram is currently in its sixth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Comments on the Department of Homeland Security:
Crime: The Internet’s Next Big Thing:
National ID Cards:
Judges Punish Bad Security:
Computer Security and Liabilities:
Fun with Vulnerability Scanners:
Voting and Technology:
“Security Is Not a Product; It’s a Process”
European Digital Cellular Algorithms:
The Fallacy of Cracking Contests:
How to Recognize Plaintext:
Quantum Cryptography

MagiQ Technologies is now selling an actual quantum cryptography product: Navajo, a system that uses single photons to transmit encryption keys over fiber-optic lines. Its security is based on the quantum law that an observer–an eavesdropper in this case–perturbs the system by observing it.
This isn’t new. The basic science was developed in the early 1980s, and there have been steady advances in engineering since then. I describe how it all works–basically–in Applied Cryptography, 2nd Edition (pages 554-557).
I don’t have any hope for this sort of product. I don’t have any hope for the commercialization of quantum cryptography in general; I don’t believe it solves any security problem that needs solving. I don’t believe that it’s worth paying for, and I can’t imagine anyone but a few technophiles buying and deploying it.
It’s not that quantum cryptography might be insecure; it’s that we don’t need cryptography to be any more secure.
Security is a chain; it’s only as strong as the weakest link. Mathematical cryptography, as bad as it sometimes is, is the strongest link in most security chains. The computer security, the network security, the people security–these are all much worse.
Cryptography is the one area of security that we can get right. We know how to make that link strong. Maybe quantum cryptography can make that link stronger, but why would anyone bother? There are far more serious security problems to worry about, and it makes much more sense to spend money securing those.
It’s like defending yourself against an approaching attacker by putting a huge stake in the ground. It’s useless to argue about whether the stake should be fifty feet tall or a hundred feet tall, because the attacker is going to go around it. Even quantum cryptography doesn’t “solve” all of cryptography: the keys are exchanged with photons, but a conventional mathematical algorithm takes over for the actual encryption.
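That division of labor is worth making concrete. In the sketch below the “quantum channel” is just a stand-in (a random shared key), and the conventional cipher is a toy keystream built from SHA-256 — a hypothetical illustration of the architecture, not a vetted design and not any vendor’s actual implementation:

```python
import hashlib
import os

def keystream(key: bytes, length: int) -> bytes:
    """Toy keystream: SHA-256 run in counter mode (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    """XOR the data with the keystream; applying it twice decrypts."""
    return bytes(d ^ k for d, k in zip(data, keystream(key, len(data))))

# Stand-in for the quantum link: however the photons do it, all the
# quantum layer delivers is a shared secret key.
shared_key = os.urandom(32)

# The actual encryption is still plain old mathematical cryptography.
ciphertext = encrypt(shared_key, b"attack at dawn")
assert encrypt(shared_key, ciphertext) == b"attack at dawn"
```

The point of the sketch is that the quantum hardware only replaces the key-exchange box on the left; everything after `shared_key` is ordinary mathematics, with all its ordinary strengths and weaknesses.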
I’m always in favor of security research, and I have enjoyed following the developments in quantum cryptography. But as a product, it has no future.
News

Another article saying that cyberterrorism is a myth:
Banking scam targets Citibank customers:
The economics of spam:
This is an old scam. A man used a computer virus to change the dial-up numbers on victims’ computers to premium-rate numbers, in an attempt to make a pile of money. How he thought he wouldn’t get caught is beyond me.
This is the 1997 version of the same attack. Note that the virus turned the volume of the modem down, to reduce the chance of detection:
A very entertaining (and anonymous) story of someone’s interview with the National Security Agency:
Republican Senator Orrin Hatch suspended a member of his staff for hacking into the computers of two Democratic senators.
Here’s a story of a computer error causing losses on the NASDAQ stock exchange. It illustrates a serious problem: not all financial transactions can be undone. The error was a large sell order that caused a stock’s price to plunge. Many people acted on that information, and when it turned out to be false they were stuck with large losses themselves.
List of famous unsolved codes and ciphers:
Good collection of links on faking fingerprint readers:
Beyond Fear News

Schneier is giving a talk at the 92nd St Y in New York on January 11th at 7:30 PM:
Schneier is doing a series of lectures and signings to promote Beyond Fear:
Raleigh-Durham, NC – January 12th, 7:00 PM, Market Street Books
San Jose, CA – January 13th, 7:30 PM, Kepler’s Books
Portland, OR – January 14th, 7:30 PM, Powell’s Technical Books
San Diego, CA – January 16th, 7:00 PM, San Diego Technical Books
Computerized and Electronic Voting

There are dozens of stories about computerized voting machines producing erroneous results. Votes mysteriously appear or disappear. Votes cast for one person are credited to another. Here are two from the most recent election: One candidate in Virginia found that the computerized election machines failed to register votes for her, and in fact subtracted a vote for her, in about “one out of a hundred tries.” And in Indiana, 5,352 voters in a district of 19,000 managed to cast 144,000 ballots on a computerized machine.
These problems were only caught because their effects were obvious–and obviously wrong. Subtle problems remain undetected, and for every problem we catch–even though their effects often can’t be undone–there are probably dozens that escape our notice.
Computers are fallible and software is unreliable; election machines are no different than your home computer.
Even more frightening than software mistakes is the potential for fraud. The companies producing voting machine software use poor computer-security practices. They leave sensitive code unprotected on networks. They install patches and updates without proper security auditing. And they use the law to prohibit public scrutiny of their practices. When damning memos from Diebold became public, the company sued to suppress them. Given these shoddy security practices, what confidence do we have that someone didn’t break into the company’s network and modify the voting software?
And because elections happen all at once, there would be no means of recovery. Imagine if, in the next presidential election, someone hacked the vote in New York. Would we let New York vote again in a week? Would we redo the entire national election? Would we tell New York that their votes didn’t count?
Any discussion of computerized voting necessarily leads to Internet voting. Why not just do away with voting machines entirely, and let everyone vote remotely?
Online voting schemes have even more potential for failure and abuse. Internet systems are extremely difficult to secure, as evidenced by the never-ending stream of computer vulnerabilities and the widespread effect of Internet worms and viruses. It might be convenient to vote from your home computer, but it would also open new opportunities for people to play Hack the Vote.
And any remote voting scheme has its own problems. The voting booth provides security against coercion. I may be bribed or threatened to vote a certain way, but when I enter the privacy of the voting booth I can vote the way I want. Remote voting, whether by mail or by Internet, removes that security. The person buying my vote can be sure that he’s buying a vote by taking my blank ballot from me and completing it himself.
In the U.S., we believe that allowing absentees to vote is more important than this added security, and that it is probably a good trade-off. And people like the convenience. In California, for example, over 25% vote by mail.
Voting is particularly difficult in the United States for two reasons. One, we vote on dozens of different things at one time. And two, we demand final results before going to sleep at night.
What we need are simple voting systems–paper ballots that can be counted even in a blackout. We need technology to make voting easier, but it has to be reliable and verifiable.
My suggestion is simple, and it’s one echoed by many computer security researchers. All computerized voting machines need a paper audit trail. Build any computerized machine you want. Have it work any way you want. The voter votes on it, and when he’s done, the machine prints out a paper receipt, much like an ATM does. The receipt is the voter’s real ballot. He looks it over, and then drops it into a ballot box. The ballot box contains the official votes, which are used for any recount. The voting machine has the quick initial tally.
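The essential property of this design can be sketched in a few lines of code. The classes and names below are my own hypothetical illustration (no real voting machine works this way in detail); the point is only that the recount reads the paper, never the machine’s memory:

```python
# Sketch of a voter-verified paper audit trail (hypothetical illustration).
class VotingMachine:
    def __init__(self):
        self.electronic_tally = {}  # quick, unofficial count
        self.ballot_box = []        # paper receipts: the official record

    def cast(self, choice: str, voter_approves: bool = True) -> None:
        receipt = f"BALLOT: {choice}"  # printed for the voter to inspect
        if voter_approves:             # voter checks it, then drops it in
            self.electronic_tally[choice] = self.electronic_tally.get(choice, 0) + 1
            self.ballot_box.append(receipt)

    def recount(self) -> dict:
        """A recount counts the paper ballots, not the machine's memory."""
        tally = {}
        for receipt in self.ballot_box:
            choice = receipt.removeprefix("BALLOT: ")
            tally[choice] = tally.get(choice, 0) + 1
        return tally

machine = VotingMachine()
for choice in ["Alice", "Bob", "Alice"]:
    machine.cast(choice)

# The electronic tally is convenient, but the paper is authoritative.
assert machine.recount() == machine.electronic_tally == {"Alice": 2, "Bob": 1}
```

If the software is buggy or subverted, `electronic_tally` and `recount()` diverge, and the paper wins; a purely electronic machine has no equivalent of that second, independent record.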
This system isn’t perfect, and doesn’t address many security issues surrounding voting. It’s still possible to deny individuals the right to vote, stuff machines and ballot boxes with pre-cast votes, lose machines and ballot boxes, intimidate voters, etc. Computerized machines don’t make voting completely secure, but machines with paper audit trails prevent all sorts of new avenues of error and fraud.
CRS Report on Electronic Voting:
California Secretary of State statement on e-voting paper trail requirement:
Voter Confidence and Increased Accessibility Act of 2003
Comments from Readers

From: Preston L. Bannister <preston.bannister@cox.net>
Subject: Airplane Hackers
There is, I think, an important distinction between a hacker who breaks into someone else’s computer system and the actions of this “airplane hacker.” When you break into someone else’s computer system, you are, in effect, invading someone else’s property. Perhaps you have no intent to cause harm. Perhaps you will cause unintended harm. In any case, you were not invited, are unwelcome, and should not be there.
Airport security is a little different. Post-9/11 we have traded off a number of our freedoms, shifted more power into the government, and moved at least a bit closer to becoming some form of police state. The promise made to us — to all of us — was that in exchange we would gain protection, that somehow we would gain security. Apparently Nathaniel Heatwole decided to test that promise. After all, we have in effect bought and paid for a service. Nathaniel was simply testing to see — in some minor way — if we were getting something for what we have paid.
Was this all a bit silly? On some level, certainly. Doubtless anyone who makes a habit of thinking critically about security would have decided early on that much of what we were sold was worthless. Not much point in confirming what we already know. On the other hand, you wouldn’t have much of a business if the majority of the human race thought just as carefully about security :). This stunt tells a story that is very easy to understand. Tell stories like this often enough, and you might change the minds of enough voters. This is a good thing.
To borrow from your analogy, this is more like turning on all the alarms, breaking into your own house, and leaving a note for the security company.
Were Nathaniel’s actions criminal? Perhaps – but then how do we charge those to whom we have sold our liberty for a false promise of security? Somehow it seems that the second is a far greater crime.
From: David Wall <david.wall@yozons.com>
Subject: Airplane Hackers
Your analogy about breaking into your home and leaving a note is quite different from what Heatwole did. He boarded a plane with permission and had a ticket that he paid for. He didn’t harm anybody or anything. What he smuggled wasn’t even particularly dangerous.
You feel violated if someone enters your home, but only because they did so without permission. If your guest smuggled in a steak when you were eating a fish dinner, you wouldn’t be that upset.
I’m not saying it was a bright move, but the violation is pretty minor and should be treated as a minor offense as you suggest. The TSA should get rid of its window dressing rules since those are the rules that you suggest should result in his criminal prosecution.
So a guy who pays to come, creates no real harm, and violates a silly law should be prosecuted severely because he’s a “hacker”? The law is meant to protect, not harass, so let’s hope they don’t throw the book at him.
From: Doug Greene <gwize@Transforms.com>
Subject: Airplane Hackers
Your analogy fails in that air transportation is regulated in the public interest and security is the responsibility of the government. Hence, it should be subject to public oversight. This is not the same situation as a private home. Who feels embarrassed and violated here other than the government agency that failed to ensure the level of security it claims to provide?
You acknowledge that: “Most of what the TSA does is security theater — window dressing. It keeps up appearances, and maybe (hopefully) makes the terrorists a little less sure they can smuggle their weapons aboard airplanes. Probably not.” Yet, consumers of air transport services are harassed daily in the name of security that does not work. Either it should work reasonably well or we should restore the civil liberties that have been infringed upon in the name of such fraudulent security systems.
You say “the TSA never asked him to test their security.” Of course not. Contractors hired by the TSA to test their security are much more likely to keep security failures hidden from public scrutiny.
We need laws that allow limited independent public interest testing of such security systems. Until we do, exploits such as Nathaniel Heatwole’s are better than self-interested tests that may have been contracted by the TSA and were probably not.
From: Brian T. Sniffen <bts@alum.mit.edu>
Subject: Airplane Hackers
I think you made a grave mistake in your article on Heatwole. There are two points you didn’t cover, either of which requires serious changes to your conclusions.
First, there’s the problem of fuzziness: so-called hackers who break into other people’s computers or homes are committing crimes, certainly. But where’s the difference between that and breaking into other people’s intellectual property? Should the next researcher who discovers security flaws in copyrighted software be prosecuted? Certainly, the company feels like it’s been violated. But when Matt Blaze published information that my apartment locks were insecure, I didn’t consider him a criminal: I thanked him for it. As I recall, so did you. Meanwhile, Schlage and locksmiths across the country were calling for his head.
Second, there’s the problem of social responsibility. There is a difference between breaking into a home and demonstrating a flaw in a public security system that represents itself as defending public safety. From my understanding of what Heatwole did, no crimes were committed: he went through the normal TSA boarding process, using the rules they set up. That’s very different from picking locks to enter a house or circumventing security checks — in software or in social systems. I consider Heatwole’s actions in the same category as those who broke the DMCA and other copyright laws to expose flaws in Diebold’s voting machines: exposing the lies or incompetence of those holding the public trust may be in violation of the law, but should rarely be treated as a crime.
From: Michael Giagnocavo <mgg@Atrevido.net>
Subject: Airplane Hackers
Surprise was an exploit of a known security hole: cockpit access. I can’t touch any controls when I get into a $5000 taxi, but some people easily got into the cockpit of a multi-million dollar aircraft.
Like buffer overflows (and mitigation techniques that compilers can do), taking away surprise just makes it harder to exploit the known hole. Surprise was just their “exploit code” this time around for that hole.
Assuming the hole hasn’t been closed (and cockpit access isn’t bolted down), what’s to stop someone from tear-gassing everyone on board and gaining control again?
From: Ben Mord <bmord@iconnicholson.com>
Subject: Airplane Hackers
Despite your suggestion that passengers would unite against future terrorists, I fear our airplanes remain vulnerable to social engineering. I offer the following as evidence:
[Note: this is 2.5 hours of streaming video.]
A single insane individual broke through an “armored” cockpit door in a post-9/11 world as a small army of flight attendants offered him water. Let us be thankful he was not simply using his insanity as a ruse to compromise cockpit security while co-conspirators waited silently to launch a second attack. Let us also learn from the fact that this single individual, with no preparation, did in fact compromise armored cockpit security many months after 9/11.
I do recognize, however, that you may be engaging in some social engineering of your own. It is advantageous for people to believe that they would unite in thwarting any threat against the integrity of the cockpit. Due to the network effect, the ubiquity of such a belief *might* help it become a reality, and the perception of this as a reality might also act as a deterrent. And yet, as a self-governing society we must be wary of self-delusion, lest we make bad decisions. How should we manipulate the network effect to our common benefit while avoiding dangerous delusions? This is a tough balance.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers “Beyond Fear,” “Secrets and Lies,” and “Applied Cryptography,” and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane’s expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.