November 15, 2004
A free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise.
For back issues, or to subscribe, visit <http://www.schneier.com/crypto-gram.html>.
Or you can read this issue on the web at <http://www.schneier.com/crypto-gram-0411.html>.
Schneier also publishes these same essays in his blog: <http://www.schneier.com/blog>. An RSS feed is available.
In this issue:
- Why Election Technology is Hard
- Electronic Voting Machines
- Clever Virus Attack
- Mail-in Ballot Attack
- Computer Security and Liability
- Crypto-Gram Reprints
- World Series Security
- Counterpane News
- The Security of Checks and Balances
- The Doghouse: Merced County
- Security Information Management Systems (SIMS)
- Technology and Counterterrorism
- Comments from Readers
Four years after the Florida debacle of 2000 and two years after Congress passed the Help America Vote Act, voting problems are again in the news: confusing ballots, malfunctioning voting machines, problems over who's registered and who isn't. All this brings up a basic question: Why is it so hard to run an election?
A fundamental requirement for a democratic election is a secret ballot, and that's the first reason. Computers regularly handle multimillion-dollar financial transactions, but much of their security comes from the ability to audit the transactions after the fact and correct problems that arise. Much of what they do can be done the next day if the system is down. Neither of these solutions works for elections.
American elections are particularly difficult because they're so complicated. One ballot might have 50 different things to vote on, all but one different in each state and many different in each district. It's much easier to hold national elections in India, where everyone casts a single vote, than in the United States. Additionally, American election systems need to be able to handle 100 million voters in a single day -- an immense undertaking in the best of circumstances.
Speed is another factor. Americans demand election results before they go to sleep; we won't stand for waiting more than two weeks before knowing who won, as happened in India and Afghanistan this year.
To make matters worse, voting systems are used infrequently, at most a few times a year. Systems that are used every day improve because people familiarize themselves with them, discover mistakes and figure out improvements. It seems as if we all have to relearn how to vote every time we do it.
It should be no surprise that there are problems with voting. What's surprising is that there aren't more problems. So how to make the system work better?
-- Simplicity: This is the key to making voting better. Registration should be as simple as possible. The voting process should be as simple as possible. Ballot designs should be simple, and they should be tested. The computer industry understands the science of user-interface design; that knowledge should be applied to ballot design.
-- Uniformity: Simplicity leads to uniformity. The United States doesn't have one set of voting rules or one voting system. It has 51 different sets of voting rules -- one for every state and the District of Columbia -- and even more systems. The more systems are standardized around the country, the more we can learn from each other's mistakes.
-- Verifiability: Computerized voting machines might have a simple user interface, but complexity hides behind the screen and keyboard. To avoid even more problems, these machines should have a voter-verifiable paper ballot. This isn't a receipt; it's not something you take home with you. It's a paper "ballot" with your votes -- one that you verify for accuracy and then put in a ballot box. The machine provides quick tallies, but the paper is the basis for any recounts.
-- Transparency: All computer code used in voting machines should be public. This allows interested parties to examine the code and point out errors, resulting in continually improving security. Any voting-machine company that claims its code must remain secret for security reasons is lying. Security in computer systems comes from transparency -- open systems that pass public scrutiny -- and not secrecy.
But those are all solutions for the future. If you're a voter this year, your options are fewer. My advice is to vote carefully. Read the instructions carefully, and ask questions if you are confused. Follow the instructions carefully, checking every step as you go. Remember that it might be impossible to correct a problem once you've finished voting. In many states -- including California -- you can request a paper ballot if you have any worries about the voting machine.
And be sure to vote. This year, thousands of people are watching and waiting at the polls to help voters make sure their vote counts.
This essay originally appeared in the San Francisco Chronicle.
Also read Avi Rubin's op-ed on the subject.
In the aftermath of the U.S.'s 2004 election, electronic voting machines are again in the news. Computerized machines lost votes, subtracted votes instead of adding them, and doubled votes. Because many of these machines have no paper audit trails, a large number of votes will never be counted. And while it is unlikely that deliberate voting-machine fraud changed the result of the presidential election, the Internet is buzzing with rumors and allegations of fraud in a number of different jurisdictions and races. It is still too early to tell if any of these problems affected any individual elections. Over the next several weeks we'll see whether any of the information crystallizes into something significant.
The U.S. has been here before. After 2000, voting machine problems made international headlines. The government appropriated money to fix the problems nationwide. Unfortunately, electronic voting machines -- although presented as the solution -- have largely made the problem worse. This doesn't mean that these machines should be abandoned, but they need to be designed to increase both their accuracy and people's trust in their accuracy. This is difficult, but not impossible.
Before I can discuss electronic voting machines, I need to explain why voting is so difficult. Basically, a voting system has four required characteristics:
1. Accuracy. The goal of any voting system is to establish the intent of each individual voter, and translate those intents into a final tally. To the extent that a voting system fails to do this, it is undesirable. This characteristic also includes security: It should be impossible to change someone else's vote, stuff ballots, destroy votes, or otherwise affect the accuracy of the final tally.
2. Anonymity. Secret ballots are fundamental to democracy, and voting systems must be designed to facilitate voter anonymity.
3. Scalability. Voting systems need to be able to handle very large elections. One hundred million people vote for president in the United States. About 372 million people voted in India's June elections, and over 115 million in Brazil's October elections. The complexity of an election is another issue. Unlike many countries where the national election is a single vote for a person or a party, a United States voter is faced with dozens of individual elections: national, local, and everything in between.
4. Speed. Voting systems should produce results quickly. This is particularly important in the United States, where people expect to learn the results of the day's election before bedtime. It's less important in other countries, where people don't mind waiting days -- or even weeks -- before the winner is announced.
Through the centuries, different technologies have done their best. Stones and pot shards dropped in Greek vases gave way to paper ballots dropped in sealed boxes. Mechanical voting booths, punch cards, and then optical scan machines replaced hand-counted ballots. New computerized voting machines promise even more efficiency, and Internet voting even more convenience.
But in the rush to improve speed and scalability, accuracy has been sacrificed. And to reiterate: accuracy is not how well the ballots are counted by, for example, a punch-card reader. It's not how the tabulating machine deals with hanging chads, pregnant chads, or anything like that. Accuracy is how well the process translates voter intent into properly counted votes.
Technologies get in the way of accuracy by adding steps. Each additional step means more potential errors, simply because no technology is perfect. Consider an optical-scan voting system. The voter fills in ovals on a piece of paper, which is fed into an optical-scan reader. The reader senses the filled-in ovals and tabulates the votes. This system has several steps: voter to ballot to ovals to optical reader to vote tabulator to centralized total.
At each step, errors can occur. If the ballot is confusing, then some voters will fill in the wrong ovals. If a voter doesn't fill them in properly, or if the reader is malfunctioning, then the sensor won't sense the ovals properly. Mistakes in tabulation -- either in the machine or when machine totals get aggregated into larger totals -- also cause errors. A manual system -- tallying the ballots by hand, and then doing it again to double-check -- is more accurate simply because there are fewer steps.
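The compounding effect of added steps can be sketched numerically. This is a minimal illustration, and the per-step accuracy figures are invented assumptions, not measured rates:

```python
# Sketch: the probability that a vote survives a multi-step pipeline is
# the product of each step's accuracy, so every added step lowers the
# overall accuracy (assuming independent errors at each step).

def pipeline_accuracy(step_accuracies):
    """Probability a single vote is recorded correctly, given the
    per-step probabilities that each step works as intended."""
    acc = 1.0
    for a in step_accuracies:
        acc *= a
    return acc

# Optical scan (illustrative numbers): ballot design, oval filling,
# scanner read, machine tabulation, aggregation into larger totals.
optical = pipeline_accuracy([0.99, 0.99, 0.99, 0.999, 0.999])

# Manual count (illustrative numbers): marking the ballot, then a
# hand tally that is double-checked.
manual = pipeline_accuracy([0.99, 0.999])

# Fewer steps means fewer chances to lose a vote.
assert manual > optical
```

The exact numbers don't matter; what matters is that accuracy is a product of per-step accuracies, so it can only go down as steps are added.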
The error rates in modern systems can be significant. Some voting technologies have a 5% error rate: one in twenty people who vote using the system don't have their votes counted properly. This system works anyway because most of the time errors don't matter. If you assume that the errors are uniformly distributed -- in other words, that they affect each candidate with equal probability -- then they won't affect the final outcome except in very close races. So we're willing to sacrifice accuracy to get a voting system that will more quickly handle large and complicated elections. In close races, errors can affect the outcome, and that's the point of a recount. A recount is an alternate system of tabulating votes: one that is slower (because it's manual), simpler (because it just focuses on one race), and therefore more accurate.
Note that this is only true if everyone votes using the same machines. If parts of town that tend to support candidate A use a voting system with a higher error rate than the voting system used in parts of town that tend to support candidate B, then the results will be skewed against candidate A. This is an important consideration in voting accuracy, although tangential to the topic of this essay.
With this background, the issue of computerized voting machines becomes clear. Actually, "computerized voting machines" is a bad choice of words. Many of today's voting technologies involve computers. Computers tabulate both punch-card and optical-scan machines. The current debate centers around all-computer voting systems, primarily touch-screen systems, called Direct Recording Electronic (DRE) machines. (The voting system used in India's most recent election -- a computer with a series of buttons -- is subject to the same issues.) In these systems the voter is presented with a list of choices on a screen, perhaps multiple screens if there are multiple elections, and he indicates his choice by touching the screen. These machines are easy to use, produce final tallies immediately after the polls close, and can handle very complicated elections. They also can display instructions in different languages and allow the blind or otherwise handicapped to vote without assistance.
They're also more error-prone. The very same software that makes touch-screen voting systems so friendly also makes them inaccurate. And even worse, they're inaccurate in precisely the worst possible way.
Bugs in software are commonplace, as any computer user knows. Computer programs regularly malfunction, sometimes in surprising and subtle ways. This is true for all software, including the software in computerized voting machines. For example:
In Fairfax County, VA, in 2003, a programming error in the electronic voting machines caused them to mysteriously subtract 100 votes from one particular candidate's total.
In San Bernardino County, CA, in 2001, a programming error caused the computer to look for votes in the wrong portion of the ballot in 33 local elections, which meant that no votes were registered on those ballots for those races. A recount was done by hand.
In Volusia County, FL in 2000, an electronic voting machine gave Al Gore a final vote count of negative 16,022 votes.
In the 2003 election in Boone County, IA, the electronic vote-counting equipment showed that more than 140,000 votes had been cast in the Nov. 4 municipal elections. The county has only 50,000 residents, fewer than half of whom were eligible to vote in that election.
There are literally hundreds of similar stories.
What's important about these problems is not that they resulted in a less accurate tally, but that the errors were not uniformly distributed; they affected one candidate more than the other. This means that you can't assume that errors will cancel each other out and not affect the election; you have to assume that any error will skew the results significantly.
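The difference between uniformly distributed and skewed errors can be seen in a quick Monte Carlo sketch. The margin, error rate, and electorate size below are illustrative assumptions chosen to make the effect visible:

```python
import random

def flip_rate(support_a, err_rate, bias_to_b, voters=2000, trials=100, seed=1):
    """Fraction of simulated races in which candidate A wins on voter
    intent but loses in the recorded tally. bias_to_b is the probability
    that a misrecorded vote lands on B (0.5 models uniform errors)."""
    rng = random.Random(seed)
    flips = 0
    for _ in range(trials):
        intent_a = recorded_a = 0
        for _ in range(voters):
            wants_a = rng.random() < support_a
            intent_a += wants_a
            if rng.random() < err_rate:
                # Vote misrecorded: lands on A or B per the bias.
                recorded_a += rng.random() >= bias_to_b
            else:
                recorded_a += wants_a
        if intent_a > voters / 2 and recorded_a <= voters / 2:
            flips += 1
    return flips / trials

# A 52% race with 5% uniform errors: errors mostly cancel out.
uniform = flip_rate(0.52, 0.05, bias_to_b=0.5)

# The same error rate applied entirely in B's favor flips the race often.
biased = flip_rate(0.52, 0.05, bias_to_b=1.0)

assert biased > uniform
```

Uniform errors leave the recorded margin close to the true one; biased errors shift it systematically, which is exactly why the non-uniform failures listed above are so dangerous.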
Another issue is that software can be hacked. That is, someone can deliberately introduce an error that modifies the result in favor of his preferred candidate. This has nothing to do with whether the voting machines are hooked up to the Internet on election day. The threat is that the computer code could be modified while it is being developed and tested, either by one of the programmers or a hacker who gains access to the voting machine company's network. It's much easier to surreptitiously modify a software system than a hardware system, and it's much easier to make these modifications undetectable.
A third issue is that software problems have much farther-reaching effects. A problem with a manual machine affects only that machine. A software problem, whether accidental or intentional, can affect many thousands of machines -- and skew the results of an entire election.
Some have argued in favor of touch-screen voting systems, citing the millions of dollars that are handled every day by ATMs and other computerized financial systems. That argument ignores another vital characteristic of voting systems: anonymity. Computerized financial systems get most of their security from audit. If a problem is suspected, auditors can go back through the records of the system and figure out what happened. And if the problem turns out to be real, the transaction can be unwound and fixed. Because elections are anonymous, that kind of security just isn't possible.
None of this means that we should abandon touch-screen voting; the benefits of DRE machines are too great to throw away. But it does mean that we need to recognize the limitations of these machines, and design systems that can be accurate despite them.
Computer security experts are unanimous on what to do. (Some voting experts disagree, but I think we're all much better off listening to the computer security experts. The problems here are with the computer, not with the fact that the computer is being used in a voting application.) And they have two recommendations:
1. DRE machines must have a voter-verifiable paper audit trail (sometimes called a voter-verified paper ballot). This is a paper ballot printed out by the voting machine, which the voter is allowed to look at and verify. He doesn't take it home with him. Either he looks at it on the machine behind a glass screen, or he takes the paper and puts it into a ballot box. The point of this is twofold. One, it allows the voter to confirm that his vote was recorded in the manner he intended. And two, it provides the mechanism for a recount if there are problems with the machine.
2. Software used on DRE machines must be open to public scrutiny. This also has two functions. One, it allows any interested party to examine the software and find bugs, which can then be corrected. This public analysis improves security. And two, it increases public confidence in the voting process. If the software is public, no one can insinuate that the voting system has unfairness built into the code. (Companies that make these machines regularly argue that they need to keep their software secret for security reasons. Don't believe them. In this instance, secrecy has nothing to do with security.)
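The paper-trail recommendation can be sketched as a toy data model. The class and function names here are hypothetical and bear no relation to any real voting product; the point is only that the electronic tally and the ballot box are separate records, and a recount consults only the paper:

```python
from collections import Counter

class DREMachine:
    """Toy model of a DRE with a voter-verified paper trail."""

    def __init__(self):
        self.electronic_tally = Counter()
        self.paper_ballots = []          # the physical ballot box

    def cast(self, choice, voter_confirms=True):
        # The printed ballot enters the box only after the voter
        # verifies it; the machine records its own tally in parallel.
        if voter_confirms:
            self.electronic_tally[choice] += 1
            self.paper_ballots.append(choice)

def recount(machine):
    """Manual recount: tally the paper, ignoring the machine's numbers."""
    return Counter(machine.paper_ballots)

m = DREMachine()
for v in ["Alice", "Bob", "Alice"]:
    m.cast(v)

# A software bug (or deliberate hack) could corrupt the electronic tally...
m.electronic_tally["Bob"] += 100

# ...but the paper recount still reflects what the voters verified.
assert recount(m) == Counter({"Alice": 2, "Bob": 1})
```

The electronic tally gives the quick election-night numbers; the paper is the authoritative record when the two disagree.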
Computerized systems with these characteristics won't be perfect -- no piece of software is -- but they'll be much better than what we have now. We need to start treating voting software like we treat any other high-reliability system. The auditing that is conducted on slot machine software in the U.S. is significantly more meticulous than what is done to voting software. The development process for mission-critical airplane software makes voting software look like a slapdash affair. If we care about the integrity of our elections, this has to change.
Proponents of DREs often point to successful elections as "proof" that the systems work. That completely misses the point. The fear is that errors in the software -- either accidental or deliberately introduced -- can undetectably alter the final tallies. An election without any detected problems is no more proof that the system is reliable and secure than a night when no one broke into your house is proof that your door locks work. Maybe no one tried, or maybe someone tried and succeeded...and you don't know it.
Even if we get the technology right, we still won't be done. If the goal of a voting system is to accurately translate voter intent into a final tally, the voting machine is only one part of the overall system. In the 2004 U.S. election, problems with voter registration, untrained poll workers, ballot design, and procedures for handling problems resulted in far more votes not being counted than problems with the technology. But if we're going to spend money on new voting technology, it makes sense to spend it on technology that makes the problem easier instead of harder.
A version of this essay appeared on openDemocracy.com.
Avi Rubin's experiences as an election judge.
Problems with the 2004 presidential election.
An open-source project to develop an electronic voting machine.
I received this e-mail message, with an attachment entitled "email@example.com." The file is really an executable .com file, presumably one harboring a virus. Clever social engineering attack, and one I had not seen before.
From: ((some fake address))
Subject: Message could not be delivered
Dear user firstname.lastname@example.org,
Your email account has been used to send a huge amount of spam messages during the last week. Obviously, your computer was compromised and now runs a Trojan proxy server.
Please follow our instruction in the attached file in order to keep your computer safe.
counterpane.com user support team.
Ampersand lives in Oregon, which does its voting entirely by mail. On Monday -- the day a lot of Oregon voters got their ballots -- someone knocked over Ampersand's "No on 36" sign and stole his mailbox, presumably hoping to get his ballot and prevent him from voting "no" on Measure 36. In fact, he'd happened to receive his ballot the previous Saturday, but it could easily have worked.
From "Alas A Blog": "On Monday, someone came into our yard, knocked over our "No on 36" sign, and stole our mailbox (with Monday's mail inside it).
"I doubt this was just random vandalism; Oregon mailed out voter ballots last week (Oregon does the vote entirely by mail), and a huge number of Oregonians got their ballots on Monday. So I think someone grabbed our mailbox and ran hoping that they'd get our ballots and thus keep us from voting against measure 36."
I doubt this was part of any widespread effort. Surely anyone doing it on a large scale would get tired of hauling off mailboxes, and just steal the mail inside. It's also hard to avoid getting caught, since you have to steal the mail during the day -- after it's delivered but before the resident comes home to get it.
Still, it is interesting how the predictably timed mailing of ballots, and the prevalence of political lawn signs, enables a very narrowly targeted attack.
Information insecurity is costing us billions. We pay for it in theft: information theft, financial theft. We pay for it in productivity loss, both when networks stop working and in the dozens of minor security inconveniences we all have to endure. We pay for it when we have to buy security products and services to reduce those other two losses. We pay for security, year after year.
The problem is that all the money we spend isn't fixing the problem. We're paying, but we still end up with insecurities.
The problem is insecure software. It's bad design, poorly implemented features, inadequate testing and security vulnerabilities from software bugs. The money we spend on security is to deal with the effects of insecure software.
And that's the problem. We're not paying to improve the security of the underlying software. We're paying to deal with the problem rather than to fix it.
The only way to fix this problem is for vendors to fix their software, and they won't do it until it's in their financial best interests to do so.
Today, the costs of insecure software aren't borne by the vendors that produce the software. In economics, this is known as an externality, the cost of a decision that's borne by people other than those making the decision.
There are no real consequences to the vendors for having bad security or low-quality software. Even worse, the marketplace often rewards low quality. More precisely, it rewards additional features and timely release dates, even if they come at the expense of quality.
If we expect software vendors to reduce features, lengthen development cycles and invest in secure software development processes, it needs to be in their financial best interests to do so. If we expect corporations to spend significant resources on their own network security -- especially the security of their customers -- it also needs to be in their financial best interests.
Liability law is a way to make it in those organizations' best interests. Raising the risk of liability raises the costs of doing it wrong and therefore increases the amount of money a CEO is willing to spend to do it right. Security is risk management; liability fiddles with the risk equation.
Basically, we have to tweak the risk equation so the CEO cares about actually fixing the problem, and putting pressure on his balance sheet is the best way to do that.
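The effect on the risk equation can be shown with back-of-envelope arithmetic. All figures below are illustrative assumptions, invented for the sketch:

```python
# Sketch of the vendor's risk equation: security investment becomes
# rational once expected liability exceeds the cost of the fix.

def expected_cost(invest, breach_prob, damage, liability_share):
    """Vendor's expected cost: the security spend plus the vendor's
    share of the expected breach damage."""
    return invest + breach_prob * damage * liability_share

damage = 10_000_000              # customers' total loss from a breach
fix_cost = 500_000               # cost of secure development practices
p_unfixed, p_fixed = 0.20, 0.02  # breach probability without/with the fix

# Externality: the vendor bears 0% of the damage, so fixing never pays.
skip = expected_cost(0, p_unfixed, damage, liability_share=0.0)
fix = expected_cost(fix_cost, p_fixed, damage, liability_share=0.0)
assert skip < fix                # rational vendor ships insecure software

# With even partial liability, the same equation flips.
skip = expected_cost(0, p_unfixed, damage, liability_share=0.5)
fix = expected_cost(fix_cost, p_fixed, damage, liability_share=0.5)
assert fix < skip                # now fixing is the cheaper choice
```

Nothing about the vendor's engineering changed between the two cases; only who bears the cost of failure did, and that alone reverses the rational decision.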
Clearly, this isn't all or nothing. There are many parties involved in a typical software attack. There's the company that sold the software with the vulnerability in the first place. There's the person who wrote the attack tool. There's the attacker himself, who used the tool to break into a network. There's the owner of the network, who was entrusted with defending that network. One hundred percent of the liability shouldn't fall on the shoulders of the software vendor, just as 100% shouldn't fall on the attacker or the network owner. But today, 100% of the cost falls directly on the network owner, and that just has to stop.
We will always pay for security. If software vendors have liability costs, they'll pass those on to us. It might not be cheaper than what we're paying today. But as long as we're going to pay, we might as well pay to fix the problem. Forcing the software vendor to pay to fix the problem and then pass those costs on to us means that the problem might actually get fixed.
Liability changes everything. Currently, there is no reason for a software company not to offer feature after feature after feature. Liability forces software companies to think twice before changing something. Liability forces companies to protect the data they're entrusted with. Liability means that those in the best position to fix the problem are actually responsible for the problem.
Information security isn't a technological problem. It's an economics problem. And the way to improve information technology is to fix the economics problem. Do that, and everything else will follow.
This essay originally appeared in Computerworld.
Crypto-Gram is currently in its seventh year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
The Trojan Defense
Why Digital Signatures are Not Signatures
Programming Satan's Computer: Why Computers Are Insecure
Elliptic Curve Public-Key Cryptography
The Future of Fraud: Three reasons why electronic commerce is different
Software Copy Protection: Why copy protection does not work
The World Series is no stranger to security. Fans try to sneak into the ballpark without tickets, or with counterfeit tickets. Often food and alcohol are prohibited from being brought into the ballpark, to enforce the monopoly of the high-priced concessions. Violence is always a risk: both small fights and larger-scale riots that result from fans of both teams being in such close proximity -- like the one that almost happened during the sixth game of the AL series.
Today, the new risk is terrorism. Security at the Olympics cost $1.5 billion, and $50 million each was spent at the Democratic and Republican conventions. There has been no public statement about the security bill for the World Series, but it's reasonable to assume it will be impressive.
In our fervor to defend ourselves, it's important that we spend our money wisely. Much of what people think of as security against terrorism doesn't actually make us safer. Even in a world of high-tech security, the most important solution is the guy watching to keep beer bottles from being thrown onto the field.
Generally, security measures that defend specific targets are wasteful, because they can be avoided simply by switching targets. If we completely defend the World Series from attack, and the terrorists bomb a crowded shopping mall instead, little has been gained.
Even so, some high-profile locations, like national monuments and symbolic buildings, and some high-profile events, like political conventions and championship sporting events, warrant additional security. What additional measures make sense?
ID checks don't make sense. Everyone has an ID. Even the 9/11 terrorists had IDs. What we want is to somehow check intention; is the person going to do something bad? But we can't do that, so we check IDs instead. It's a complete waste of time and money, and does absolutely nothing to make us safer.
Automatic face recognition systems don't work. Computers that automatically pick terrorists out of crowds are a great movie plot device, but they don't work in the real world. We don't have a comprehensive photographic database of known terrorists. Even worse, face recognition technology is so faulty that it often can't make matches even when we do have decent photographs. We tried it at the 2001 Super Bowl; it was a failure.
Airport-like attendee screening doesn't work. The terrorists who took over the Russian school sneaked their weapons in long before their attack. And screening fans is only a small part of the solution. There are simply too many people, vehicles, and supplies moving in and out of a ballpark regularly. This kind of security failed at the Olympics, as reporters proved again and again that they could sneak all sorts of things into the stadiums undetected.
What does work is people: smart security officials watching the crowds. It's called "behavior recognition," and it requires trained personnel looking for suspicious behavior. Does someone look out of place? Is he nervous, and not watching the game? Is he not cheering, hissing, booing, and waving like a sports fan would?
This is what good policemen do all the time. It's what Israeli airport security does. It works because instead of relying on checkpoints that can be bypassed, it relies on the human ability to notice something that just doesn't feel right. It's intuition, and it's far more effective than computerized security solutions.
Will this result in perfect security? Of course not. No security measures are guaranteed; all we can do is reduce the odds. And the best way to do that is to pay attention. A few hundred plainclothes policemen, walking around the stadium and watching for anything suspicious, will provide more security against terrorism than almost anything else we can reasonably do.
And the best thing about policemen is that they're adaptable. They can deal with terrorist threats, and they can deal with more common security issues, too.
Most of the threats at the World Series have nothing to do with terrorism; unruly or violent fans are a much more common problem. And more likely than a complex 9/11-like plot is a lone terrorist with a gun, a bomb, or something that will cause panic. But luckily, the security measures ballparks have already put in place to protect against the former also help protect against the latter.
The new mayor of Madison, Alabama, has a surprisingly sensible attitude about security.
High-school kids are sneaking cell phones past metal detectors.
A prisoner is freed from jail based on a forged fax.
Faxes are fascinating. They're treated like original documents, but lack any of the authentication mechanisms that we've developed for original documents: letterheads, watermarks, signatures. Most of the time there's no problem, but sometimes you can exploit people's innate trust in faxes to good effect.
Here's an alarm system that calls out to other similar systems within 150 meters. Interesting application of the peer-to-peer philosophy to physical alarms.
Bill Gates points out that all those IE security holes have not been Microsoft's fault. (The first step towards recovery is recognizing that you have a problem.)
According to the Associated Press, physical mail from the U.S. to Canada will be rejected unless the complete name and address of the sender are included. The reason: a "response to increased security."
Long article on the security of Windows vs. Linux.
Bruce Schneier is hosting a series of roundtable events, discussing security concerns with CIOs and CISOs. If you would like to receive an invitation to one of these events, please send e-mail to email@example.com. Locations I'll be visiting between now and the end of this year include San Francisco, Sacramento, and Chicago.
Much of the political rhetoric surrounding the US presidential election centers on the relative security posturing of President George W. Bush and Senator John Kerry, with each side loudly proclaiming that his opponent will do irrevocable harm to national security.
Terrorism is a serious issue facing our nation in the early 21st century, and the contrasting views of these candidates are important. But this debate obscures another security risk, one much more central to the US: the increasing centralization of American political power in the hands of the executive branch of the government.
Over 200 years ago, the framers of the US Constitution established an ingenious security device against tyrannical government: they divided government power among three different bodies. A carefully thought-out system of checks and balances among the executive, legislative, and judicial branches ensured that no single branch became too powerful. After watching tyrannies rise and fall throughout Europe, this seemed like a prudent way to form a government.
Since 9/11, the United States has seen an enormous power grab by the executive branch. From denying suspects the right to a trial -- and sometimes to an attorney -- to the law-free zone established at Guantanamo, from deciding which ratified treaties to ignore to flouting laws designed to foster open government, the Bush administration has consistently moved to increase its power at the expense of the rest of the government. The so-called "Torture Memos," prepared at the request of the president, assert that the president can claim unlimited power as long as it is somehow connected with counterterrorism.
Presidential power as a security issue will not play a role in the upcoming US election. Bush has shown through his actions during his first term that he favors increasing the powers of the executive branch over the legislative and the judicial branches. Kerry's words show that he is in agreement with the president on this issue. And largely, the legislative and judicial branches are allowing themselves to be trampled over.
In times of crisis, the natural human reaction is to look for safety in a single strong leader. This is why Bush's rhetoric of strength has been so well-received by the American people, and why Kerry is also campaigning on a platform of strength. Unfortunately, consolidating power in one person is dangerous. History shows again and again that power is a corrupting influence, and that more power is more corrupting. The loss of the American system of checks and balances is more of a security danger than any terrorist risk.
The ancient Roman Senate had a similar way of dealing with major crises. When there was a serious military threat against the safety and security of the Republic, the long debates and compromise legislation that accompanied the democratic process seemed a needless luxury. The Senate would appoint a single person, called a "dictator" (Latin for "one who orders") to have absolute power over Rome in order to more efficiently deal with the crisis. He was appointed for a period of six months or for the duration of the emergency, whichever period was shorter. Sometimes the process worked, but often the injustices that resulted from having a dictator were worse than the original crisis.
Today, the principles of democracy enshrined in the US Constitution are more important than ever. In order to prevail over global terrorism while preserving the values that have made America great, the constitutional system of checks and balances is critical.
This is not a partisan issue; I don't believe that John Kerry, if elected, would willingly lessen his own power any more than second-term President Bush would. What the US needs is a strong Congress and a strong court system to balance the presidency, not weak ones ceding ever more power to the presidency.
Originally published in the Sydney Morning Herald.
Merced County, California, explained why it chose Election Systems & Software (ES&S) as its electronic voting machine vendor. The selection criteria are mostly vague, but this one is quite explicit: "Uses 1,064 bit encryption, not 128 which is less secure."
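Key length is only meaningful relative to a specific algorithm, so comparing "1,064 bit" against 128-bit across unnamed ciphers says nothing. A toy illustration of why (the attacker's trial rate here is a hypothetical assumption, and the model treats the cipher as ideal, i.e., breakable only by brute force):

```python
# For an ideal n-bit symmetric cipher, brute force means about 2**n key
# trials. A 128-bit key is already far beyond any feasible search, so
# a bigger number in a different (or weak) algorithm buys nothing.

def years_to_brute_force(key_bits, trials_per_second):
    """Expected years to search half the keyspace at the given rate."""
    seconds = (2 ** key_bits / 2) / trials_per_second
    return seconds / (60 * 60 * 24 * 365)

# Hypothetical attacker testing a trillion keys per second:
rate = 10 ** 12
print(f"64-bit key:  {years_to_brute_force(64, rate):.3e} years")
print(f"128-bit key: {years_to_brute_force(128, rate):.3e} years")
```

Even at that generous rate, a 128-bit key takes on the order of 10^18 years; the "less secure" 128-bit option was never the weak point.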
This is the website, although they have removed the offending sentence after I blogged it:
The computer security industry is guilty of overhyping and underdelivering. Again and again, it tells customers that they must buy a certain product to be secure. Again and again, they buy the products -- and are still insecure.
Firewalls didn't keep out network attackers -- in fact, the notion of "perimeter" is severely flawed. Intrusion detection systems (IDSs) didn't keep networks safe, and worms and viruses do considerable damage despite the prevalence of antivirus products. It's in this context that I want to evaluate Security Information Management Systems, or SIMS, which promise to solve a serious network problem: log analysis.
Computer logs are a goldmine of security information, containing not just IDS alerts, but messages from firewalls, servers, applications, and other network devices. Your network produces megabytes of these logs every day, and hidden in them are attack footprints. The trick is finding and reacting to them fast enough.
Analyzing log messages can determine how the attacker broke in, what he accessed, whether any backdoors were added, and so on. The idea behind log analysis is that if you can read the log messages in real time, you can figure out what the attacker is doing. And if you can respond fast enough, you can kick him out before he does damage. It's security detection and response. Log analysis works, whether or not you use SIMS.
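The core mechanism can be sketched as pattern matching over a log stream (a toy illustration: the signatures and sample lines below are hypothetical, and real systems use far richer rule sets and correlate events across many devices):

```python
import re

# Hypothetical attack signatures -- real rule sets are much larger.
SIGNATURES = {
    "repeated failed logins": re.compile(r"authentication failure"),
    "web path traversal":     re.compile(r"GET /[^ ]*\.\./"),
    "firewall port probe":    re.compile(r"DENY .* dpt=\d+"),
}

def scan_log_lines(lines):
    """Return (line_number, rule_name) pairs for lines matching a signature."""
    hits = []
    for n, line in enumerate(lines, start=1):
        for name, pattern in SIGNATURES.items():
            if pattern.search(line):
                hits.append((n, name))
    return hits

sample = [
    "sshd[411]: authentication failure; user=root rhost=10.0.0.9",
    "httpd: GET /cgi-bin/../../etc/passwd HTTP/1.0",
    "kernel: DENY IN=eth0 SRC=10.0.0.9 dpt=139",
    "cron[99]: job finished ok",
]
for lineno, rule in scan_log_lines(sample):
    print(f"line {lineno}: {rule}")
```

The matching itself is the easy part; as the essay argues, the hard part is having an expert watch the alerts, separate real attacks from noise, and respond in time.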
Even better, it works against a wide variety of risks. Unlike point solutions, security monitoring is general. Log analysis can detect attackers regardless of their tactics.
But SIMS don't live up to the hype, because they're missing the essential ingredient that so many other computer security products lack: human intelligence. Firewalls often fail because they're configured and maintained improperly. IDSs are often useless because there's no one to respond to their alerts -- or to separate the real attacks from the false alarms. SIMS have the same problem: unless there's a human expert monitoring them, they're not defending anything. The tools are only as effective as the people using them.
SIMS require vigilance: attacks can happen at any time of the day and any day of the year. Consequently, staffing requires five full-time employees; more, if you include supervisors and backup personnel with more specialized skills. Even if an organization could find the budget for all of these people, it would be very difficult to hire them in today's job market. And attacks against a single organization don't happen often enough to keep a team of this caliber engaged and interested.
Back in 1999, I founded Counterpane Internet Security; we sell an outsourced service called Managed Security Monitoring, in which trained security analysts monitor IDS alerts and log messages. Because of the information our analysts receive from the network -- in real time -- as well as their training and expertise, they can detect attacks in progress and provide customers with a level of security they are incapable of achieving otherwise.
When building the Counterpane monitoring service in 1999, we examined log-monitoring appliances from companies like Intellitactics and e-Security. Back then, they weren't anywhere near good enough for us to use, so we developed our own proprietary system. Today, because of the caliber of the human analysts who use the Counterpane system, it's much better than any commercial SIMS. We were able to design it with our expert detection-and-response analysts in mind, and not the general sysadmin market.
The key to network security is people, not products. Piling more security products, such as SIMS, onto our networks won't help. This is why I believe that network security will eventually be outsourced. There's no other cost-effective way to reliably get the experts you need, and therefore no other cost-effective way to reliably get security.
This originally appeared in the September/October 2004 issue of IEEE Security and Privacy Magazine.
Technology makes us safer.
Communications technologies ensure that emergency response personnel can communicate with each other in an emergency--whether police, fire or medical. Bomb-sniffing machines now routinely scan airplane baggage. Other technologies may someday detect contaminants in our water supply or our atmosphere.
Throughout law enforcement and intelligence investigation, different technologies are being harnessed for the good of defense. However, technologies designed to secure specific targets have a limited value.
By its very nature, defense against terrorism means we must be prepared for anything. This makes it expensive--if not nearly impossible--to deploy threat-specific technological advances at all the places where they're likely needed. So while it's good to have bomb-detection devices in airports and bioweapon detectors in crowded subways, defensive technology cannot be applied at every conceivable target for every conceivable threat. If we spent billions of dollars securing airports and the terrorists shifted their attacks to shopping malls, we wouldn't gain any security as a society.
It's far more effective to try to mitigate the general threat. For example, technologies that improve intelligence gathering and analysis could help federal agents quickly chase down information about suspected terrorists. The technologies could help agents more rapidly uncover terrorist plots of any type and aimed at any target, from nuclear plants to the food supply. In addition, technologies that foster communication, coordination and emergency response could reduce the effects of a terrorist attack, regardless of what form the attack takes. We get the most value for our security dollar when we can leverage technology to extend the capabilities of humans.
Just as terrorists can use technology more or less wisely, we as defenders can do the same. It is only by keeping in mind the strengths and limitations of technology that we can increase our security without wasting money, freedoms or civil liberties, and without making ourselves more vulnerable to other threats. Security is a trade-off, and it is important that we use technologies that enable us to make better trade-offs and not worse ones.
This essay was originally published on CNet:
From: "Dosco Jones" <doscojonesearthlink.net>
Subject: Disrupting Air Travel
While the writing was described as 'Arabic,' upon examination it was determined to be in Farsi (the official language of Iran and at least two other countries -- <http://en.wikipedia.org/wiki/Persian_language>). The content was a simple meditative prayer. While written Farsi does use a modified form of the Arabic alphabet, it is not of the Arabic language family. I doubt many native-born Americans know this.
From: "Michael Lambrellis" <mikelambrellishotmail.com>
Subject: Disrupting Air Travel
Leaving a suspicious note on an airplane is a highly effective and low-cost denial-of-service attack. In the current climate, scraps of paper with Arabic script would almost certainly be considered suspicious enough to cause a plane to turn back. The culprit (if found) could even plausibly deny malicious intent by ensuring that the fragment turns out to be a note to a loved one, a recipe for falafel or the address of the Hilton Hotel in Riyadh.
A scrap of paper can't cause any deaths, but it certainly has tangible benefits to the terrorist. It is a classic asymmetric attack, which costs virtually nothing for the perpetrator to commit but incurs large costs for those attacked. Imagine a sustained campaign of such notes left on planes over a period of a month or two. The initial publicity would eventually decay as people become inured to the threat. This would have the eventual effect of making people less likely to report such notes or anything similarly suspicious. The financial cost is large: lost earnings for the airlines and their customers, delays in goods shipped, and reduced custom. How could the airlines possibly respond? Train all pilots in Arabic? Enough false alarms and eventually flight crews will be ordered to studiously ignore such notes.
Unfortunately, just as in the online world where DoS attacks greatly outnumber actual system penetrations, I fully expect DoS-like attacks to eventually outnumber genuine terrorist attacks. The result? An overall drop in the level of wariness in the broader community, and businesses will deal with the financial cost the same way they deal with credit-card fraud -- i.e., factor it in as another insurable risk.
From: Tracy R Reed <treedcopilotconsulting.com>
Subject: Re: CRYPTO-GRAM, October 15, 2004
> Encrypted e-mail client for the Treo:
Being a Treo owner, I read this bit with great interest. Unfortunately, it turns out that they use their own proprietary encryption algorithm which is "21,000" bit and therefore 200 times more secure than 1024-bit SSL. This does not sound encouraging. As you have pointed out many times, proprietary algorithms have a high probability of being insecure, and the fact that they compare their proprietary algorithm with SSL -- without even mentioning whether theirs is public-key or symmetric -- and say it is 200 times stronger suggests that they are not really cryptographers. I would just like to find an e-mail client that supports GPG and SSL IMAP connections for my Treo, and I would be happy.
From: Will Rodger <wrodgerpobox.com>
Subject: Kryptonite Bicycle Locks
There's a bit more to the Kryptonite bicycle lock story than meets the eye. The design they're using now is actually one they've used on other high-end locks in their stable: the disk cylinder (as opposed to disk-style) lock uses a flat key with indentations drilled into its face. It's a completely different design and not vulnerable to the ballpoint pen attack. I can't say how resistant it is to other attacks, though -- you may want to poke around for the sake of fact checking.
Many messenger bikes are parked on the street in front of my office at any one time. Most still use the old-style K locks. Interestingly, an older design of the lock is supposedly not as vulnerable, since the pins bottom out at the end of the keyway instead of where they need to be to open the lock. Maybe the couriers figure they will be out before anyone with a Bic pen can open the lock. Perhaps they know that skilled bike thieves can already break the locks, and so are indifferent if the attack means that they can save 60 seconds with the Bic?
From: "John at Brooksonline" <johnbrooksonline.net>
Subject: Reading License Plates
You may like to know that in the UK some London (Metropolitan) "traffic" cars have had plate scanners built into their radiator grilles for a while. The driver/observer needs to do nothing at all! Each time the scanner sees a plate, it checks whether the vehicle (or registered owner) is wanted for whatever reason, then beeps and puts the details on the in-car terminal. The car in front then gets pulled over.
From: Diane Carlini <dcarlinilexarmedia.com>
Subject: Lexar JumpDrives
I saw your recent posting regarding Lexar and wanted to take a minute to respond. While we appreciate your input, there are a few points to clarify for your readers, as there is confusion around the @stake security advisory.
Specifically, @stake's findings revealed a slight security exposure in scenarios where an experienced hacker could potentially monitor and gain access to the secure area. This was only the case in version 1.0, which included SafeGuard. Lexar's JumpDrive Secure 2.0 device now includes software based on 256-bit AES encryption. With this new product, JumpDrive Secure 2.0 offers the highest level of data protection that is commonly available today.
Registered JumpDrive Secure customers will be contacted to inform them of this Security Advisory found in version 1.
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on security: computer and otherwise. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
You can subscribe, unsubscribe, or change your address on the Web at <http://www.schneier.com/crypto-gram.html>.
You can also subscribe or unsubscribe by sending e-mail to firstname.lastname@example.org. To subscribe, use the word "subscribe" (without quotes) as the subject. To unsubscribe, use the word "unsubscribe" as the subject.
Comments on CRYPTO-GRAM should be sent to email@example.com. Permission to print comments is assumed unless otherwise stated. Comments may be edited for length and clarity.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is the author of the best sellers "Beyond Fear," "Secrets and Lies," and "Applied Cryptography," and an inventor of the Blowfish and Twofish algorithms. He is founder and CTO of Counterpane Internet Security Inc., and is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on security topics. See <http://www.schneier.com>.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide. See <http://www.counterpane.com>.