Schneier on Security
A blog covering security and security technology.
November 2006 Archives
The DHS wants to share terrorist biometric information:
Robert Mocny, acting director of the U.S. Visitor and Immigrant Status Indicator Technology program, outlined a proposal under which the United States would begin exchanging information about terrorists first with closely allied governments in Britain, Europe and Japan, and then progressively extend the program to other countries as a means of foiling terrorist attacks.
Anyone think that this will be any better than the no-fly list?
Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.
The second security problem is similar, but you store your valuables in someone else's safe. Even worse, it's someone you don't trust. He doesn't know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it's still just a part of your overall home security. In the second case, the safe is the only security device you have.
This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.
These systems are difficult to secure, and not just because you give your attacker the device and let him utilize whatever time, equipment and expertise he needs to break it. They're difficult to secure because breaks are generally "class breaks." The expert who figures out how to do it can build hardware -- or write software -- to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.
This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.
I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring -- and actually affecting -- the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer's owner from learning the DRM system's cryptographic keys.
These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these "side-channel attacks," because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
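A toy model can illustrate why data-dependent timing is so dangerous. This sketch is not the RSA attack itself -- `leaky_compare` is a hypothetical non-constant-time check, with a step counter standing in for elapsed time -- but it shows how an attacker who can observe timing recovers a secret one byte at a time:

```python
# Toy side-channel model: a comparison that bails out at the first
# mismatched byte leaks, via its running time, how many leading bytes
# of a guess are correct. "steps" stands in for measured time.

def leaky_compare(secret: bytes, guess: bytes):
    """Return (equal, steps); steps counts matching leading bytes."""
    steps = 0
    for s, g in zip(secret, guess):
        if s != g:
            return False, steps
        steps += 1
    return len(secret) == len(guess), steps

def recover(secret: bytes, length: int) -> bytes:
    """Recover the secret byte by byte by maximizing observed 'time'."""
    known = b""
    for _ in range(length):
        best = max(range(256),
                   key=lambda c: leaky_compare(secret, known + bytes([c]))[1])
        known += bytes([best])
    return known

secret = b"k3y!"
print(recover(secret, len(secret)))  # b'k3y!' -- recovered without guessing all 256^4 keys
```

Instead of brute-forcing the whole key space, the attacker does at most 256 tries per byte -- which is why constant-time implementations matter.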
Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.
Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren't any secrets on the card. Your bank doesn't care that you can read the account number off the front of the card, or the data on the magnetic stripe on the back -- the real data, and the security, are in the bank's databases.
Or compare a DRM system with a financial model that doesn't care about copying. The former is impossible to secure, the latter easy.
While common in digital systems, this kind of security problem isn't limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It's the same problem: the owners of the data on the tickets -- the lottery commission -- tried to keep that data secret from those who had physical control of the tickets. And they failed.
Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn't possible, because there are no secrets on the tickets for an attacker to learn.
Separating data ownership and device ownership doesn't mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker -- with confidence. I'm not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer -- especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.
This essay originally appeared on Wired.com.
EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like stealing end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, inform the customer "sorry, better luck next time," and claim the prize themselves at a later date.
Nice scam, but nothing to do with the point of this essay.
Many countries have the concept of a "notary public." Their training and authority varies from country to country; in the United States, their primary role is to witness the signature of legal documents. Many important legal documents require notarization in addition to a signature, primarily as a security device.
When I get a document notarized, I present my photo ID to a notary public. Generally, I go to my local bank, where many of the employees are notaries public and I don't have to pay a fee for the service. I sign the document while the notary watches, and he then signs an attestation to the fact that he saw me sign it. He doesn't read the document; that's not his job. And then I send my notarized document to whoever needed it: another bank, the patent office, my mortgage company, whatever.
It's an eminently hackable system. Sure, you can always present a fake ID -- I'll bet my bank employee has never seen a West Virginia driver's license, for example -- but that takes work. The easiest way to hack the system is through social engineering.
Bring a small pile of documents to be notarized. In the middle of the pile, slip in a document with someone else's signature. Since he's busy with his own signing and stamping -- and you're engaging him in slightly distracting conversation -- he's probably not going to notice that he's notarizing something "someone else" signed. If he does, apologize for your honest mistake and try again elsewhere.
Of course, you're better off visiting a notary who charges by the document: he'll be more likely to appreciate the stack of documents you've brought to him and less likely to ask questions. And pick a location -- unlike a bank -- that isn't filled with security cameras.
Of course, this won't be enough if the final recipient of the document checks the signature; you're on your own when it comes to forgery. And in my state the notary has to keep a record of the document he signs; this one won't be in his records if he's ever asked. But if you need to switch the deed on a piece of property, change ownership of a bank account, or give yourself power of attorney over someone else, hacking the notary system makes the job a lot easier.
Anyone know how often this kind of thing happens in real life?
Someone goes door-to-door, soliciting contributions to a charity. He prefers a check -- it's safer for you, after all. But he offers his pen for you to sign your check, and the pen is filled with erasable ink. Later, he changes both the payee and the amount, and cashes the check.
This surely isn't a new scam, but it's happening in the UK right now. I've already written about attackers using different solvents to wash ink off checks, but this one is even more basic -- the attacker gives the victim a bad pen to start with.
I thought checks were printed with ink that would also erase, voiding the check. Why does this sort of attack still work?
Earlier this month there was a bioterrorism drill in Seattle. Postal carriers delivered dummy packages to "nearly thousands" of people (yes, that's what the article said; my guess is "nearly a thousand"), testing how the postal system could be used to quickly deliver medications. (Here's a reaction from a recipient.)
Sure, there are lots of scenarios where this kind of delivery system isn't good enough, but that's not the point. In general, I think emergency response is one of the few areas where we need to spend more money. And, in general, I think tests and drills like this are good -- how else will we know if the systems will work the way we think they will?
Last March I wrote that two-factor authentication isn't going to reduce financial fraud or identity theft, that all it will do is force the criminals to change their tactics:
Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.
The solution is not to better authenticate the person, but to authenticate the transaction. (Think credit cards. No one checks your signature. They really don't care if you're you. They maintain security by authenticating the transactions.)
Of course, no one listens to me. U.S. regulators required banks to implement two-factor authentication by the end of this year. But customers are rebelling, and banks are scrambling to figure out something -- anything -- else. And, amazingly enough and purely by accident it seems, they've stumbled on security solutions that actually work:
Instead, to comply with new banking regulations and stem phishing losses, banks and the vendors who serve them are hurriedly putting together multipronged strategies that they say amount to "strong" authentication. The emerging approach generally consists of somehow recognizing a customer's computer, asking additional challenge questions for risky behavior and putting in place back-end fraud detection.
Exactly. That's how you do it.
EDITED TO ADD (12/6): Another example.
There was a profile of me in the St. Paul Pioneer Press on Sunday.
I'm pretty pleased with the article, but this is -- by far -- my favorite line, about Applied Cryptography:
"The first seven or eight chapters you can read without knowing any math at all," Walker said. "The second half of the book you can't export overseas -- it's classified as munitions."
It's not true, of course, but it's a great line.
There's also this in the Providence Journal.
According to The New York Times:
The National Cryptologic Museum, at Fort Meade, Md., home of thousands of code-breaking and code-making artifacts dating back to the 1500s, has acquired a major collection of books on codes and ciphers, the museum said. It was donated by David Kahn, a leading American scholar of cryptology and the author of "The Codebreakers: The Story of Secret Writing." The collection includes "Polygraphiae Libri Sex" (1518) by Johannes Trithemius, the first known printed book on cryptology, along with notes of interviews with modern cryptologists, memos, photocopies and pamphlets. About a dozen items from the collection are currently on display.
I was interviewed on the subject of RFID passports.
I've written about the 2006 Workshop on Economics of Information Security (WEIS); I think it's the most interesting security conference out there.
Interesting article on the history and current search for a drug that compels people to tell the truth:
There is no pharmaceutical compound today whose proven effect is the consistent or predictable enhancement of truth-telling.
Innocent passenger arrested for trying to bring a rubber-band ball onto an airplane.
Woman passes out on plane after her drugs are confiscated.
San Francisco International Airport screeners were warned in advance of undercover test.
And a cartoon.
We have a serious problem in this country. The TSA operates above, and outside, the law. There's no due process, no judicial review, no appeal.
EDITED TO ADD (11/21): And six Muslim imams removed from a plane by US Airways because...well because they're Muslim and that scares people. After they were cleared by the authorities, US Airways refused to sell them a ticket. Refuse to be terrorized, people!
Note that US Airways is the culprit here, not the TSA.
EDITED TO ADD (11/22): Frozen spaghetti sauce confiscated:
You think this is silly, and it is, but a week ago my mother caused a small commotion at a checkpoint at Boston-Logan after screeners discovered a large container of homemade tomato sauce in her bag. What with the preponderance of spaghetti grenades and lasagna bombs, we can all be proud of their vigilance. And, as a liquid, tomato sauce is in clear violation of the Transportation Security Administration's carry-on statutes. But this time, there was a wrinkle: The sauce was frozen.
In the end, the TSA did the right thing and let the woman on with her frozen food.
A new paper describes a timing attack against RSA, one that bypasses existing security measures against these sorts of attacks. The attack described is optimized for the Pentium 4, and is particularly suited for applications like DRM.
Meta moral: If Alice controls the device, and Bob wants to control secrets inside the device, Bob has a very difficult security problem. These "side-channel" attacks -- timing, power, radiation, etc. -- allow Alice to mount some very devastating attacks against Bob's secrets.
A document obtained by EPIC from the State Department reveals that in 2004 government tests, passports with radio frequency identification (RFID) chips were read 27% to 43% less successfully than the previous Machine Readable Zone technology (two lines of text printed at the bottom of the first page of a passport).
I've written about RFID passports before.
Only in the UK.
EDITED TO ADD (11/17): Commentary by Bruce Sterling.
Research paper by Omer Berkman and Odelia Moshe Ostrovsky: "The Unbearable Lightness of PIN Cracking":
Abstract. We describe new attacks on the financial PIN processing API. The attacks apply to switches as well as to verification facilities. The attacks are extremely severe allowing an attacker to expose customer PINs by executing only one or two API calls per exposed PIN. One of the attacks uses only the translate function which is a required function in every switch. The other attacks abuse functions that are used to allow customers to select their PINs online. Some of the attacks can be applied on a switch even though the attacked functions require issuer’s keys which do not exist on a switch. This is particularly disturbing as it was widely believed that functions requiring issuer’s keys cannot do any harm if the respective keys are unavailable.
Basically, the paper describes an inherent flaw with the way ATM PINs are encrypted and transmitted on the international financial networks, making them vulnerable to attack from malicious insiders in a bank.
One of the most disturbing aspects of the attack is that you're only as secure as the least trusted bank on the network. Instead of just having to trust your own issuer bank to have good security against insider fraud, you have to trust every other financial institution on the network as well. An insider at another bank can crack your ATM PIN if you ever withdraw money from that bank's ATMs.
The authors tell me that they've contacted the major credit card companies and banks with this information, and haven't received much of a response. They believe it is now time to alert the public.
Here's a dumb idea: voting from your TiVo.
EPIC on electronic voting machines.
In the world of voting, automatic recount laws are not uncommon. Virginia, where George Allen lost to James Webb in the Senate race by 7,800 votes out of over 2.3 million cast, or 0.33 percent, is an example. If the margin of victory is 1 percent or less, the loser is allowed to ask for a recount. If the margin is 0.5 percent or less, the government pays for it. If the margin is between 0.5 percent and 1 percent, the loser pays for it.
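Those thresholds translate into a simple calculation. In this sketch the function name is mine, and the ~2.37 million vote total is an illustrative approximation (the text says only "over 2.3 million"):

```python
def recount_tier(vote_margin: int, total_votes: int) -> str:
    """Classify a result under Virginia-style recount thresholds:
    margin <= 0.5%: recount at government expense;
    margin <= 1.0%: the loser may request (and pay for) a recount."""
    pct = 100.0 * vote_margin / total_votes
    if pct <= 0.5:
        return "government-funded recount available"
    if pct <= 1.0:
        return "loser-funded recount available"
    return "no recount provision"

# The 2006 Virginia Senate race: ~7,800 votes out of roughly 2.37 million.
print(round(100.0 * 7800 / 2_370_000, 2))   # 0.33 (percent)
print(recount_tier(7800, 2_370_000))        # government-funded recount available
```

At 0.33 percent, Allen's loss fell under the 0.5 percent line, so a state-paid recount was his for the asking.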
We have recounts because vote counting is -- to put it mildly -- sloppy. Americans like their election results fast, before they go to bed at night. So we're willing to put up with inaccuracies in our tallying procedures, and ignore the fact that the numbers we see on television correlate only roughly with reality.
Traditionally, it didn't matter very much, because most voting errors were "random errors."
There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random -- equally likely to happen to anyone. In a close race, random errors won't change the result because votes intended for candidate A that mistakenly go to candidate B happen at the same rate as votes intended for B that mistakenly go to A. (Mathematically, as candidate A's margin of victory increases, random errors slightly decrease it.)
This is why, historically, recounts in close elections rarely change the result. The recount will find the few percent of the errors in each direction, and they'll cancel each other out. In an extremely close election, a careful recount will yield a different result -- but that's a rarity.
The other kind of voting error is a systemic error. These are errors in the voting process -- the voting machines, the procedures -- that cause votes intended for A to go to B at a different rate than the reverse.
An example would be a voting machine that mysteriously recorded more votes for A than there were voters. (Sadly, this kind of thing is not uncommon with electronic voting machines.) Another example would be a random error that only occurs in voting equipment used in areas with strong A support. Systemic errors can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A.
Even worse, systemic errors can introduce errors out of proportion to any actual randomness in the vote-counting process. That is, the closeness of an election is not any indication of the presence or absence of systemic errors.
When a candidate has evidence of systemic errors, a recount can fix a wrong result -- but only if the recount can catch the error. With electronic voting machines, all too often there simply isn't the data: there are no votes to recount.
This year's election in Florida's 13th Congressional District is such an example. The winner won by a margin of 373 out of 237,861 total votes, but as many as 18,000 votes were not recorded by the electronic voting machines. These votes came from areas where the loser was favored over the winner, and would have likely changed the result.
Or imagine this -- as far as we know -- hypothetical situation: After the election, someone discovers rogue software in the voting machines that flipped some votes from A to B. Or someone gets caught vote tampering -- changing the data on electronic memory cards. The problem is that the original data is lost forever; all we have is the hacked vote.
Faced with problems like this, we can do one of two things. We can certify the result anyway, regretful that people were disenfranchised but knowing that we can't undo that wrong. Or, we can tell everyone to come back and vote again.
To be sure, the very idea of revoting is rife with problems. Elections are a snapshot in time -- election day -- and a revote will not reflect that. If Virginia revoted for the Senate this year, the election would not just be for the junior senator from Virginia, but for control of the entire Senate. Similarly, in the 2000 presidential election in Florida, or the 2004 presidential election in Ohio, single-state revotes would have decided the presidency.
And who should be allowed to revote? Should only people in those precincts where there were problems revote, or should the entire election be rerun? In either case, it is certain that more voters will find their way to the polls, possibly changing the demographic and swaying the result in a direction different than that of the initial set of voters. Is that a bad thing, or a good thing?
Should only people who actually voted -- records are kept -- or who could demonstrate that they were erroneously turned away from the polls be allowed to revote? In this case, the revote will almost certainly have fewer voters, as some of the original voters will be unable to vote a second time. That's probably a bad thing -- but maybe it's not.
The only analogy we have for this is the run-off election, required in some jurisdictions when the winning candidate doesn't get 50 percent of the vote. But it's easy to know when you need to have a run-off. Who decides, and based on what evidence, that you need to have a revote?
I admit that I don't have the answers here. They require some serious thinking about elections, and what we're trying to achieve. But smart election security not only tries to prevent vote hacking -- or even systemic electronic voting-machine errors -- it prepares for recovery after an election has been hacked. We have to start discussing these issues now, when they're non-partisan, instead of waiting for the inevitable situation, and the pre-drawn battle lines those results dictate.
This essay originally appeared on Wired.com.
A good idea:
The office of U.S. intelligence czar John Negroponte announced Intellipedia, which allows intelligence analysts and other officials to collaboratively add and edit content on the government's classified Intelink Web much like its more famous namesake on the World Wide Web.
I'll just quote this bit:
Files are encrypted in place using the 524,288 Bit cipher SCC, better know as the king of ciphers.
For reference, here's my snake oil guide from 1999.
Welcome to a surveillance society:
If you want to hire a car at Stansted Airport, you now need to give a fingerprint.
This is the most amusing bit:
"It's not intrusive really. It's different -- and people need to adjust to it. It's not Big Brother, it's about protecting people's identities. The police will never see these thumbprints unless a crime is committed."
What are the odds that no crime will ever be committed?
Fingerprints are becoming more common in the UK:
But regardless of any ideological arguments, the use of biometric technology -- where someone is identified by a physical characteristic -- is already entering the mainstream.
In the U.S., elections are run by an army of hundreds of thousands of volunteers. These are both Republicans and Democrats, and the idea is that the one group watches the other: security by competing interests. But at the top are state-elected or -appointed officials, and many election shenanigans in the past several years have been perpetrated by them.
In yet another New York Times op-ed, Loyola Law School professor Richard Hasen argues for professional, non-partisan election officials:
The United States should join the rest of the world's advanced democracies and put nonpartisan professionals in charge. We need officials whose ultimate allegiance is to the fairness, integrity and professionalism of the election process, not to helping one party or the other gain political advantage. We don't need disputes like the current one in Florida being resolved by party hacks.
To me, this is easier said than done. Where are these hundreds of thousands of disinterested election officials going to come from? And how do we ensure that they're disinterested and fair, and not just partisans in disguise? I actually like security by competing interests.
But I do like his idea of a supermajority-confirmed chief elections officer for each state. And at least he's starting the debate about better election procedures in the U.S.
In a New York Times op-ed, New York University sociology professor Dalton Conley points out that vote counting is inherently inaccurate:
The rub in these cases is that we could count and recount, we could examine every ballot four times over and we'd get -- you guessed it -- four different results. That's the nature of large numbers -- there is inherent measurement error. We'd like to think that there is a "true" answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.
He's right, but it's more complicated than that.
There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random. Votes intended for A that mistakenly go to B are just as likely as votes intended for B that mistakenly go to A. This is why, traditionally, recounts in close elections are unlikely to change things. The recount will find the few percent of the errors in each direction, and they'll cancel each other out. But in a very close election, a careful recount will yield a more accurate -- but almost certainly not perfectly accurate -- result.
Systemic errors are more important, because they will cause votes intended for A to go to B at a different rate than the reverse. Those can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A. These errors can either be a particular problem in the system -- a badly designed ballot, for example -- or a random error that only occurs in precincts where A has more supporters than B.
Here's where the problems of electronic voting machines become critical: they're more likely to be systemic problems. Vote flipping, for example, seems to generally affect one candidate more than another. Even individual machine failures are going to affect supporters of one candidate more than another, depending on where the particular machine is. And if there are no paper ballots to fall back on, no recount can undo these problems.
Conley proposes to nullify any election where the margin of victory is less than 1%, and have everyone vote again. I agree, but I think his margin is too large. In the Virginia Senate race, Allen was right not to demand a recount. Even though his 7,800-vote loss was only 0.33%, in the absence of systemic flaws it is unlikely that a recount would change things. I think an automatic revote if the margin of victory is less than 0.1% makes more sense.
Yes, it costs more to run an election twice, but keep in mind that many places already use runoffs when the leading candidate fails to cross a particular threshold. If we are willing to go through all that trouble, why not do the same for certainty in an election that teeters on a razor's edge? One counter-argument is that such a plan merely shifts the realm of debate and uncertainty to a new threshold -- the 99 percent threshold. However, candidates who lose by the margin of error have a lot less rhetorical power to argue for redress than those for whom an actual majority is only a few votes away.
This is a good idea, but it doesn't address the systemic problems with voting. If there are systemic problems, there should be another election day limited to only those precincts that had the problem and only those people who can prove they voted -- or tried to vote and failed -- during the first election day. (Although I could be persuaded that another re-voting protocol would make more sense.)
But most importantly, we need better voting machines and better voting procedures.
EDITED TO ADD (11/17): I mischaracterized Conley's position. He says that there should be a revote when the margin of victory falls within the margin of error, not when it is less than 1 percent:
In terms of a two-candidate race in which each has attained around 50 percent of the vote, a 1 percent margin of error would be represented by 1.29 divided by the square root of the number of votes
That's a really good system, although it will be impossible to explain to the general public.
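Plugging the essay's own numbers into the quoted rule of thumb shows how it would apply. The decision function below is an assumption on my part -- the op-ed, as quoted, doesn't spell out the exact test -- but the comparison is straightforward:

```python
import math

def conley_margin_of_error(total_votes: int) -> float:
    """The quoted rule of thumb: measurement error on a roughly 50%
    vote share is about 1.29 / sqrt(number of votes), expressed as a
    fraction of the vote (roughly a 99% confidence half-width)."""
    return 1.29 / math.sqrt(total_votes)

def within_measurement_error(vote_margin: int, total_votes: int) -> bool:
    """Hypothetical decision test: revote if the observed margin of
    victory, as a fraction, is smaller than the measurement error."""
    return vote_margin / total_votes < conley_margin_of_error(total_votes)

# Virginia 2006: a 7,800-vote margin out of ~2.37 million is well
# outside the measurement error, so no revote under this rule.
print(within_measurement_error(7800, 2_370_000))   # False

# Florida's 13th District: 373 votes out of 237,861 falls inside it.
print(within_measurement_error(373, 237_861))      # True
```

By this test, Allen's Virginia loss was statistically decisive while the Florida 13 result was not -- consistent with the conclusions above.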
This year I wrote an essay for Forbes.com. It's really nothing that I, and others, haven't said before.
Florida 13 is turning out to be a bigger problem than I described:
The Democrat, Christine Jennings, lost to her Republican opponent, Vern Buchanan, by just 373 votes out of a total 237,861 cast -- one of the closest House races in the nation. More than 18,000 voters in Sarasota County, or 13 percent of those who went to the polls Tuesday, did not seem to vote in the Congressional race when they cast ballots, a discrepancy that Kathy Dent, the county elections supervisor, said she could not explain.
There'll be a recount, and with that close a margin it's pretty random who will eventually win. But because so many votes were not recorded -- and I don't see how anyone who has any understanding of statistics can look at this data and not conclude that votes were not recorded -- we'll never know who should really win this district.
In Pennsylvania, the Republican State Committee is asking the Secretary of State to impound voting machines because of potential voting errors:
Pennsylvania GOP officials claimed there were reports that some machines were changing Republican votes to Democratic votes. They asked the state to investigate and said they were not ruling out a legal challenge.
RedState.com describes some of the problems:
RedState is getting widespread reports of an electoral nightmare shaping up in Pennsylvania with certain types of electronic voting machines.
I'm happy to see a Republican at the receiving end of the problems.
Actually, that's not true. I'm not happy to see anyone at the receiving end of voting problems. But I am sick and tired of this being perceived as a partisan issue, and I hope some high-profile Republican losses that might be attributed to electronic voting-machine malfunctions (or even fraud) will change that perception. This is a serious problem that affects everyone, and it is in everyone's interest to fix it.
FL-13 was the big voting-machine disaster, but there were other electronic voting-machine problems reported:
The types of machine problems reported to EFF volunteers were wide-ranging in both size and scope. Polls opened late for machine-related reasons in polling places throughout the country, including Ohio, Florida, Georgia, Virginia, Utah, Indiana, Illinois, Tennessee, and California. In Broward County, Florida, voting machines failed to start up at one polling place, leaving some citizens unable to cast votes for hours. EFF and the Election Protection Coalition sought to keep the polling place open late to accommodate voters frustrated by the delays, but the officials refused. In Utah County, Utah, more than 100 precincts opened one to two hours late on Tuesday due to problems with machines. Both county and state election officials refused to keep polling stations open longer to make up for the lost time, and a judge also turned down a voter's plea for extended hours brought by EFF.
And there's this election for mayor, where one of the candidates received zero votes -- even though that candidate is sure he voted for himself.
ComputerWorld is also reporting problems across the country, as is The New York Times. Avi Rubin, whose writings on electronic voting security are always worth reading, writes about a problem he witnessed in Maryland:
The voter had made his selections and pressed the "cast ballot" button on the machine. The machine spit out his smartcard, as it is supposed to do, but his summary screen remained, and it did not appear that his vote had been cast. So, he pushed the smartcard back in, and it came out saying that he had already voted. But, he was still in the screen that showed he was in the process of voting. The voter then pressed the "cast ballot" button again, and an error message appeared on the screen that said he needed to call a judge for assistance. The voter was very patient, but was clearly taking this very seriously, as one would expect.

After discussing the details about what happened with him very carefully, I believed that there was a glitch with his machine, and that it was in an unexpected state after it spit out the smartcard. The question we had to figure out was whether or not his vote had been recorded. The machine said that there had been 145 votes cast. So, I suggested that we count the voter authority cards in the envelope attached to the machine. Since we were grouping them into bundles of 25 throughout the day, that was pretty easy, and we found that there were 146 authority cards. So, this meant that either his vote had not been counted, or that the count was off for some other reason. Considering that the count on that machine had been perfect all day, I thought that the most likely thing is that this glitch had caused his vote not to count.

Unfortunately, while this was going on, all the other voters had left, other election judges had taken down and put away the e-poll books, and we had no way to encode a smartcard for him. We were left with the possibility of having the voter vote on a provisional ballot, which is what he did. He was gracious, and understood our predicament.
How many hundreds of these stories do we need before we conclude that electronic voting machines aren't accurate enough for elections?
On the plus side, the FL-13 problems have convinced some previous naysayers in that district:
Supervisor of Elections Kathy Dent now says she will comply with voters who want a new voting system -- one that produces a paper trail.... Her announcement Friday marks a reversal for the elections supervisor, who had promoted and adamantly defended the touch-screen system the county purchased for $4.5 million in 2001.
One of the dumber comments I hear about electronic voting goes something like this: "If we can secure multi-million-dollar financial transactions, we should be able to secure voting." Most financial security comes through audit: names are attached to every transaction, and transactions can be unwound if there are problems. Voting requires an anonymous ballot, which means that most of our anti-fraud systems from the financial world don't apply to voting. (I first explained this back in 2001.)
In Minnesota, we use paper ballots counted by optical scanners, and we have some of the most well-run elections in the country. To anyone reading this who needs to buy new election equipment, this is what to buy.
On the other hand, I am increasingly of the opinion that an all mail-in election -- like Oregon has -- is the right answer. Yes, there are authentication issues with mail-in ballots, but these are issues we have to solve anyway, as long as we allow absentee ballots. And yes, there are vote-buying issues, but almost everyone considers them to be secondary. The combined benefits of 1) a paper ballot, 2) no worries about long lines due to malfunctioning or insufficient machines, 3) increased voter turnout, and 4) a dampening of the last-minute campaign frenzy make Oregon's election process very appealing.
Last week in Florida's 13th Congressional district, the victory margin was only 386 votes out of 153,000. There'll be a mandatory lawyered-up recount, but it won't include the almost 18,000 votes that seem to have disappeared. The electronic voting machines didn't include them in their final tallies, and there's no backup to use for the recount. The district will pick a winner to send to Washington, but it won't be because they are sure the majority voted for him. Maybe the majority did, and maybe it didn't. There's no way to know.
Electronic voting machines represent a grave threat to fair and accurate elections, a threat that every American -- Republican, Democrat or independent -- should be concerned about. Because they're computer-based, the deliberate or accidental actions of a few can swing an entire election. The solution: Paper ballots, which can be verified by voters and recounted if necessary.
To understand the security of electronic voting machines, you first have to consider election security in general. The goal of any voting system is to capture the intent of each voter and collect them all into a final tally. In practice, this occurs through a series of transfer steps. When I voted last week, I transferred my intent onto a paper ballot, which was then transferred to a tabulation machine via an optical scan reader; at the end of the night, the individual machine tallies were transferred by election officials to a central facility and combined into a single result I saw on television.
All election problems are errors introduced at one of these steps, whether it's voter disenfranchisement, confusing ballots, broken machines or ballot stuffing. Even in normal operations, each step can introduce errors. Voting accuracy, therefore, is a matter of 1) minimizing the number of steps, and 2) increasing the reliability of each step.
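A back-of-the-envelope calculation shows why both factors matter: end-to-end accuracy is roughly the product of per-step reliabilities, so every extra transfer step compounds the error. The numbers below are illustrative assumptions, not measurements of any real voting system.

```python
# Hypothetical per-step reliabilities for a vote's journey from intent to tally.
# These figures are made up for illustration only.
steps = {
    "voter marks ballot": 0.995,
    "optical scanner reads ballot": 0.999,
    "machine totals transferred to officials": 0.9995,
    "central tabulation": 0.9999,
}

end_to_end = 1.0
for step, reliability in steps.items():
    end_to_end *= reliability

print(f"end-to-end accuracy: {end_to_end:.4f}")

# Adding one more transfer step -- even a fairly reliable one -- lowers the total:
with_extra_step = end_to_end * 0.999
assert with_extra_step < end_to_end
```

Fewer steps, or more reliable steps, are the only two levers; the multiplication works against you either way.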
Much of our election security is based on "security by competing interests." Every step, with the exception of voters completing their single anonymous ballots, is witnessed by someone from each major party; this ensures that any partisan shenanigans -- or even honest mistakes -- will be caught by the other observers. This system isn't perfect, but it's worked pretty well for a couple hundred years.
Electronic voting is like an iceberg; the real threats are below the waterline where you can't see them. Paperless electronic voting machines bypass that security process, allowing a small group of people -- or even a single hacker -- to affect an election. The problem is software -- programs that are hidden from view and cannot be verified by a team of Republican and Democrat election judges, programs that can drastically change the final tallies. And because all that's left at the end of the day are those electronic tallies, there's no way to verify the results or to perform a recount. Recounts are important.
This isn't theoretical. In the U.S., there have been hundreds of documented cases of electronic voting machines distorting the vote to the detriment of candidates from both political parties: machines losing votes, machines swapping the votes for candidates, machines registering more votes for a candidate than there were voters, machines not registering votes at all. I would like to believe these are all mistakes and not deliberate fraud, but the truth is that we can't tell the difference. And these are just the problems we've caught; it's almost certain that many more problems have escaped detection because no one was paying attention.
This is both new and terrifying. For the most part, and throughout most of history, election fraud on a massive scale has been hard; it requires very public actions or a highly corrupt government -- or both. But electronic voting is different: a lone hacker can affect an election. He can do his work secretly before the machines are shipped to the polling stations. He can affect an entire area's voting machines. And he can cover his tracks completely, writing code that deletes itself after the election.
And that assumes well-designed voting machines. The actual machines being sold by companies like Diebold, Sequoia Voting Systems and Election Systems & Software are much worse. The software is badly designed. Machines are "protected" by hotel minibar keys. Vote tallies are stored in easily changeable files. Machines can be infected with viruses. Some voting software runs on Microsoft Windows, with all the bugs and crashes and security vulnerabilities that introduces. The list of inadequate security practices goes on and on.
The voting machine companies counter that such attacks are impossible because the machines are never left unattended (they're not), the memory cards that hold the votes are carefully controlled (they're not), and everything is supervised (it isn't). Yes, they're lying, but they're also missing the point.
We shouldn't -- and don't -- have to accept voting machines that might someday be secure only if a long list of operational procedures is followed precisely. We need voting machines that are secure regardless of how they're programmed, handled and used, and that can be trusted even if they're sold by a partisan company, or a company with possible ties to Venezuela.
Sounds like an impossible task, but in reality, the solution is surprisingly easy. The trick is to use electronic voting machines as ballot-generating machines. Vote by whatever automatic touch-screen system you want: a machine that keeps no records or tallies of how people voted, but only generates a paper ballot. The voter can check it for accuracy, then process it with an optical-scan machine. The second machine provides the quick initial tally, while the paper ballot provides for recounts when necessary. And absentee and backup ballots can be counted the same way.
You can even do away with the electronic vote-generation machines entirely and hand-mark your ballots like we do in Minnesota. Or run a 100% mail-in election like Oregon does. Again, paper ballots are the key.
Paper? Yes, paper. A stack of paper is harder to tamper with than a number in a computer's memory. Voters can see their vote on paper, regardless of what goes on inside the computer. And most important, everyone understands paper. We get into hassles over our cellphone bills and credit card mischarges, but when was the last time you had a problem with a $20 bill? We know how to count paper. Banks count it all the time. Both Canada and the U.K. count paper ballots with no problems, as do the Swiss. We can do it, too. In today's world of computer crashes, worms and hackers, a low-tech solution is the most secure.
Secure voting machines are just one component of a fair and honest election, but they're an increasingly important part. They're where a dedicated attacker can most effectively commit election fraud (and we know that changing the results can be worth millions). But we shouldn't forget other voter suppression tactics: telling people the wrong polling place or election date, taking registered voters off the voting rolls, having too few machines at polling places, or making it onerous for people to register. (Oddly enough, ineligible people voting isn't a problem in the U.S., despite political rhetoric to the contrary; every study shows their numbers to be so small as to be insignificant. And photo ID requirements actually cause more problems than they solve.)
Voting is as much a perception issue as it is a technological issue. It's not enough for the result to be mathematically accurate; every citizen must also be confident that it is correct. Around the world, people protest or riot after an election not when their candidate loses, but when they think their candidate lost unfairly. It is vital for a democracy that an election both accurately determine the winner and adequately convince the loser. In the U.S., we're losing the perception battle.
The current crop of electronic voting machines fail on both counts. The results from Florida's 13th Congressional district are neither accurate nor convincing. As a democracy, we deserve better. We need to refuse to vote on electronic voting machines without a voter-verifiable paper ballot, and to continue to pressure our legislatures to implement voting technology that works.
This essay originally appeared on Forbes.com.
Avi Rubin wrote a good essay on voting for Forbes as well.
"It was 50-70 centimeters (19.5-27.5 inches) in diameter and looked like a huge beach ball. It was transparent but had a kind of thick, red cord in the middle. It was a bit science-fiction," Svensen told newspaper Bergens Tidende's web site.
Alice, Bob, and Eve. (I get a mention, too.)
Good essay on data mining.
EDITED TO ADD (11/9): Slashdot thread.
Slides (and more information) from a talk given by K.A. Taipale to the Committee on Policy Consequences and Legal/Ethical Implications of Offensive Information Warfare, at the National Academies, last week.
At the request of the Department of Homeland Security, a group called The Conference Board completed a study about senior management and their perceptions of IT security. The results aren't very surprising.
Most C-level executives view security as an operational issue -- kind of like facilities management -- and not as a strategic one. As such, they don't have direct responsibility for security.
Such attitudes about security have caused many organizations to distance their security teams from other parts of the business as well. "Security directors appear to be politically isolated within their companies," Cavanagh says. Security pros often do not talk to business managers or other departments, he notes, so they don't have many allies in getting their message across to upper management.
What to do? The report has some suggestions, the same ones you can hear at any security conference anywhere.
Security managers need to reach out more aggressively to other areas of the business to help them make their case, Cavanagh says. "Risk managers are among the best potential allies," he observes, because they are usually tasked with measuring the financial impact of various threats and correlating them with the likelihood that those threats will happen.
I guess it's more confirmation of the conventional wisdom.
The full report is available, but it costs $125 if you're something called a Conference Board associate, and $495 if you're not. But my guess is that you've already heard everything that's in it.
Foxtrot on e-voting.
Remember to vote, everyone (in the US). If you don't, there's no chance your vote will be counted correctly.
It's easy to skim personal information off an RFID credit card.
From The New York Times:
They could skim and store the information from a card with a device the size of a couple of paperback books, which they cobbled together from readily available computer and radio components for $150. They say they could probably make another one even smaller and cheaper: about the size of a pack of gum for less than $50. And because the cards can be read even through a wallet or an item of clothing, the security of the information, the researchers say, is startlingly weak. 'Would you be comfortable wearing your name, your credit card number and your card expiration date on your T-shirt?' Mr. Heydt-Benjamin, a graduate student, asked.
And from The Register:
The attack uses off-the-shelf radio and card reader equipment that could cost as little as $150. Although the attack fails to yield verification codes normally needed to make online purchases, it would still be potentially possible for crooks to use the data to order goods and services from online stores that don't request this information.
And from the RFID Journal:
I don't think the exposing of potential vulnerabilities of these cards is a huge black eye for the credit-card industry or for the RFID industry. Millions of people won't suddenly have their credit-card numbers exposed to thieves the way they do when someone hacks a bank's database or an employee loses a laptop with the card numbers on it. But it is likely that these vulnerabilities will need to be addressed as the technology becomes more mature and criminals start figuring out ways to abuse it.
Seagate has announced a product called DriveTrust, which provides hardware-based encryption on the drive itself. The technology is proprietary, but they use standard algorithms: AES and triple-DES, RSA, and SHA-1. Details on the key management are sketchy, but the system requires a pre-boot password and/or combination of biometrics to access the disk. And Seagate is working on some sort of enterprise-wide key management system to make it easier to deploy the technology company-wide.
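Since the key-management details are sketchy, here's a generic stdlib-only sketch of how pre-boot-password schemes typically work: the unlock key is derived from the passphrase rather than stored anywhere, so a wrong passphrase simply yields a key that decrypts nothing. This uses PBKDF2 as an illustrative stand-in; it is an assumption about the general technique, not Seagate's actual design, and all parameters here are hypothetical.

```python
import hashlib
import os

def derive_unlock_key(passphrase: str, salt: bytes, iterations: int = 200_000) -> bytes:
    # Stretch the passphrase into a 256-bit key, e.g. for use with AES-256.
    # The iteration count slows down brute-force guessing of the passphrase.
    return hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, iterations)

salt = os.urandom(16)  # stored in the clear on the drive alongside the ciphertext
key = derive_unlock_key("correct horse battery staple", salt)
wrong = derive_unlock_key("correct horse battery staples", salt)

assert len(key) == 32      # 256 bits, suitable as an AES-256 key
assert key != wrong        # a wrong passphrase yields a useless key, not an error
```

In a real drive, this derived key would typically unwrap a randomly generated disk-encryption key, so the passphrase can be changed without re-encrypting the whole disk.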
The first target market is laptop computers. No computer manufacturer has announced support for DriveTrust yet.
On August 18 of last year, the Zotob worm badly infected computers at the Department of Homeland Security, particularly the 1,300 workstations running the US-VISIT application at border crossings. Wired News filed a Freedom of Information Act request for details, which was denied.
After we sued, CBP released three internal documents, totaling five pages, and a copy of Microsoft's security bulletin on the plug-and-play vulnerability. Though heavily redacted, the documents were enough to establish that Zotob had infiltrated US-VISIT after CBP made the strategic decision to leave the workstations unpatched. Virtually every other detail was blacked out. In the ensuing court proceedings, CBP claimed the redactions were necessary to protect the security of its computers, and acknowledged it had an additional 12 documents, totaling hundreds of pages, which it withheld entirely on the same grounds.
The released documents say nothing about the technical details of the computer systems; they point only to the incompetence of the DHS in handling the incident.
Details are in the Wired News article.
I simply don't have the physics background to evaluate this:
Scheuer and Yariv's concept for key distribution involves establishing a laser oscillation between the two users, who each decide how to reflect the light at their end by choosing one of three mirrors that peak at different frequencies.
But this quote gives me pause:
Although users can't easily detect an eavesdropper here, the system increases the difficulty of eavesdropping "almost arbitrarily," making detecting eavesdroppers almost unnecessary.
EDITED TO ADD (11/6): Here's the paper.
It's yet another massive government surveillance program:
US Customs and Border Protection issued a notice in the Federal Register yesterday which detailed the agency's massive database that keeps risk assessments on every traveler entering or leaving the country. Citizens who are concerned that their information is inaccurate are all but out of luck: the system "may not be accessed under the Privacy Act for the purpose of contesting the content of the record."
This means you can't review your data for accuracy, and you can't correct any errors.
But the system can be used to give you a risk assessment score, which presumably will affect how you're treated when you return to the U.S.
I've already explained why data mining does not find terrorists or terrorist plots. So have actual math professors. And we've seen this kind of "risk assessment score" idea and the problems it causes with Secure Flight.
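The core of that argument is the base-rate problem: when what you're searching for is extraordinarily rare, even a very accurate test buries the real hits under false alarms. A quick Bayes'-theorem calculation makes the point; the numbers below are illustrative assumptions, not real figures.

```python
# Illustrative numbers only: a hypothetical screening system that is 99%
# accurate both ways, applied to a population with very few actual terrorists.
population = 300_000_000
terrorists = 1_000               # an assumption, and almost certainly generous
true_positive_rate = 0.99        # P(flagged | terrorist)
false_positive_rate = 0.01       # P(flagged | innocent)

flagged_guilty = terrorists * true_positive_rate
flagged_innocent = (population - terrorists) * false_positive_rate

p_terrorist_given_flag = flagged_guilty / (flagged_guilty + flagged_innocent)
print(f"innocents flagged: {flagged_innocent:,.0f}")
print(f"P(terrorist | flagged) = {p_terrorist_given_flag:.5f}")
```

Even with 99% accuracy, millions of innocents get flagged and well under one flag in a thousand is a real hit: exactly the problem a "risk assessment score" inherits.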
This needs some mainstream press attention.
EDITED TO ADD (11/5): It's buried in the back pages, but at least The Washington Post wrote about it.
You can't make this stuff up:
A retired veteran and candidate for Oklahoma State School Superintendent says he wants to make schools safer by creating bulletproof textbooks.
Can you just imagine the movie-plot scenarios going through his head? Does he really think this is a smart way to spend security dollars?
I just shake my head in wonder....
I've written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here's a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, psychology professor at Harvard. (I just recently finished his book Stumbling on Happiness, which is not a self-help book but instead about how the brain works. Strongly recommended.)
The op-ed is about the public's reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:
It's interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:
EDITED TO ADD (11/2): Here are some additional resources. "E-Voting: State by State," a guide to e-voting vendors, and a review of HBO's "Hacking Democracy" documentary. Also, a debate from The Wall Street Journal on electronic voting, and an Ars Technica article on this year's problems with electronic voting.
EDITED TO ADD (11/2): Another review of the documentary.
CEO arrested for stealing the identities of his employees:
Terrence D. Chalk, 44, of White Plains was arraigned in federal court in White Plains, along with his nephew, Damon T. Chalk, 35, after an FBI investigation turned up the curious lending and spending habits. The pair are charged with submitting some $1 million worth of credit applications using the names and personal information -- names, addresses and social security numbers -- of some of Compulinx's 50 employees. According to federal prosecutors, the employees' information was used without their knowledge; the Chalks falsely represented to the lending institutions, in writing and in face-to-face meetings, that the employees were actually officers of the company.
Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It resulted in calls for his arrest -- the most visible by Rep. Edward Markey (D-Massachusetts) -- who has since recanted. And it's gotten him more publicity than he ever dreamed of.
All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.
This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it as well. I wrote about it in the August 2003 issue of Crypto-Gram. It's possible I was the first person to publish it, but I certainly wasn't the first person to think of it.
It's kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.
You can also use a fake boarding pass to fly on someone else's ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real ticket in someone else's name to board the plane.
This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else's name, perhaps using a stolen credit card, and uses his own photo ID and a fake ticket to get through airport security. Since the ticket is in an innocent's name, it won't raise a flag on the no-fly list.
You can also use a fake boarding pass instead of your real one if you have the "SSSS" mark and want to avoid secondary screening, or if you don't have a ticket but want to get into the gate area.
Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.
Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian's website did was automate the process with a single airline's boarding passes.
Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don't think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?
As I wrote in 2005: "The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler's identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record -- the third leg of the triangle -- it's possible to create two different boarding passes and have no one notice. That's why the attack works."
The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass already matched to the photo ID also matched the data in the computer. Close the authentication triangle and the vulnerability disappears.
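A small sketch makes the triangle logic concrete: with today's two pairwise checks, the two-boarding-pass attack passes; add the third comparison (boarding pass against the computer record at the checkpoint) and it fails. The function and variable names here are hypothetical, purely for illustration.

```python
def checkpoint_ok(photo_id_name: str, pass_shown: str) -> bool:
    # Security checkpoint today: boarding pass compared with the traveler's ID.
    return pass_shown == photo_id_name

def gate_ok(pass_shown: str, record_name: str) -> bool:
    # Boarding gate: boarding pass compared with the computer record.
    return pass_shown == record_name

# The attack: a fake pass in the traveler's real name, plus the real pass
# in the name the reservation is under.
photo_id = "Eve Attacker"
record = "Alice Innocent"                        # ticket bought in another name
fake_pass, real_pass = "Eve Attacker", "Alice Innocent"

# Current system: show the fake pass at the checkpoint, the real one at the gate.
# Both checks pass, because no one ever compares the ID with the record.
assert checkpoint_ok(photo_id, fake_pass) and gate_ok(real_pass, record)

def closed_triangle_ok(photo_id_name: str, pass_shown: str, record_name: str) -> bool:
    # Closing the triangle: ID, boarding pass and record must all agree.
    return pass_shown == photo_id_name == record_name

# With the triangle closed, neither pass gets the attacker through.
assert not closed_triangle_ok(photo_id, fake_pass, record)
assert not closed_triangle_ok(photo_id, real_pass, record)
```

The fix is just the one extra equality check; the hard part is deploying readers that can query the reservation database at every checkpoint.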
But before we start spending time and money and Transportation Security Administration agents, let's be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren't so easy to circumvent. Identification is not a useful security measure here.
Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.
Unlike every other airplane security measure -- including reinforcing cockpit doors, which could have prevented 9/11 -- the airlines didn't resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: "Round trip, New York to Los Angeles, 11/21-30, male, $100." Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.
So business is why we have the photo ID requirement in the first place, and business is why it's so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let's focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where's the TSA's response to all this?
The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we've got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.
This essay -- my 30th for Wired.com -- appeared today.
EDITED TO ADD (11/4): More news and commentary.
EDITED TO ADD (1/10): Great essay by Matt Blaze.
Does this surprise anyone?
While keylogging software, phishing e-mails that impersonate official bank messages and hackers who break into customer databases may dominate headlines, more than 90% of identity fraud starts off conventionally, with stolen bank statements, misplaced passwords or other similar means, according to Javelin Strategy & Research.
The Data Privacy and Integrity Advisory Committee of the Department of Homeland Security recommended against putting RFID chips in identity cards. It's only a draft report, but what it says is so controversial that a vote on the final report is being delayed.