Blog: November 2006 Archives

Global Envelope

The DHS wants to share terrorist biometric information:

Robert Mocny, acting director of the U.S. Visitor and Immigrant Status Indicator Technology program, outlined a proposal under which the United States would begin exchanging information about terrorists first with closely allied governments in Britain, Europe and Japan, and then progressively extend the program to other countries as a means of foiling terrorist attacks.

The Global Envelope proposal apparently opened the door to the exchange of biometric information about persons in this country to other governments and vice versa, in an environment where even officials’ pledges to observe privacy principles collide with inconsistent or absent legal protections.

In remarks to the International Conference on Biometrics and Ethics in Washington this afternoon, Mocny repeatedly stressed DHS’ commitment to observing privacy principles during the design and implementation of its biometric systems. “We have a responsibility to use this information wisely and responsibly,” he said.

Mocny cited the need to avoid duplication of effort by developing technical standards that all national biometric identification systems would use.

He emphasized repeatedly that information sharing is appropriate around the world on biometric methods of identifying terrorists who pose a risk to the public. Noting that his organization already receives information about terrorist threats from around the globe, Mocny said, “We have a responsibility to make a Global Security Envelope [that would coordinate information policies and technical standards.]”

Mocny conceded that each of the 10 privacy laws currently in effect in the United States has an exemption clause for national-security purposes. He added that the department only resorts to its essentially unlimited authority under those clauses when officials decide that there are compelling reasons to do so.

Anyone think that this will be any better than the no-fly list?

Posted on November 30, 2006 at 12:51 PM | 30 Comments

Separating Data Ownership and Device Ownership

Consider two different security problems. In the first, you store your valuables in a safe in your basement. The threat is burglars, of course. But the safe is yours, and the house is yours, too. You control access to the safe, and probably have an alarm system.

The second security problem is similar, but you store your valuables in someone else’s safe. Even worse, it’s someone you don’t trust. He doesn’t know the combination, but he controls access to the safe. He can try to break in at his leisure. He can transport the safe anyplace he needs to. He can use whatever tools he wants. In the first case, the safe needs to be secure, but it’s still just a part of your overall home security. In the second case, the safe is the only security device you have.

This second security problem might seem contrived, but it happens regularly in our information society: Data controlled by one person is stored on a device controlled by another. Think of a stored-value smart card: If the person owning the card can break the security, he can add money to the card. Think of a DRM system: Its security depends on the person owning the computer not being able to get at the insides of the DRM security. Think of the RFID chip on a passport. Or a postage meter. Or SSL traffic being sent over a public network.

These systems are difficult to secure, and not just because you give your attacker the device and let him use whatever time, equipment and expertise he needs to break it. They’re also difficult to secure because breaks are generally “class breaks.” The expert who figures out how to do it can build hardware—or write software—to do it automatically. Only one person needs to break a given DRM system; the software can break every other device in the same class.

This means that the system needs to be secure not against the average attacker, but against the smartest, most motivated and best-funded attacker.

I was reminded of this problem earlier this month, when researchers announced a new attack (.pdf) against implementations of the RSA cryptosystem. The attack exploits the fact that different operations take different times on modern CPUs. By closely monitoring—and actually affecting—the CPU during an RSA operation, an attacker can recover the key. The most obvious applications for this attack are DRM systems that try to use a protected partition in the CPU to prevent the computer’s owner from learning the DRM system’s cryptographic keys.

These sorts of attacks are not new. In 1995, researchers discovered they could recover cryptographic keys by comparing relative timings on chips. In later years, both power and radiation were used to break cryptosystems. I called these “side-channel attacks,” because they made use of information other than the plaintext and ciphertext. And where are they most useful? To recover secrets from smart cards.
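Here is a toy sketch of the basic idea (it is not the RSA attack from the paper; the secret, the early-exit comparison and the timing harness are all invented for illustration): a routine that bails out at the first wrong byte runs measurably longer the more leading bytes of a guess are correct, and a constant-time comparison removes that signal.

import hmac
import time

SECRET = b"hunter2!"  # hypothetical secret held inside a device

def leaky_compare(guess: bytes) -> bool:
    """Early-exit comparison: runtime grows with the number of correct leading bytes."""
    if len(guess) != len(SECRET):
        return False
    for a, b in zip(guess, SECRET):
        if a != b:
            return False  # bails out at the first mismatch, leaking its position via time
    return True

def constant_time_compare(guess: bytes) -> bool:
    """Countermeasure: examine every byte no matter where the first mismatch is."""
    return hmac.compare_digest(guess, SECRET)

def time_it(fn, guess: bytes, runs: int = 200_000) -> float:
    start = time.perf_counter()
    for _ in range(runs):
        fn(guess)
    return time.perf_counter() - start

if __name__ == "__main__":
    for guess in (b"xxxxxxxx", b"huntxxxx", b"hunter2x"):
        print(guess, f"leaky={time_it(leaky_compare, guess):.3f}s",
              f"constant={time_it(constant_time_compare, guess):.3f}s")

On a quiet machine the leaky timings step upward as the guess gets closer while the constant-time ones stay flat; real side-channel attacks refine this crude observation with statistics, power traces or, as in the paper above, branch-prediction behavior.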

Whenever I see security systems with this data/device separation, I try to solve the security problem by removing the separation. This means completely redesigning the system and the security assumptions behind it.

Compare a stored-value card with a debit card. In the former case, the card owner can create money by changing the value on the card. For this system to be secure, the card needs to be protected by a variety of security countermeasures. In the latter case, there aren’t any secrets on the card. Your bank doesn’t care that you can read the account number off the front of the card, or the data off the magnetic stripe on the back—the real data, and the security, are in the bank’s databases.
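A minimal sketch of the difference, with hypothetical class and account names: in the debit-card model the card is just a pointer, and the only number that matters lives in the bank’s database.

class Bank:
    """Debit-card model: the authoritative balance is server-side, not on the card."""

    def __init__(self):
        self.accounts = {}  # account_id -> balance, the only record that matters

    def issue_card(self, account_id: str, opening_balance: int) -> str:
        self.accounts[account_id] = opening_balance
        return account_id  # the "card" carries a reference, not a secret

    def debit(self, card: str, amount: int) -> bool:
        balance = self.accounts.get(card, 0)
        if amount <= 0 or amount > balance:
            return False  # the bank, not the card, decides whether money moves
        self.accounts[card] = balance - amount
        return True

bank = Bank()
card = bank.issue_card("alice-001", 100)
print(bank.debit(card, 30))   # True
print(bank.debit(card, 500))  # False: rewriting the card can't mint money

A stored-value card inverts this: the balance rides on the card itself, so the card has to defend that number against its own holder.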

Or compare a DRM system with a financial model that doesn’t care about copying. The former is impossible to secure, the latter easy.

While common in digital systems, this kind of security problem isn’t limited to them. Last month, the province of Ontario started investigating insider fraud in their scratch-and-win lottery systems, after the CBC aired allegations that people selling the tickets are able to figure out which tickets are winners, and not sell them. It’s the same problem: the owners of the data on the tickets—the lottery commission—tried to keep that data secret from those who had physical control of the tickets. And they failed.

Compare that with a traditional drawing-at-the-end-of-the-week lottery system. The attack isn’t possible, because there are no secrets on the tickets for an attacker to learn.

Separating data ownership and device ownership doesn’t mean that security is impossible, only much more difficult. You can buy a safe so strong that you can lock your valuables in it and give it to your attacker—with confidence. I’m not so sure you can design a smart card that keeps secrets from its owner, or a DRM system that works on a general-purpose computer—especially because of the problem of class breaks. But in all cases, the best way to solve the security problem is not to have it in the first place.

This essay originally appeared on Wired.com.

EDITED TO ADD (12/1): I completely misunderstood the lottery problem in Ontario. The frauds reported were perpetrated by lottery machine operators at convenience stores and the like stealing end-of-week draw tickets from unsuspecting customers. The customer would hand their ticket over the counter to be scanned to see if it was a winner. The clerk (knowing what the winning numbers actually were) would palm a non-winning ticket into the machine, tell the customer “sorry, better luck next time,” and claim the prize themselves later.

Nice scam, but nothing to do with the point of this essay.

Posted on November 30, 2006 at 6:36 AM | 36 Comments

Notary Fraud

Many countries have the concept of a “notary public.” Their training and authority varies from country to country; in the United States, their primary role is to witness the signature of legal documents. Many important legal documents require notarization in addition to a signature, primarily as a security device.

When I get a document notarized, I present my photo ID to a notary public. Generally, I go to my local bank, where many of the employees are notaries public and I don’t have to pay a fee for the service. I sign the document while the notary watches, and he then signs an attestation to the fact that he saw me sign it. He doesn’t read the document; that’s not his job. And then I send my notarized document to whoever needed it: another bank, the patent office, my mortgage company, whatever.

It’s an eminently hackable system. Sure, you can always present a fake ID—I’ll bet my bank employee has never seen a West Virginia driver’s license, for example—but that takes work. The easiest way to hack the system is through social engineering.

Bring a small pile of documents to be notarized. In the middle of the pile, slip in a document with someone else’s signature. Since he’s busy with his own signing and stamping—and you’re engaging him in slightly distracting conversation—he’s probably not going to notice that he’s notarizing something “someone else” signed. If he does, apologize for your honest mistake and try again elsewhere.

Of course, you’re better off visiting a notary who charges by the document: he’ll be more likely to appreciate the stack of documents you’ve brought to him and less likely to ask questions. And pick a location—unlike a bank—that isn’t filled with security cameras.

Of course, this won’t be enough if the final recipient of the document checks the signature; you’re on your own when it comes to forgery. And in my state the notary has to keep a record of the document he signs; this one won’t be in his records if he’s ever asked. But if you need to switch the deed on a piece of property, change ownership of a bank account, or give yourself power of attorney over someone else, hacking the notary system makes the job a lot easier.

Anyone know how often this kind of thing happens in real life?

Posted on November 29, 2006 at 7:19 AM | 83 Comments

Erasable Ink Scam

Someone goes door-to-door, soliciting contributions to a charity. He prefers a check—it’s safer for you, after all. But he offers his pen for you to sign your check, and the pen is filled with erasable ink. Later, he changes both the payee and the amount, and cashes the check.

This surely isn’t a new scam, but it’s happening in the UK right now. I’ve already written about attackers using different solvents to wash ink off checks, but this one is even more basic—the attacker gives the victim a bad pen to start with.

I thought checks were printed with ink that would erase along with the signature, voiding the check. Why does this sort of attack still work?

Posted on November 28, 2006 at 12:30 PM | 46 Comments

Interesting Bioterrorism Drill

Earlier this month there was a bioterrorism drill in Seattle. Postal carriers delivered dummy packages to “nearly thousands” of people (yes, that’s what the article said; my guess is “nearly a thousand”), testing how the postal system could be used to quickly deliver medications. (Here’s a reaction from a recipient.)

Sure, there are lots of scenarios where this kind of delivery system isn’t good enough, but that’s not the point. In general, I think emergency response is one of the few areas where we need to spend more money. And, in general, I think tests and drills like this are good—how else will we know if the systems will work the way we think they will?

Posted on November 27, 2006 at 1:44 PM | 33 Comments

Fighting Fraudulent Transactions

Last March I wrote that two-factor authentication isn’t going to reduce financial fraud or identity theft, that all it will do is force the criminals to change their tactics:

Unfortunately, the nature of attacks has changed over those two decades. Back then, the threats were all passive: eavesdropping and offline password guessing. Today, the threats are more active: phishing and Trojan horses.

Here are two new active attacks we’re starting to see:

  • Man-in-the-Middle attack. An attacker puts up a fake bank website and entices the user to that website. The user types in his password, and the attacker in turn uses it to access the bank’s real website. Done right, the user will never realize that he isn’t at the bank’s website. Then the attacker either disconnects the user and makes any fraudulent transactions he wants, or passes along the user’s banking transactions while making his own transactions at the same time.
  • Trojan attack. The attacker gets a Trojan installed on the user’s computer. When the user logs into his bank’s website, the attacker piggybacks on that session via the Trojan to make any fraudulent transactions he wants.

See how two-factor authentication doesn’t solve anything? In the first case, the attacker can pass the ever-changing part of the password to the bank along with the never-changing part. And in the second case, the attacker is relying on the user to log in.

The solution is not to better authenticate the person, but to authenticate the transaction. (Think credit cards. No one checks your signature. They really don’t care if you’re you. They maintain security by authenticating the transactions.)
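One minimal way to read “authenticate the transaction” is to bind a confirmation code to the payee and amount, using a key that never touches the possibly compromised browser. This is only a sketch of the idea, with an invented key and code format, not any particular bank’s scheme.

import hashlib
import hmac

# Hypothetical per-customer key shared by the bank and a separate token or phone,
# never stored in the browser the attacker may control.
KEY = b"per-customer-secret"

def transaction_code(payee: str, amount_cents: int) -> str:
    """The code commits to the transaction details, not to the login session."""
    message = f"{payee}|{amount_cents}".encode()
    return hmac.new(KEY, message, hashlib.sha256).hexdigest()[:8]

def bank_accepts(payee: str, amount_cents: int, code: str) -> bool:
    return hmac.compare_digest(code, transaction_code(payee, amount_cents))

# The customer confirms "pay bob $50.00" on the separate device and gets a code for it.
code = transaction_code("bob", 5000)
print(bank_accepts("bob", 5000, code))             # True
print(bank_accepts("mule-account", 900000, code))  # False: the code doesn't transfer

A man-in-the-middle can still relay the legitimate transaction, but he can no longer quietly change the payee or the amount, which is the fraud that matters.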

Of course, no one listens to me. U.S. regulators required banks to implement two-factor authentication by the end of this year. But customers are rebelling, and banks are scrambling to figure out something—anything—else. And, amazingly enough and purely by accident it seems, they’ve stumbled on security solutions that actually work:

Instead, to comply with new banking regulations and stem phishing losses, banks and the vendors who serve them are hurriedly putting together multipronged strategies that they say amount to “strong” authentication. The emerging approach generally consists of somehow recognizing a customer’s computer, asking additional challenge questions for risky behavior and putting in place back-end fraud detection.

[…]

Despite the FFIEC guidance about authentication, the emerging technologies that actually seem to hold the most promise for protecting the funds in consumer banking accounts aren’t authentication systems at all. They’re back-end systems that monitor for suspicious behavior.

Some of these tools are rule-based: If a customer from Nebraska signs on from, say, Romania, the bank can determine that the log-on should always be considered suspect. Others are based on a risk score: That log-on from Romania would add points to a risk score, and when the score reaches a certain threshold, the bank takes action.

Flagged transactions can get bumped to second-factor authentication—usually, a call on the telephone, something the user has. This has long been done manually in the credit card world. Just think about the last phone call you got from your credit card company’s fraud department when you (or someone else) tried to make a large purchase with your credit card in Europe. Some banks, including Washington Mutual, are in the process of automating out-of-band phone calls for risky online transactions.

Exactly. That’s how you do it.
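A back-of-the-envelope sketch of the rule and risk-score approach described in the article (the signals, point values and threshold here are invented; real systems use far richer models):

RULES = {
    "unrecognized_device": 30,
    "foreign_ip": 40,
    "new_payee": 20,
    "unusually_large_amount": 25,
}
THRESHOLD = 60  # at or above this, step up to challenge questions or an out-of-band call

def risk_score(signals: dict) -> int:
    """Add up the points for every rule that fires on this log-on or transaction."""
    return sum(points for name, points in RULES.items() if signals.get(name))

def decide(signals: dict) -> str:
    return "challenge" if risk_score(signals) >= THRESHOLD else "allow"

print(decide({"foreign_ip": True, "unrecognized_device": True}))  # challenge
print(decide({"new_payee": True}))                                # allow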

EDITED TO ADD (12/6): Another example.

Posted on November 27, 2006 at 6:07 AM | 55 Comments

Profile of Schneier

There was a profile of me in the St. Paul Pioneer Press on Sunday.

I’m pretty pleased with the article, but this is—by far—my favorite line, about Applied Cryptography:

“The first seven or eight chapters you can read without knowing any math at all,” Walker said. “The second half of the book you can’t export overseas—it’s classified as munitions.”

It’s not true, of course, but it’s a great line.

There’s also this in the Providence Journal.

Posted on November 24, 2006 at 12:18 PM | 8 Comments

David Kahn Donates his Cryptology Library

According to The New York Times:

The National Cryptologic Museum, at Fort Meade, Md., home of thousands of code-breaking and code-making artifacts dating back to the 1500s, has acquired a major collection of books on codes and ciphers, the museum said. It was donated by David Kahn, a leading American scholar of cryptology and the author of “The Codebreakers: The Story of Secret Writing.” The collection includes “Polygraphiae Libri Sex” (1518) by Johannes Trithemius, the first known printed book on cryptology, along with notes of interviews with modern cryptologists, memos, photocopies and pamphlets. About a dozen items from the collection are currently on display.

Posted on November 24, 2006 at 7:55 AM | 18 Comments

Truth Serums

Interesting article on the history and current search for a drug that compels people to tell the truth:

There is no pharmaceutical compound today whose proven effect is the consistent or predictable enhancement of truth-telling.

[…]

Whether a search for truth serums has occurred in recent decades, and especially since the terrorist attacks of Sept. 11, 2001, is a matter of differing opinion.

Gordon H. Barland was a captain in the U.S. Army Combat Development Command’s intelligence agency in the 1960s. Before leaving active duty in 1967 he was asked to write up “materiel objectives.” He put on the wish list a drug that would aid interrogation.

He later became a research psychologist and spent 14 years working at the Department of Defense Polygraph Institute. While psychopharmacology was not his specialty, trying to catch liars was.

“I would have expected that if there was some sort of truth drug in general use I would have heard rumors of it. I never did,” said Barland, who retired in 2000 and now lives in Utah. He further doubts that the government would again engage in such experiments, given the MK-ULTRA experience.

“It would be very difficult to get a project like that off the ground,” he speculated.

Another psychologist who spent 20 years in military research said he also “never heard anything like that or knew of anyone who was doing that work.” He spoke on the condition of anonymity because interrogation is not his specialty.

Some doubt the practicality of running, or keeping secret, such a research agenda.

“I can’t imagine it,” said Tara O’Toole, director of the Center for Biosecurity of the University of Pittsburgh Medical Center.

“We haven’t been able as a government to create anthrax vaccine. The idea that we could develop a [truth] drug de novo strikes me as outlandish,” she said. “That would be a really major research and development project that would be hard to hide.”

For the record, spokesmen for the Army medical research command, the Defense Advanced Research Projects Agency (DARPA) and the CIA say there is no work underway on truth serums.

Posted on November 22, 2006 at 8:43 AM | 41 Comments

TSA Security Round-Up

Innocent passenger arrested for trying to bring a rubber-band ball onto an airplane.

Woman passes out on plane after her drugs are confiscated.

San Francisco International Airport screeners were warned in advance of undercover test.

And a cartoon.

We have a serious problem in this country. The TSA operates above, and outside, the law. There’s no due process, no judicial review, no appeal.

EDITED TO ADD (11/21): And six Muslim imams removed from a plane by US Airways because…well because they’re Muslim and that scares people. After they were cleared by the authorities, US Airways refused to sell them a ticket. Refuse to be terrorized, people!

Note that US Airways is the culprit here, not the TSA.

EDITED TO ADD (11/22): Frozen spaghetti sauce confiscated:

You think this is silly, and it is, but a week ago my mother caused a small commotion at a checkpoint at Boston-Logan after screeners discovered a large container of homemade tomato sauce in her bag. What with the preponderance of spaghetti grenades and lasagna bombs, we can all be proud of their vigilance. And, as a liquid, tomato sauce is in clear violation of the Transportation Security Administration’s carry-on statutes. But this time, there was a wrinkle: The sauce was frozen.

No longer in its liquid state, the sauce had the guards in a scramble. According to my mother’s account, a supervisor was called over to help assess the situation. He spent several moments stroking his chin. “He struck me as the type of person who spent most of his life traveling with the circus,” says Mom, who never pulls a punch, “and was only vaguely familiar with the concept of refrigeration.” Nonetheless, drawing from his experiences in grade-school chemistry and at the TSA academy, he sized things up. “It’s not a liquid right now,” he observantly noted. “But it will be soon.”

In the end, the TSA did the right thing and let the woman on with her frozen food.

Posted on November 21, 2006 at 12:51 PM | 92 Comments

New Timing Attack Against RSA

A new paper describes a timing attack against RSA, one that bypasses existing security measures against these sorts of attacks. The attack described is optimized for the Pentium 4, and is particularly suited for applications like DRM.

Meta moral: If Alice controls the device, and Bob wants to control secrets inside the device, Bob has a very difficult security problem. These “side-channel” attacks—timing, power, radiation, etc.—allow Alice to mount some very devastating attacks against Bob’s secrets.

I’m going to write more about this for Wired next week, but for now you can read the paper, the Slashdot thread, and the essay I wrote in 1998 about side-channel attacks (also this academic paper).

Posted on November 21, 2006 at 7:24 AM | 37 Comments

RFID Passports Less Reliable than Traditional Passports

From EPIC:

A document obtained by EPIC from the State Department reveals that 2004 government tests found passports with radio frequency identification (RFID) chips that are read 27% to 43% less successfully than the previous Machine Readable Zone technology (two lines of text printed at the bottom of the first page of a passport).

I’ve written about RFID passports before.

Posted on November 20, 2006 at 11:38 AM | 17 Comments

Attacking Bank-Card PINs

Research paper by Omer Berkman and Odelia Moshe Ostrovsky: “The Unbearable Lightness of PIN Cracking”:

Abstract. We describe new attacks on the financial PIN processing API. The attacks apply to switches as well as to verification facilities. The attacks are extremely severe allowing an attacker to expose customer PINs by executing only one or two API calls per exposed PIN. One of the attacks uses only the translate function which is a required function in every switch. The other attacks abuse functions that are used to allow customers to select their PINs online. Some of the attacks can be applied on a switch even though the attacked functions require issuer’s keys which do not exist on a switch. This is particularly disturbing as it was widely believed that functions requiring issuer’s keys cannot do any harm if the respective keys are unavailable.

Basically, the paper describes an inherent flaw with the way ATM PINs are encrypted and transmitted on the international financial networks, making them vulnerable to attack from malicious insiders in a bank.

One of the most disturbing aspects of the attack is that you’re only as secure as the most untrusted bank on the network. Instead of just having to trust your own issuer bank to have good security against insider fraud, you have to trust every other financial institution on the network as well. An insider at another bank can crack your ATM PIN if you withdraw money from that bank’s ATMs.

The authors tell me that they’ve contacted the major credit card companies and banks with this information, and haven’t received much of a response. They believe it is now time to alert the public.

Posted on November 17, 2006 at 7:31 AM | 45 Comments

Revoting

In the world of voting, automatic recount laws are not uncommon. Virginia, where George Allen lost to James Webb in the Senate race by 7,800 out of over 2.3 million votes, or 0.33 percent, is an example. If the margin of victory is 1 percent or less, the loser is allowed to ask for a recount. If the margin is 0.5 percent or less, the government pays for it. If the margin is between 0.5 percent and 1 percent, the loser pays for it.

We have recounts because vote counting is—to put it mildly—sloppy. Americans like their election results fast, before they go to bed at night. So we’re willing to put up with inaccuracies in our tallying procedures, and ignore the fact that the numbers we see on television correlate only roughly with reality.

Traditionally, it didn’t matter very much, because most voting errors were “random errors.”

There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random—equally likely to happen to anyone. In a close race, random errors won’t change the result because votes intended for candidate A that mistakenly go to candidate B happen at the same rate as votes intended for B that mistakenly go to A. (Mathematically, as candidate A’s margin of victory increases, random errors slightly decrease it.)

This is why, historically, recounts in close elections rarely change the result. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. In an extremely close election, a careful recount will yield a different result—but that’s a rarity.

The other kind of voting error is a systemic error. These are errors in the voting process—the voting machines, the procedures—that cause votes intended for A to go to B at a different rate than the reverse.

An example would be a voting machine that mysteriously recorded more votes for A than there were voters. (Sadly, this kind of thing is not uncommon with electronic voting machines.) Another example would be a random error that only occurs in voting equipment used in areas with strong A support. Systemic errors can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A.

Even worse, systemic errors can introduce errors out of proportion to any actual randomness in the vote-counting process. That is, the closeness of an election is not any indication of the presence or absence of systemic errors.
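A quick Monte Carlo sketch of the distinction, with invented error rates: symmetric random errors barely move a 1,000-vote margin, while a small systemic bias against candidate A flips the result outright.

import random

def simulate(n_voters=1_000_000, true_a_share=0.5005, p_flip=0.01, bias=0.0):
    """Tally an election in which each recorded vote can be mis-counted.

    p_flip: chance any vote is flipped (random error, symmetric for A and B).
    bias:   extra chance that an A vote specifically is recorded for B (systemic error).
    """
    a_votes = b_votes = 0
    for _ in range(n_voters):
        intends_a = random.random() < true_a_share
        flip_chance = p_flip + (bias if intends_a else 0.0)
        recorded_a = intends_a != (random.random() < flip_chance)
        if recorded_a:
            a_votes += 1
        else:
            b_votes += 1
    return a_votes - b_votes

random.seed(1)
print("random errors only:", simulate())            # stays near the true +1,000 margin
print("plus systemic bias:", simulate(bias=0.005))  # ~2,500 A votes shift to B; A now loses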

When a candidate has evidence of systemic errors, a recount can fix a wrong result—but only if the recount can catch the error. With electronic voting machines, all too often there simply isn’t the data: there are no votes to recount.

This year’s election in Florida’s 13th Congressional District is such an example. The winner won by a margin of 373 out of 237,861 total votes, but as many as 18,000 votes were not recorded by the electronic voting machines. These votes came from areas where the loser was favored over the winner, and would have likely changed the result.

Or imagine this—as far as we know—hypothetical situation: After the election, someone discovers rogue software in the voting machines that flipped some votes from A to B. Or someone gets caught vote tampering—changing the data on electronic memory cards. The problem is that the original data is lost forever; all we have is the hacked vote.

Faced with problems like this, we can do one of two things. We can certify the result anyway, regretful that people were disenfranchised but knowing that we can’t undo that wrong. Or, we can tell everyone to come back and vote again.

To be sure, the very idea of revoting is rife with problems. Elections are a snapshot in time—election day—and a revote will not reflect that. If Virginia revoted for the Senate this year, the election would not just be for the junior senator from Virginia, but for control of the entire Senate. Similarly, in the 2000 presidential election in Florida, or the 2004 presidential election in Ohio, single-state revotes would have decided the presidency.

And who should be allowed to revote? Should only people in those precincts where there were problems revote, or should the entire election be rerun? In either case, it is certain that more voters will find their way to the polls, possibly changing the demographic and swaying the result in a direction different than that of the initial set of voters. Is that a bad thing, or a good thing?

Should only people who actually voted—records are kept—or who could demonstrate that they were erroneously turned away from the polls be allowed to revote? In this case, the revote will almost certainly have fewer voters, as some of the original voters will be unable to vote a second time. That’s probably a bad thing—but maybe it’s not.

The only analogy we have for this is run-off elections, which are required in some jurisdictions if the winning candidate didn’t get 50 percent of the vote. But it’s easy to know when you need to have a run-off. Who decides, and based on what evidence, that you need to have a revote?

I admit that I don’t have the answers here. They require some serious thinking about elections, and what we’re trying to achieve. But smart election security not only tries to prevent vote hacking—or even systemic electronic voting-machine errors—it prepares for recovery after an election has been hacked. We have to start discussing these issues now, while they’re non-partisan, instead of waiting for the inevitable disputed election and the pre-drawn battle lines its results will dictate.

This essay originally appeared on Wired.com.

Posted on November 16, 2006 at 6:07 AM | 48 Comments

A Classified Wikipedia

A good idea:

The office of U.S. intelligence czar John Negroponte announced Intellipedia, which allows intelligence analysts and other officials to collaboratively add and edit content on the government’s classified Intelink Web much like its more famous namesake on the World Wide Web.

A “top secret” Intellipedia system, currently available to the 16 agencies that make up the U.S. intelligence community, has grown to more than 28,000 pages and 3,600 registered users since its introduction on April 17. Less restrictive versions exist for “secret” and “sensitive but unclassified” material.

Posted on November 15, 2006 at 6:41 AM | 36 Comments

UK Car Rentals to Require Fingerprints

Welcome to a surveillance society:

If you want to hire a car at Stansted Airport, you now need to give a fingerprint.

The scheme, being tested by Essex police and car hire firms, is not voluntary. Every car rental customer must take part.

No fingerprint, no car hire at Stansted airport.

These are stored by the hire firms—and will be handed over to the police if the car is stolen or used for another crime.

This is the most amusing bit:

“It’s not intrusive really. It’s different—and people need to adjust to it. It’s not Big Brother, it’s about protecting people’s identities. The police will never see these thumbprints unless a crime is committed.”

What are the odds that no crime will ever be committed?

Fingerprints are becoming more common in the UK:

But regardless of any ideological arguments, the use of biometric technology—where someone is identified by a physical characteristic—is already entering the mainstream.

Biometric UK passports were introduced this year, using facial mapping information stored on a microchip, and more than a million have already been issued.

A shop in the Bluewater centre in Kent has used a fingerprint checking scheme to tackle credit card fraud. And in Yeovil, Somerset, fingerprinting has been used to cut town-centre violence, with scanners helping pick out troublemakers.

It’s not just about crime. Biometric recognition is also being pitched as more convenient for shoppers.

Pay By Touch allows customers to settle their supermarket bill with a fingerprint rather than a credit card. With three million customers in the United States, this payment system is now being tested in the UK, in three Co-op supermarkets in Oxfordshire.

Posted on November 14, 2006 at 7:37 AM | 59 Comments

The Need for Professional Election Officials

In the U.S., elections are run by an army of hundreds of thousands of volunteers. These are both Republicans and Democrats, and the idea is that the one group watches the other: security by competing interests. But at the top are state-elected or -appointed officials, and many election shenanigans in the past several years have been perpetrated by them.

In yet another New York Times op-ed, Loyola Law School professor Richard Hasen argues for professional, non-partisan election officials:

The United States should join the rest of the world’s advanced democracies and put nonpartisan professionals in charge. We need officials whose ultimate allegiance is to the fairness, integrity and professionalism of the election process, not to helping one party or the other gain political advantage. We don’t need disputes like the current one in Florida being resolved by party hacks.

[…]

To improve the chances that states will choose an independent and competent chief elections officer, states should enact laws making that officer a long-term gubernatorial appointee who takes office only upon confirmation by a 75 percent vote of the legislature—a supermajority requirement that would ensure that a candidate has true bipartisan support. Nonpartisanship in election administration is no dream. It is how Canada and Australia run their national elections.

To me, this is easier said than done. Where are these hundreds of thousands of disinterested election officials going to come from? And how do we ensure that they’re disinterested and fair, and not just partisans in disguise? I actually like security by competing interests.

But I do like his idea of a supermajority-confirmed chief elections officer for each state. And at least he’s starting the debate about better election procedures in the U.S.

Posted on November 13, 2006 at 2:57 PM | 58 Comments

The Inherent Inaccuracy of Voting

In a New York Times op-ed, New York University sociology professor Dalton Conley points out that vote counting is inherently inaccurate:

The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get—you guessed it—four different results. That’s the nature of large numbers—there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.

But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.

He’s right, but it’s more complicated than that.

There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random. Votes intended for A that mistakenly go to B are just as likely as votes intended for B that mistakenly go to A. This is why, traditionally, recounts in close elections are unlikely to change things. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. But in a very close election, a careful recount will yield a more accurate—but almost certainly not perfectly accurate—result.

Systemic errors are more important, because they will cause votes intended for A to go to B at a different rate than the reverse. Those can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A. These errors can either be a particular problem in the system—a badly designed ballot, for example—or a random error that only occurs in precincts where A has more supporters than B.

Here’s where the problems of electronic voting machines become critical: they’re more likely to be systemic problems. Vote flipping, for example, seems to generally affect one candidate more than another. Even individual machine failures are going to affect supporters of one candidate more than another, depending on where the particular machine is. And if there are no paper ballots to fall back on, no recount can undo these problems.

Conley proposes to nullify any election where the margin of victory is less than 1%, and have everyone vote again. I agree, but I think his margin is too large. In the Virginia Senate race, Allen was right not to demand a recount. Even though his 7,800-vote loss was only 0.33%, in the absence of systemic flaws it is unlikely that a recount would change things. I think an automatic revote if the margin of victory is less than 0.1% makes more sense.

Conley again:

Yes, it costs more to run an election twice, but keep in mind that many places already use runoffs when the leading candidate fails to cross a particular threshold. If we are willing to go through all that trouble, why not do the same for certainty in an election that teeters on a razor’s edge? One counter-argument is that such a plan merely shifts the realm of debate and uncertainty to a new threshold—the 99 percent threshold. However, candidates who lose by the margin of error have a lot less rhetorical power to argue for redress than those for whom an actual majority is only a few votes away.

It may make us existentially uncomfortable to admit that random chance and sampling error play a role in our governance decisions. But in reality, by requiring a margin of victory greater than one, seemingly arbitrary vote, we would build in a buffer to democracy, one that offers us a more bedrock sense of security that the “winner” really did win.

This is a good idea, but it doesn’t address the systemic problems with voting. If there are systemic problems, there should be another election day limited to only those precincts that had the problem and only those people who can prove they voted—or tried to vote and failed—during the first election day. (Although I could be persuaded that another re-voting protocol would make more sense.)

But most importantly, we need better voting machines and better voting procedures.

EDITED TO ADD (11/17): I mistakenly mischaracterized Conley’s position. He proposes a revote when the margin of victory falls within the margin of error, not when it is less than 1 percent of the vote:

In terms of a two-candidate race in which each has attained around 50 percent of the vote, a 1 percent margin of error would be represented by 1.29 divided by the square root of the number of votes

That’s a really good system, although it will be impossible to explain to the general public.
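Rough numbers, taking the quote’s rule of thumb at face value and using the approximate vote totals from the posts above (the Virginia total is rounded from “over 2.3 million”): the Virginia margin sits well outside the margin of error, while the Florida 13th margin sits well inside it.

from math import sqrt

def margin_of_error_votes(total_votes: int) -> float:
    """Conley's rule of thumb from the quote: a margin of error of 1.29/sqrt(n),
    expressed here in votes rather than as a share of the vote."""
    return 1.29 / sqrt(total_votes) * total_votes  # equals 1.29 * sqrt(n)

races = [
    ("Virginia Senate (approx. total)", 2_350_000, 7_800),
    ("Florida 13th", 237_861, 373),
]
for name, total, margin in races:
    moe = margin_of_error_votes(total)
    verdict = "inside" if margin < moe else "outside"
    print(f"{name}: margin of error ~{moe:,.0f} votes; actual margin {margin:,} ({verdict})")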

Posted on November 13, 2006 at 12:03 PM | 56 Comments

More on Electronic Voting Machines

Seems like every election I write something about voting machines. I wrote this and this in 2004, this and this in 2003, and this way back in 2000.

This year I wrote an essay for Forbes.com. It’s really nothing that I, and others, haven’t already said previously.

Florida 13 is turning out to be a bigger problem than I described:

The Democrat, Christine Jennings, lost to her Republican opponent, Vern Buchanan, by just 373 votes out of a total 237,861 cast—one of the closest House races in the nation. More than 18,000 voters in Sarasota County, or 13 percent of those who went to the polls Tuesday, did not seem to vote in the Congressional race when they cast ballots, a discrepancy that Kathy Dent, the county elections supervisor, said she could not explain.

In comparison, only 2 percent of voters in one neighboring county within the same House district and 5 percent in another skipped the Congressional race, according to The Herald-Tribune of Sarasota. And many of those who did not seem to cast a vote in the House race did vote in more obscure races, like for the hospital board.

And the absentee ballots collected for the same race show only a 2.5% difference in the number of voters that voted for candidates in other races but not for Congress.

There’ll be a recount, and with that close a margin it’s pretty random who will eventually win. But because so many votes were not recorded—and I don’t see how anyone who has any understanding of statistics can look at this data and not conclude that votes were not recorded—we’ll never know who should really win this district.

In Pennsylvania, the Republican State Committee is asking the Secretary of State to impound voting machines because of potential voting errors:

Pennsylvania GOP officials claimed there were reports that some machines were changing Republican votes to Democratic votes. They asked the state to investigate and said they were not ruling out a legal challenge.

According to Santorum’s camp, people are voting for Santorum, but the vote either registered as invalid or as a vote for Casey.

RedState.com describes some of the problems:

RedState is getting widespread reports of an electoral nightmare shaping up in Pennsylvania with certain types of electronic voting machines.

In some counties, machines are crashing. In other counties, we have enough reports to treat as credible the fact that some Rendell votes are being tabulated by the machines for Swann and vice versa. The same is happening with Santorum and Casey. Reports have been filed with the Pennsylvania Secretary of State, but nothing has happened.

I’m happy to see a Republican at the receiving end of the problems.

Actually, that’s not true. I’m not happy to see anyone at the receiving end of voting problems. But I am sick and tired of this being perceived as a partisan issue, and I hope some high-profile Republican losses that might be attributed to electronic voting-machine malfunctions (or even fraud) will change that perception. This is a serious problem that affects everyone, and it is in everyone’s interest to fix it.

FL-13 was the big voting-machine disaster, but there were other electronic voting-machine problems reported:

The types of machine problems reported to EFF volunteers were wide-ranging in both size and scope. Polls opened late for machine-related reasons in polling places throughout the country, including Ohio, Florida, Georgia, Virginia, Utah, Indiana, Illinois, Tennessee, and California. In Broward County, Florida, voting machines failed to start up at one polling place, leaving some citizens unable to cast votes for hours. EFF and the Election Protection Coalition sought to keep the polling place open late to accommodate voters frustrated by the delays, but the officials refused. In Utah County, Utah, more than 100 precincts opened one to two hours late on Tuesday due to problems with machines. Both county and state election officials refused to keep polling stations open longer to make up for the lost time, and a judge also turned down a voter’s plea for extended hours brought by EFF.

And there’s this election for mayor, where one of the candidates received zero votes—even though that candidate is sure he voted for himself.

ComputerWorld is also reporting problems across the country, as is The New York Times. Avi Rubin, whose writings on electronic voting security are always worth reading, writes about a problem he witnessed in Maryland:

The voter had made his selections and pressed the “cast ballot” button on the machine. The machine spit out his smartcard, as it is supposed to do, but his summary screen remained, and it did not appear that his vote had been cast. So, he pushed the smartcard back in, and it came out saying that he had already voted. But, he was still in the screen that showed he was in the process of voting. The voter then pressed the “cast ballot” again, and an error message appeared on the screen that said that he needs to call a judge for assistance. The voter was very patient, but was clearly taking this very seriously, as one would expect. After discussing the details about what happened with him very carefully, I believed that there was a glitch with his machine, and that it was in an unexpected state after it spit out the smartcard. The question we had to figure out was whether or not his vote had been recorded. The machine said that there had been 145 votes cast. So, I suggested that we count the voter authority cards in the envelope attached to the machine. Since we were grouping them into bundles of 25 throughout the day, that was pretty easy, and we found that there were 146 authority cards. So, this meant that either his vote had not been counted, or that the count was off for some other reason. Considering that the count on that machine had been perfect all day, I thought that the most likely thing is that this glitch had caused his vote not to count. Unfortunately, because while this was going on, all the other voters had left, other election judges had taken down and put away the e-poll books, and we had no way to encode a smartcard for him. We were left with the possibility of having the voter vote on a provisional ballot, which is what he did. He was gracious, and understood our predicament.

The thing is, that I don’t know for sure now if this voter’s vote will be counted once or twice (or not at all if the board of election rejects his provisional ballot). In fact, the purpose of counting the voter authority cards is to check the counts on the machines hourly. What we had done was to use the number of cards to conclude something about whether a particular voter had voted, and that is not information that these cards can provide. Unfortunately, I believe there are an unimaginable number of problems that could crop up with these machines where we would not know for sure if a voter’s vote had been recorded, and the machines provide no way to check on such questions. If we had paper ballots that were counted by optical scanners, this kind of situation could never occur.

How many hundreds of these stories do we need before we conclude that electronic voting machines aren’t accurate enough for elections?

On the plus side, the FL-13 problems have convinced some previous naysayers in that district:

Supervisor of Elections Kathy Dent now says she will comply with voters who want a new voting system—one that produces a paper trail…. Her announcement Friday marks a reversal for the elections supervisor, who had promoted and adamantly defended the touch-screen system the county purchased for $4.5 million in 2001.

One of the dumber comments I hear about electronic voting goes something like this: “If we can secure multi-million-dollar financial transactions, we should be able to secure voting.” Most financial security comes through audit: names are attached to every transaction, and transactions can be unwound if there are problems. Voting requires an anonymous ballot, which means that most of our anti-fraud systems from the financial world don’t apply to voting. (I first explained this back in 2001.)

In Minnesota, we use paper ballots counted by optical scanners, and we have some of the most well-run elections in the country. To anyone reading this who needs to buy new election equipment, this is what to buy.

On the other hand, I am increasingly of the opinion that an all mail-in election—like Oregon has—is the right answer. Yes, there are authentication issues with mail-in ballots, but these are issues we have to solve anyway, as long as we allow absentee ballots. And yes, there are vote-buying issues, but almost everyone considers them to be secondary. The combined benefits of 1) a paper ballot, 2) no worries about long lines due to malfunctioning or insufficient machines, 3) increased voter turnout, and 4) a dampening of the last-minute campaign frenzy make Oregon’s election process very appealing.

Posted on November 13, 2006 at 9:29 AM | 54 Comments

Voting Technology and Security

Last week in Florida’s 13th Congressional district, the victory margin was only 386 votes out of 153,000. There’ll be a mandatory lawyered-up recount, but it won’t include the almost 18,000 votes that seem to have disappeared. The electronic voting machines didn’t include them in their final tallies, and there’s no backup to use for the recount. The district will pick a winner to send to Washington, but it won’t be because they are sure the majority voted for him. Maybe the majority did, and maybe it didn’t. There’s no way to know.

Electronic voting machines represent a grave threat to fair and accurate elections, a threat that every American—Republican, Democrat or independent—should be concerned about. Because they’re computer-based, the deliberate or accidental actions of a few can swing an entire election. The solution: Paper ballots, which can be verified by voters and recounted if necessary.

To understand the security of electronic voting machines, you first have to consider election security in general. The goal of any voting system is to capture the intent of each voter and collect them all into a final tally. In practice, this occurs through a series of transfer steps. When I voted last week, I transferred my intent onto a paper ballot, which was then transferred to a tabulation machine via an optical scan reader; at the end of the night, the individual machine tallies were transferred by election officials to a central facility and combined into a single result I saw on television.

All election problems are errors introduced at one of these steps, whether it’s voter disenfranchisement, confusing ballots, broken machines or ballot stuffing. Even in normal operations, each step can introduce errors. Voting accuracy, therefore, is a matter of 1) minimizing the number of steps, and 2) increasing the reliability of each step.
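To see why both factors matter, here is a back-of-the-envelope calculation with invented per-step error rates, assuming errors at the transfer steps are independent:

def intact_fraction(per_step_accuracy: float, steps: int) -> float:
    """Fraction of votes that survive every transfer step unaltered."""
    return per_step_accuracy ** steps

for steps in (3, 5, 8):
    for accuracy in (0.999, 0.9999):
        print(f"{steps} steps at {accuracy:.2%} per step -> "
              f"{intact_fraction(accuracy, steps):.4%} of votes recorded correctly")

At 99.9 percent per step, eight steps already lose nearly one vote in a hundred, far more than the margin in a race like Florida’s 13th.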

Much of our election security is based on “security by competing interests.” Every step, with the exception of voters completing their single anonymous ballots, is witnessed by someone from each major party; this ensures that any partisan shenanigans—or even honest mistakes—will be caught by the other observers. This system isn’t perfect, but it’s worked pretty well for a couple hundred years.

Electronic voting is like an iceberg; the real threats are below the waterline where you can’t see them. Paperless electronic voting machines bypass that security process, allowing a small group of people—or even a single hacker—to affect an election. The problem is software—programs that are hidden from view and cannot be verified by a team of Republican and Democrat election judges, programs that can drastically change the final tallies. And because all that’s left at the end of the day are those electronic tallies, there’s no way to verify the results or to perform a recount. Recounts are important.

This isn’t theoretical. In the U.S., there have been hundreds of documented cases of electronic voting machines distorting the vote to the detriment of candidates from both political parties: machines losing votes, machines swapping the votes for candidates, machines registering more votes for a candidate than there were voters, machines not registering votes at all. I would like to believe these are all mistakes and not deliberate fraud, but the truth is that we can’t tell the difference. And these are just the problems we’ve caught; it’s almost certain that many more problems have escaped detection because no one was paying attention.

This is both new and terrifying. For the most part, and throughout most of history, election fraud on a massive scale has been hard; it requires very public actions or a highly corrupt government—or both. But electronic voting is different: a lone hacker can affect an election. He can do his work secretly before the machines are shipped to the polling stations. He can affect an entire area’s voting machines. And he can cover his tracks completely, writing code that deletes itself after the election.

And that assumes well-designed voting machines. The actual machines being sold by companies like Diebold, Sequoia Voting Systems and Election Systems & Software are much worse. The software is badly designed. Machines are “protected” by hotel minibar keys. Vote tallies are stored in easily changeable files. Machines can be infected with viruses. Some voting software runs on Microsoft Windows, with all the bugs and crashes and security vulnerabilities that introduces. The list of inadequate security practices goes on and on.

The voting machine companies counter that such attacks are impossible because the machines are never left unattended (they’re not), the memory cards that hold the votes are carefully controlled (they’re not), and everything is supervised (it isn’t). Yes, they’re lying, but they’re also missing the point.

We shouldn’t—and don’t—have to accept voting machines that might someday be secure only if a long list of operational procedures are followed precisely. We need voting machines that are secure regardless of how they’re programmed, handled and used, and that can be trusted even if they’re sold by a partisan company, or a company with possible ties to Venezuela.

Sounds like an impossible task, but in reality, the solution is surprisingly easy. The trick is to use electronic voting machines as ballot-generating machines. Vote by whatever automatic touch-screen system you want: a machine that keeps no records or tallies of how people voted, but only generates a paper ballot. The voter can check it for accuracy, then process it with an optical-scan machine. The second machine provides the quick initial tally, while the paper ballot provides for recounts when necessary. And absentee and backup ballots can be counted the same way.

You can even do away with the electronic vote-generation machines entirely and hand-mark your ballots like we do in Minnesota. Or run a 100% mail-in election like Oregon does. Again, paper ballots are the key.

Paper? Yes, paper. A stack of paper is harder to tamper with than a number in a computer’s memory. Voters can see their vote on paper, regardless of what goes on inside the computer. And most important, everyone understands paper. We get into hassles over our cellphone bills and credit card mischarges, but when was the last time you had a problem with a $20 bill? We know how to count paper. Banks count it all the time. Both Canada and the U.K. count paper ballots with no problems, as do the Swiss. We can do it, too. In today’s world of computer crashes, worms and hackers, a low-tech solution is the most secure.

Secure voting machines are just one component of a fair and honest election, but they’re an increasingly important part. They’re where a dedicated attacker can most effectively commit election fraud (and we know that changing the results can be worth millions). But we shouldn’t forget other voter suppression tactics: telling people the wrong polling place or election date, taking registered voters off the voting rolls, having too few machines at polling places, or making it onerous for people to register. (Oddly enough, ineligible people voting isn’t a problem in the U.S., despite political rhetoric to the contrary; every study shows their numbers to be so small as to be insignificant. And photo ID requirements actually cause more problems than they solve.)

Voting is as much a perception issue as it is a technological issue. It’s not enough for the result to be mathematically accurate; every citizen must also be confident that it is correct. Around the world, people protest or riot after an election not when their candidate loses, but when they think their candidate lost unfairly. It is vital for a democracy that an election both accurately determine the winner and adequately convince the loser. In the U.S., we’re losing the perception battle.

The current crop of electronic voting machines fails on both counts. The results from Florida’s 13th Congressional district are neither accurate nor convincing. As a democracy, we deserve better. We need to refuse to vote on electronic voting machines without a voter-verifiable paper ballot, and to continue to pressure our legislatures to implement voting technology that works.

This essay originally appeared on Forbes.com.

Avi Rubin wrote a good essay on voting for Forbes as well.

Posted on November 13, 2006 at 5:47 AM | 60 Comments

Friday Squid Blogging: Squid Egg Sac Baffles Researchers

From Norway:

“It was 50-70 centimeters (19.5-27.5 inches) in diameter and looked like a huge beach ball. It was transparent but had a kind of thick, red cord in the middle. It was a bit science-fiction,” Svensen told newspaper Bergens Tidende’s web site.

The Svensens contacted associate professor Torleiv Brattegard at the University of Bergen, and other experts were notified to try and solve the mystery.

[…]

Colleague Arne Fjellheim, who works with Stavanger Museum, tipped off Brattegard that the organism resembled a photograph from New Zealand that he had seen. A zoology professor and squid expert in New Zealand corroborated by email – the peculiar gelatinous ball was a large squid egg sack.

“The gelatinous lump contains several fertilized eggs. This is not at all a common sight, because squids are some of the most inaccessible animals known,” Fjellheim told iBergen.no.

Fjellheim told Aftenposten.no that squid are found in such numbers along the Norwegian coast that they are a commercial catch, and used mostly as bait. Despite this, extremely little is known about their biology.

Posted on November 10, 2006 at 2:14 PM6 Comments

FIDIS on RFID Passports

The “Budapest Declaration on Machine Readable Travel Documents“:

Abstract:

By failing to implement an appropriate security architecture, European governments have effectively forced citizens to adopt new international Machine Readable Travel Documents which dramatically decrease their security and privacy and increases risk of identity theft. Simply put, the current implementation of the European passport utilises technologies and standards that are poorly conceived for its purpose. In this declaration, researchers on Identity and Identity Management (supported by a unanimous move in the September 2006 Budapest meeting of the FIDIS “Future of Identity in the Information Society” Network of Excellence[1]) summarise findings from an analysis of MRTDs and recommend corrective measures which need to be adopted by stakeholders in governments and industry to ameliorate outstanding issues.

EDITED TO ADD (11/9): Slashdot thread.

Posted on November 9, 2006 at 12:26 PM31 Comments

Keyboards and Covert Channels

Interesting research.

Abstract:

This paper introduces JitterBugs, a class of inline interception mechanisms that covertly transmit data by perturbing the timing of input events likely to affect externally observable network traffic. JitterBugs positioned at input devices deep within the trusted environment (e.g., hidden in cables or connectors) can leak sensitive data without compromising the host or its software. In particular, we show a practical Keyboard JitterBug that solves the data exfiltration problem for keystroke loggers by leaking captured passwords through small variations in the precise times at which keyboard events are delivered to the host. Whenever an interactive communication application (such as SSH, Telnet, instant messaging, etc) is running, a receiver monitoring the host’s network traffic can recover the leaked data, even when the session or link is encrypted. Our experiments suggest that simple Keyboard JitterBugs can be a practical technique for capturing and exfiltrating typed secrets under conventional OSes and interactive network applications, even when the receiver is many hops away on the Internet.
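To make the timing channel concrete, here is a toy simulation of the idea. It is not the paper's implementation; the window size, the encoding, and all the names are made up for illustration. But it shows how a bug that only nudges when keystrokes are delivered can leak bits to anyone who can observe packet timing:

    # Toy simulation of a keystroke timing covert channel, in the spirit of a
    # Keyboard JitterBug. Everything here is illustrative, not the paper's design.

    WINDOW_MS = 20  # quantization window; the receiver only needs coarse timing

    def jitter_for_bit(bit):
        """Extra delay (ms) the bug adds to a keystroke to encode one bit."""
        return WINDOW_MS // 2 if bit else 0

    def decode_bit(gap_ms):
        """Receiver side: recover a bit from an observed inter-packet gap."""
        return 1 if (gap_ms % WINDOW_MS) >= WINDOW_MS // 2 else 0

    secret_bits = [1, 0, 1, 1, 0, 0, 1, 0]                     # e.g. part of a captured password
    typing_gaps_ms = [180, 240, 160, 300, 220, 200, 260, 190]  # natural keystroke gaps

    observed = []
    for bit, gap in zip(secret_bits, typing_gaps_ms):
        quantized = (gap // WINDOW_MS) * WINDOW_MS          # bug snaps the gap to a window edge
        observed.append(quantized + jitter_for_bit(bit))    # then adds the encoding delay

    recovered = [decode_bit(gap) for gap in observed]
    assert recovered == secret_bits

A real receiver would also have to cope with network-induced jitter, which this toy ignores; the point is only that millisecond-scale delays, invisible to the typist, are enough to carry data.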

Posted on November 8, 2006 at 1:26 PM

Why Management Doesn’t Get IT Security

At the request of the Department of Homeland Security, a group called The Conference Board completed a study about senior management and their perceptions of IT security. The results aren’t very surprising.

Most C-level executives view security as an operational issue—kind of like facilities management—and not as a strategic issue. As such, they don’t have direct responsibility for security.

Such attitudes about security have caused many organizations to distance their security teams from other parts of the business as well. “Security directors appear to be politically isolated within their companies,” Cavanagh says. Security pros often do not talk to business managers or other departments, he notes, so they don’t have many allies in getting their message across to upper management.

What to do? The report has some suggestions, the same ones you can hear at any security conference anywhere.

Security managers need to reach out more aggressively to other areas of the business to help them make their case, Cavanagh says. “Risk managers are among the best potential allies,” he observes, because they are usually tasked with measuring the financial impact of various threats and correlating them with the likelihood that those threats will happen.

“That can be tricky, because most risk managers come from a financial background, and they don’t speak the same language as the security people,” Cavanagh notes. “It’s also difficult because security presents some unusual risk scenarios. There are some franchise events that could destroy the company’s business, but have a very low likelihood of occurrence, so it’s very hard to gauge the risk.”

Getting attention (and budget) from top executives such as risk managers, CFOs, and CEOs, means creating metrics that help measure the value of the security effort, Cavanagh says. In the study, The Conference Board found that the cost of business interruption was the most helpful metric, cited by almost 64 percent of respondents. That metric was followed by vulnerability assessments (60 percent), benchmarks against industry standards (49 percent), the value of the facilities (43.5 percent), and the level of insurance premiums (39 percent).

Face time is another important way to gain attention in mahogany row, the report says. In industries where there are critical infrastructure issues, such as financial services, about 66 percent of top executives meet at least once a month with their security director, according to the study. That figure dropped to around 44 percent in industries without critical infrastructure issues.

I guess it’s more confirmation of the conventional wisdom.

The full report is available, but it costs $125 if you’re something called a Conference Board associate, and $495 if you’re not. But my guess is that you’ve already heard everything that’s in it.

Posted on November 8, 2006 at 6:15 AM37 Comments

Skimming RFID Credit Cards

It’s easy to skim personal information off an RFID credit card.

From The New York Times:

They could skim and store the information from a card with a device the size of a couple of paperback books, which they cobbled together from readily available computer and radio components for $150. They say they could probably make another one even smaller and cheaper: about the size of a pack of gum for less than $50. And because the cards can be read even through a wallet or an item of clothing, the security of the information, the researchers say, is startlingly weak. ‘Would you be comfortable wearing your name, your credit card number and your card expiration date on your T-shirt?’ Mr. Heydt-Benjamin, a graduate student, asked.

And from The Register:

The attack uses off-the-shelf radio and card reader equipment that could cost as little as $150. Although the attack fails to yield verification codes normally needed to make online purchases, it would still be potentially possible for crooks to use the data to order goods and services from online stores that don’t request this information.

Despite assurances by the issuing companies that data contained on RFID-based credit cards would be encrypted, the researchers found that the majority of cards they tested did not use encryption or other data protection technology.

And from the RFID Journal:

I don’t think the exposing of potential vulnerabilities of these cards is a huge black eye for the credit-card industry or for the RFID industry. Millions of people won’t suddenly have their credit-card numbers exposed to thieves the way they do when someone hacks a bank’s database or an employee loses a laptop with the card numbers on it. But it is likely that these vulnerabilities will need to be addressed as the technology becomes more mature and criminals start figuring out ways to abuse it.

Posted on November 7, 2006 at 12:49 PM30 Comments

Seagate Encrypted Drive

Seagate has announced a product called DriveTrust, which provides hardware-based encryption on the drive itself. The technology is proprietary, but they use standard algorithms: AES and triple-DES, RSA, and SHA-1. Details on the key management are sketchy, but the system requires a pre-boot password and/or combination of biometrics to access the disk. And Seagate is working on some sort of enterprise-wide key management system to make it easier to deploy the technology company-wide.
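For readers wondering what a pre-boot password buys you on a self-encrypting drive, here is a generic sketch of a common approach: derive a key-encryption key from the password and use it to unwrap the media key that actually encrypts the platters. To be clear, this is not Seagate's design (those details aren't public), and the XOR "wrap" below is only a stand-in for a real key-wrap algorithm such as AES Key Wrap:

    import hashlib, os, secrets

    # Generic illustration of pre-boot unlock for a self-encrypting drive.
    # NOT Seagate's actual key management; all parameters are made up.

    def kek_from_password(password, salt):
        # Password-derived key-encryption key (SHA-1 here only because the
        # article mentions SHA-1 among the drive's algorithms).
        return hashlib.pbkdf2_hmac("sha1", password.encode(), salt, 100_000, dklen=32)

    def xor_wrap(key, kek):
        return bytes(a ^ b for a, b in zip(key, kek))  # placeholder for AES Key Wrap

    # Provisioning: the drive generates a media key and stores only a wrapped copy.
    salt = os.urandom(16)
    media_key = secrets.token_bytes(32)
    stored_blob = xor_wrap(media_key, kek_from_password("correct horse", salt))

    # Pre-boot unlock: the right password recovers the media key; a wrong one doesn't.
    assert xor_wrap(stored_blob, kek_from_password("correct horse", salt)) == media_key
    assert xor_wrap(stored_blob, kek_from_password("wrong password", salt)) != media_key

The hard engineering is in the enterprise-wide key management Seagate says it is working on: escrowing and recovering that wrapped blob across thousands of laptops without weakening the scheme.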

The first target market is laptop computers. No computer manufacturer has announced support for DriveTrust yet.

More details in these articles.

Posted on November 7, 2006 at 7:04 AM40 Comments

The Zotob Worm and the DHS

On August 18 of last year, the Zotob worm badly infected computers at the Department of Homeland Security, particularly the 1,300 workstations running the US-VISIT application at border crossings. Wired News filed a Freedom of Information Act request for details, which was denied.

After we sued, CBP released three internal documents, totaling five pages, and a copy of Microsoft’s security bulletin on the plug-and-play vulnerability. Though heavily redacted, the documents were enough to establish that Zotob had infiltrated US-VISIT after CBP made the strategic decision to leave the workstations unpatched. Virtually every other detail was blacked out. In the ensuing court proceedings, CBP claimed the redactions were necessary to protect the security of its computers, and acknowledged it had an additional 12 documents, totaling hundreds of pages, which it withheld entirely on the same grounds.

U.S. District Judge Susan Illston reviewed all the documents in chambers, and ordered an additional four documents to be released last month. The court also directed DHS to reveal much of what it had previously hidden beneath thick black pen strokes in the original five pages.

“Although defendant repeatedly asserts that this information would render the CBP computer system vulnerable, defendant has not articulated how this general information would do so,” Illston wrote in her ruling (emphasis is Illston’s).

The newly released details say nothing about the technical workings of the computer systems, and only point to the DHS’s incompetence in handling the incident.

Details are in the Wired News article.

Posted on November 6, 2006 at 12:11 PM21 Comments

Classical Crypto with Lasers

I simply don’t have the physics background to evaluate this:

Scheuer and Yariv’s concept for key distribution involves establishing a laser oscillation between the two users, who each decide how to reflect the light at their end by choosing one of three mirrors that peak at different frequencies.

Before a key is exchanged, the users reset the system by using the first mirror. Then they both randomly select a bit (either 1 or 0) and choose the corresponding mirror out of the other two, causing the lasing properties (wavelength and intensity) to shift in accordance with the mirror they chose. Because each user knows his or her own bit, they can determine the value of each other’s bits; but an eavesdropper, who doesn’t know either bit, could only figure out the correlation between bits, but not the bits themselves. Similar to quantum key distribution systems, the bit exchange is successful in about 50% of the cases.

“For a nice analogy, consider a very large ‘justice scale’ where Alice is at one side and Bob is at the other,” said Scheuer. “Both Alice and Bob have a set of two weights (say one pound representing ‘0’ and two pounds representing ‘1’). To exchange a bit, Alice and Bob randomly select a bit and put the corresponding weight on the scales. If they pick different bits, the scales will tilt toward the heavy weight, thus indicating who picked ‘1’ and who picked ‘0.’ If however, they choose the same bit, the scales will remain balanced, regardless whether they (both) picked ‘0’ or ‘1.’ These bits can be used for the key because Eve, who in this analogy can only observe the tilt of the scales, cannot deduce the exchanged bit (in the previous case, Eve could deduce the bits). Of course, there are some differences between the laser concept and the scales analogy: in the laser system, the successful bit exchanges occur when Alice and Bob pick opposite bits, and not identical; also, there is the third state needed for resetting the laser, etc. But the underlying concept is the same: the system uses some symmetry properties to ‘calculate’ the correlation between the bits selected in each side, and it reveals only the correlation. For Alice and Bob, this is enough—but not for Eve.”
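Here is a toy simulation of the scales analogy, just to show the information flow. The optics are abstracted away entirely; only the balanced rounds are kept, and those are exactly the rounds in which the observer learns nothing:

    import secrets

    # Toy model of the "justice scale" analogy above. Not the laser system
    # itself; it only illustrates which rounds yield usable key bits.

    def exchange_round():
        a = secrets.randbits(1)   # Alice's weight choice (her secret bit)
        b = secrets.randbits(1)   # Bob's weight choice
        tilt = a - b              # all Eve ever sees: -1, 0, or +1
        return a, b, tilt

    key_a, key_b = [], []
    for _ in range(1000):
        a, b, tilt = exchange_round()
        if tilt == 0:
            # Balanced: Eve can't tell 0/0 from 1/1, but Alice and Bob each
            # know their own bit, so they share it. Keep it for the key.
            key_a.append(a)
            key_b.append(b)
        # Unbalanced rounds reveal both bits to Eve and are discarded.

    assert key_a == key_b
    print(f"kept {len(key_a)} of 1000 rounds (about half, as expected)")

As the quote notes, the real system keeps the rounds where Alice and Bob pick opposite bits rather than identical ones, but the information-hiding argument is the same.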

But this quote gives me pause:

Although users can’t easily detect an eavesdropper here, the system increases the difficulty of eavesdropping “almost arbitrarily,” making detecting eavesdroppers almost unnecessary.

EDITED TO ADD (11/6): Here’s the paper.

Posted on November 6, 2006 at 7:49 AM41 Comments

New U.S. Customs Database on Trucks and Travellers

It’s yet another massive government surveillance program:

US Customs and Border Protection issued a notice in the Federal Register yesterday which detailed the agency’s massive database that keeps risk assessments on every traveler entering or leaving the country. Citizens who are concerned that their information is inaccurate are all but out of luck: the system “may not be accessed under the Privacy Act for the purpose of contesting the content of the record.”

The system in question is the Automated Targeting System, which is associated with the previously-existing Treasury Enforcement Communications System. TECS was built to screen people and assets that moved in and out of the US, and its database contains more than one billion records that are accessible by more than 30,000 users at 1,800 sites around the country. Customs has adapted parts of the TECS system to its own use and now plans to screen all passengers, inbound and outbound cargo, and ships.

The system creates a risk assessment for each person or item in the database. The assessment is generated from information gleaned from federal and commercial databases, provided by people themselves as they cross the border, and the Passenger Name Record information recorded by airlines. This risk assessment will be maintained for up to 40 years and can be pulled up by agents at a moment’s notice in order to evaluate potential threats against the US.

If you leave the country, the government will suddenly know a lot about you. The Passenger Name Record alone contains names, addresses, telephone numbers, itineraries, frequent-flier information, e-mail addresses—even the name of your travel agent. And this information can be shared with plenty of people:

  • Federal, state, local, tribal, or foreign governments
  • A court, magistrate, or administrative tribunal
  • Third parties during the course of a law enforcement investigation
  • Congressional office in response to an inquiry
  • Contractors, grantees, experts, consultants, students, and others performing or working on a contract, service, or grant
  • Any organization or person who might be a target of terrorist activity or conspiracy
  • The United States Department of Justice
  • The National Archives and Records Administration
  • Federal or foreign government intelligence or counterterrorism agencies
  • Agencies or people when it appears that the security or confidentiality of their information has been compromised.

That’s a lot of people who could be looking at your information and your government-designed risk assessment. The one person who won’t be looking at that information is you. The entire system is exempt from inspection and correction under provision 552a (j)(2) and (k)(2) of US Code Title 5, which allows such exemptions when the data in question involves law enforcement or intelligence information.

This means you can’t review your data for accuracy, and you can’t correct any errors.

But the system can be used to give you a risk assessment score, which presumably will affect how you’re treated when you return to the U.S.

I’ve already explained why data mining does not find terrorists or terrorist plots. So have actual math professors. And we’ve seen this kind of “risk assessment score” idea and the problems it causes with Secure Flight.

This needs some mainstream press attention.

EDITED TO ADD (11/4): More commentary here, here, and here.

EDITED TO ADD (11/5): It’s buried in the back pages, but at least The Washington Post wrote about it.

Posted on November 4, 2006 at 9:19 AM26 Comments

Bulletproof Textbooks

You can’t make this stuff up:

A retired veteran and candidate for Oklahoma State School Superintendent says he wants to make schools safer by creating bulletproof textbooks.

Bill Crozier says the books could give students and teachers a fighting chance if there’s a shooting at their school.

Can you just imagine the movie-plot scenarios going through his head? Does he really think this is a smart way to spend security dollars?

I just shake my head in wonder….

Posted on November 3, 2006 at 12:11 PM50 Comments

Perceived Risk vs. Actual Risk

I’ve written repeatedly about the difference between perceived and actual risk, and how it explains many seemingly perverse security trade-offs. Here’s a Los Angeles Times op-ed that does the same. The author is Daniel Gilbert, psychology professor at Harvard. (I just recently finished his book Stumbling on Happiness, which is not a self-help book but instead about how the brain works. Strongly recommended.)

The op-ed is about the public’s reaction to the risks of global warming and terrorism, but the points he makes are much more general. He gives four reasons why some risks are perceived to be more or less serious than they actually are:

  1. We over-react to intentional actions, and under-react to accidents, abstract events, and natural phenomena.

    That’s why we worry more about anthrax (with an annual death toll of roughly zero) than influenza (with an annual death toll of a quarter-million to a half-million people). Influenza is a natural accident, anthrax is an intentional action, and the smallest action captures our attention in a way that the largest accident doesn’t. If two airplanes had been hit by lightning and crashed into a New York skyscraper, few of us would be able to name the date on which it happened.

  2. We over-react to things that offend our morals.

    When people feel insulted or disgusted, they generally do something about it, such as whacking each other over the head, or voting. Moral emotions are the brain’s call to action.

    He doesn’t say it, but it’s reasonable to assume that we under-react to things that don’t.

  3. We over-react to immediate threats and under-react to long-term threats.

    The brain is a beautifully engineered get-out-of-the-way machine that constantly scans the environment for things out of whose way it should right now get. That’s what brains did for several hundred million years—and then, just a few million years ago, the mammalian brain learned a new trick: to predict the timing and location of dangers before they actually happened.

    Our ability to duck that which is not yet coming is one of the brain’s most stunning innovations, and we wouldn’t have dental floss or 401(k) plans without it. But this innovation is in the early stages of development. The application that allows us to respond to visible baseballs is ancient and reliable, but the add-on utility that allows us to respond to threats that loom in an unseen future is still in beta testing.

  4. We under-react to changes that occur slowly and over time.

    The human brain is exquisitely sensitive to changes in light, sound, temperature, pressure, size, weight and just about everything else. But if the rate of change is slow enough, the change will go undetected. If the low hum of a refrigerator were to increase in pitch over the course of several weeks, the appliance could be singing soprano by the end of the month and no one would be the wiser.

It’s interesting to compare this to what I wrote in Beyond Fear (pages 26-27) about perceived vs. actual risk:

  • People exaggerate spectacular but rare risks and downplay common risks. They worry more about earthquakes than they do about slipping on the bathroom floor, even though the latter kills far more people than the former. Similarly, terrorism causes far more anxiety than common street crime, even though the latter claims many more lives. Many people believe that their children are at risk of being given poisoned candy by strangers at Halloween, even though there has been no documented case of this ever happening.
  • People have trouble estimating risks for anything not exactly like their normal situation. Americans worry more about the risk of mugging in a foreign city, no matter how much safer it might be than where they live back home. Europeans routinely perceive the U.S. as being full of guns. Men regularly underestimate how risky a situation might be for an unaccompanied woman. The risks of computer crime are generally believed to be greater than they are, because computers are relatively new and the risks are unfamiliar. Middle-class Americans can be particularly naïve and complacent; their lives are incredibly secure most of the time, so their instincts about the risks of many situations have been dulled.
  • Personified risks are perceived to be greater than anonymous risks. Joseph Stalin said, “A single death is a tragedy, a million deaths is a statistic.” He was right; large numbers have a way of blending into each other. The final death toll from 9/11 was less than half of the initial estimates, but that didn’t make people feel less at risk. People gloss over statistics of automobile deaths, but when the press writes page after page about nine people trapped in a mine—complete with human-interest stories about their lives and families—suddenly everyone starts paying attention to the dangers with which miners have contended for centuries. Osama bin Laden represents the face of Al Qaeda, and has served as the personification of the terrorist threat. Even if he were dead, it would serve the interests of some politicians to keep him “alive” for his effect on public opinion.
  • People underestimate risks they willingly take and overestimate risks in situations they can’t control. When people voluntarily take a risk, they tend to underestimate it. When they have no choice but to take the risk, they tend to overestimate it. Terrorists are scary because they attack arbitrarily, and from nowhere. Commercial airplanes are perceived as riskier than automobiles, because the controls are in someone else’s hands—even though they’re much safer per passenger mile. Similarly, people overestimate even more those risks that they can’t control but think they, or someone, should. People worry about airplane crashes not because we can’t stop them, but because we think as a society we should be capable of stopping them (even if that is not really the case). While we can’t really prevent criminals like the two snipers who terrorized the Washington, DC, area in the fall of 2002 from killing, most people think we should be able to.
  • Last, people overestimate risks that are being talked about and remain an object of public scrutiny. News, by definition, is about anomalies. Endless numbers of automobile crashes hardly make news like one airplane crash does. The West Nile virus outbreak in 2002 killed very few people, but it worried many more because it was in the news day after day. AIDS kills about 3 million people per year worldwide—about three times as many people each day as died in the terrorist attacks of 9/11. If a lunatic goes back to the office after being fired and kills his boss and two coworkers, it’s national news for days. If the same lunatic shoots his ex-wife and two kids instead, it’s local news…maybe not even the lead story.

Posted on November 3, 2006 at 7:18 AM101 Comments

How to Steal an Election

Good article. (Here is the full article in pdf.)

EDITED TO ADD (11/2): Here are some additional resources. “E-Voting: State by State,” a guide to e-voting vendors, and a review of HBO’s “Hacking Democracy” documentary. Also, a debate from The Wall Street Journal on electronic voting, and an Ars Technica article on this year’s problems with electronic voting.

EDITED TO ADD (11/2): Another review of the documentary.

EDITED TO ADD (11/3): And two items from The Brad Blog.

Posted on November 2, 2006 at 2:26 PM35 Comments

Insider Identity Theft

CEO arrested for stealing the identities of his employees:

Terrence D. Chalk, 44, of White Plains was arraigned in federal court in White Plains, along with his nephew, Damon T. Chalk, 35, after an FBI investigation turned up the curious lending and spending habits. The pair are charged with submitting some $1 million worth of credit applications using the names and personal information—names, addresses and social security numbers—of some of Compulinx’s 50 employees. According to federal prosecutors, the employees’ information was used without their knowledge; the Chalks falsely represented to the lending institutions, in writing and in face-to-face meetings, that the employees were actually officers of the company.

Posted on November 2, 2006 at 12:15 PM25 Comments

Forge Your Own Boarding Pass

Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It resulted in calls for his arrest—the most visible by Rep. Edward Markey (D-Massachusetts)—who has since recanted. And it’s gotten him more publicity than he ever dreamed of.

All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.

This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it as well. I wrote about it in the August 2003 issue of Crypto-Gram. It’s possible I was the first person to publish it, but I certainly wasn’t the first person to think of it.

It’s kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.

You can also use a fake boarding pass to fly on someone else’s ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real ticket in someone else’s name to board the plane.

This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else’s name, perhaps using a stolen credit card, and uses his own photo ID and a fake ticket to get through airport security. Since the ticket is in an innocent’s name, it won’t raise a flag on the no-fly list.

You can also use a fake boarding pass instead of your real one if you have the “SSSS” mark and want to avoid secondary screening, or if you don’t have a ticket but want to get into the gate area.

Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.

Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian’s website did was automate the process with a single airline’s boarding passes.

Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don’t think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?

As I wrote in 2005: “The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record—the third leg of the triangle—it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.”

The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass that was checked against the photo ID also matched the data in the computer record. Close the authentication triangle and the vulnerability disappears.
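Concretely, "closing the triangle" just means one extra lookup at the checkpoint, something like the following sketch (the data structures and the reservation lookup are hypothetical stand-ins for the airlines' systems):

    from dataclasses import dataclass

    @dataclass
    class BoardingPass:
        name: str
        flight: str
        date: str

    # Hypothetical stand-in for a query against the airline's reservation system.
    RESERVATIONS = {("Alice Traveler", "NW123", "2006-11-02")}

    def reservation_exists(bp):
        return (bp.name, bp.flight, bp.date) in RESERVATIONS

    def checkpoint_ok(bp, id_name):
        if bp.name != id_name:          # leg 1: boarding pass vs. photo ID (done today)
            return False
        return reservation_exists(bp)   # leg 3: boarding pass vs. computer record (missing today)

    legit = BoardingPass("Alice Traveler", "NW123", "2006-11-02")
    forged = BoardingPass("Bob Attacker", "NW123", "2006-11-02")

    assert checkpoint_ok(legit, "Alice Traveler")
    # The forged pass matches the attacker's own ID but not any reservation,
    # so the two-boarding-pass trick no longer gets him through screening.
    assert not checkpoint_ok(forged, "Bob Attacker")

Whether that check is worth building is a different question, as the next paragraph argues.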

But before we start spending time and money and Transportation Security Administration agents, let’s be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren’t so easy to circumvent. Identification is not a useful security measure here.

Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.

Unlike every other airplane security measure—including reinforcing cockpit doors, which could have prevented 9/11—the airlines didn’t resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: “Round trip, New York to Los Angeles, 11/21-30, male, $100.” Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.

So business is why we have the photo ID requirement in the first place, and business is why it’s so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let’s focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where’s the TSA’s response to all this?

The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we’ve got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.

This essay—my 30th for Wired.com—appeared today.

EDITED TO ADD (11/4): More news and commentary.

EDITED TO ADD (1/10): Great essay by Matt Blaze.

Posted on November 2, 2006 at 6:21 AM55 Comments

Online ID Theft Hyped

Does this surprise anyone?

While keylogging software, phishing e-mails that impersonate official bank messages and hackers who break into customer databases may dominate headlines, more than 90% of identity fraud starts off conventionally, with stolen bank statements, misplaced passwords or other similar means, according to Javelin Strategy & Research.

“An insignificant portion of identity fraud actually starts with the Internet,” said James Van Dyke, president of Javelin, who pointed out that many firms still rely on simple security questions such as one’s mother’s maiden name. “The Internet always grabs the headlines, but it is individuals who are close to the victims, such as family and friends, that are doing most of it,” he said.

[…]

While fraudsters often use the Internet to access existing bank, phone or brokerage accounts or to create new ones using stolen details, in only one out of 10 of those incidents did the actual theft of the personal data take place through e-mail or the Web or somewhere else on the Internet, according to Javelin. “No matter how you slice the data, it’s really hard to arrive at a scenario where the Internet could be the source of the majority of identity fraud,” Van Dyke said.

All told, 4% of Americans were affected by identity fraud in 2005, a statistic that is slowly shrinking, though the value of each fraud incident is growing, Van Dyke said. The total losses attributed to identity fraud have held steady the past three years.

Posted on November 1, 2006 at 2:07 PM26 Comments

DHS Privacy Committee Recommends Against RFID Cards

The Data Privacy and Integrity Advisory Committee of the Department of Homeland Security recommended against putting RFID chips in identity cards. It’s only a draft report, but what it says is so controversial that a vote on the final report is being delayed.

Executive Summary:

Automatic identification technologies like RFID have valuable uses, especially in connection with tracking things for purposes such as inventory management. RFID is particularly useful where it can be embedded within an object, such as a shipping container.

There appear to be specific, narrowly defined situations in which RFID is appropriate for human identification. Miners or firefighters might be appropriately identified using RFID because speed of identification is at a premium in dangerous situations and the need to verify the connection between a card and bearer is low.

But for other applications related to human beings, RFID appears to offer little benefit when compared to the consequences it brings for privacy and data integrity. Instead, it increases risks to personal privacy and security, with no commensurate benefit for performance or national security. Most difficult and troubling is the situation in which RFID is ostensibly used for tracking objects (medicine containers, for example), but can be in fact used for monitoring human behavior. These types of uses are still being explored and remain difficult to predict.

For these reasons, we recommend that RFID be disfavored for identifying and tracking human beings. When DHS does choose to use RFID to identify and track individuals, we recommend the implementation of the specific security and privacy safeguards described herein.

Posted on November 1, 2006 at 7:29 AM39 Comments
