Entries Tagged "anonymity"

TrackMeNot

In the wake of AOL’s publication of search data, and the New York Times article demonstrating how easy it is to figure out who did the searching, we have TrackMeNot:

TrackMeNot runs in Firefox as a low-priority background process that periodically issues randomized search-queries to popular search engines, e.g., AOL, Yahoo!, Google, and MSN. It hides users’ actual search trails in a cloud of indistinguishable ‘ghost’ queries, making it difficult, if not impossible, to aggregate such data into accurate or identifying user profiles. TrackMeNot integrates into the Firefox ‘Tools’ menu and includes a variety of user-configurable options.

Let’s count the ways this doesn’t work.

One, it doesn’t hide your searches. If the government wants to know who’s been searching on “al Qaeda recruitment centers,” it won’t matter that you’ve made ten thousand other searches as well—you’ll be targeted.

Two, it’s too easy to spot. There are only 1,673 search terms in the program’s dictionary. Here, as a random example, are the program’s “G” words:

gag, gagged, gagging, gags, gas, gaseous, gases, gassed, gasses, gassing, gen, generate, generated, generates, generating, gens, gig, gigs, gillion, gillions, glass, glasses, glitch, glitched, glitches, glitching, glob, globed, globing, globs, glue, glues, gnarlier, gnarliest, gnarly, gobble, gobbled, gobbles, gobbling, golden, goldener, goldenest, gonk, gonked, gonking, gonks, gonzo, gopher, gophers, gorp, gorps, gotcha, gotchas, gribble, gribbles, grind, grinding, grinds, grok, grokked, grokking, groks, ground, grovel, groveled, groveling, grovelled, grovelling, grovels, grue, grues, grunge, grunges, gun, gunned, gunning, guns, guru, gurus

The program’s authors claim that this list is temporary, and that there will eventually be a TrackMeNot server with an ever-changing word list. Of course, that list can be monitored by any analysis program—as could any queries to that server.

In any case, every twelve seconds—exactly—the program picks a random pair of words and sends it to either AOL, Yahoo, MSN, or Google. My guess is that your searches contain more than two words, you don’t send them out in precise twelve-second intervals, and you favor one search engine over the others.
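To see just how easy to spot this is, here is a sketch of the kind of test a profiler could run against a query log. The log format and the thresholds are my own assumptions for illustration, not anything from an actual profiling system:

```python
# Sketch: flag fixed-interval, fixed-length queries as machine-generated.
# The log format and thresholds here are illustrative assumptions.
from statistics import pstdev

def looks_automated(log):
    """log: list of (timestamp_seconds, query_string), sorted by time."""
    if len(log) < 5:
        return False
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(log, log[1:])]
    # Real users vary their timing; near-zero spread in the gaps
    # between queries is a strong machine signature.
    regular_timing = pstdev(gaps) < 1.0
    # Every query being exactly two words is another giveaway.
    uniform_length = all(len(q.split()) == 2 for _, q in log)
    return regular_timing and uniform_length

ghost_log = [(t * 12, "gonzo gopher") for t in range(10)]  # every 12 seconds
print(looks_automated(ghost_log))  # True
```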

Three, some of the program’s searches are worse than yours. The dictionary includes:

HIV, atomic, bomb, bible, bibles, bombing, bombs, boxes, choke, choked, chokes, choking, chain, crackers, empire, evil, erotics, erotices, fingers, knobs, kicking, harier, hamster, hairs, legal, letterbomb, letterbombs, mailbomb, mailbombing, mailbombs, rapes, raping, rape, raper, rapist, virgin, warez, warezes, whack, whacked, whacker, whacking, whackers, whacks, pistols

Does anyone really think that searches on “erotic rape,” “mailbombing bibles,” and “choking virgins” will make their legitimate searches less noteworthy?

And four, it wastes a whole lot of bandwidth. A query every twelve seconds translates into 2,400 queries a day, assuming an eight-hour workday. A typical Google response is about 25K, so we’re talking 60 megabytes of additional traffic daily. Imagine if everyone in the company used it.
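The arithmetic is easy to verify, using the figures above of one query per twelve seconds and roughly 25 KB per response:

```python
# Sanity check of the bandwidth estimate above.
queries_per_day = (8 * 60 * 60) // 12      # one query every 12 s, 8-hour day
mb_per_day = queries_per_day * 25 / 1000   # ~25 KB per response
print(queries_per_day, "queries,", mb_per_day, "MB per day")  # 2400 queries, 60.0 MB per day
```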

I suppose this kind of thing would stop someone who has a paper printout of your searches and is looking through them manually, but it’s not going to hamper computer analysis very much, or anyone who isn’t lazy: it wouldn’t be hard for a profiling program to simply ignore these searches.

As one commentator put it:

Imagine a cop pulls you over for speeding. As he approaches, you realize you left your wallet at home. Without your driver’s license, you could be in a lot of trouble. When he approaches, you roll down your window and shout: “Hello Officer! I don’t have insurance on this vehicle! This car is stolen! I have weed in my glovebox! I don’t have my driver’s license! I just hit an old lady minutes ago! I’ve been running stop lights all morning! I have a dead body in my trunk! This car doesn’t pass the emissions tests! I’m not allowed to drive because I am under house arrest! My gas tank runs on the blood of children!” You stop to catch a breath, confident you have supplied so much information to the cop that you can’t possibly be caught for not having your license now.

Yes, data mining is a signal-to-noise problem. But artificial noise like this isn’t going to help much. If I were going to improve on this idea, I would make the plugin watch the user’s search patterns. I would make it send queries only to the search engines the user does, only when he is actually online doing things. I would randomize the timing. (There’s a comment to that effect in the code, so presumably this will be fixed in a later version of the program.) And I would make it monitor the web pages the user looks at, and send queries based on keywords it finds on those pages. And I would make it send queries in the form the user tends to use, whether it be single words, pairs of words, or whatever.
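Sketched in code, the improved plugin might look something like this. Every name, parameter, and distribution below is my own assumption for illustration, not anything from TrackMeNot’s actual source:

```python
# Sketch of the improvements described above: decoys shaped by the user's
# own engines, timing, query lengths, and browsing keywords. All names and
# parameters are illustrative assumptions.
import random
import time

class ObservedProfile:
    """Statistics gathered by watching the user's real behavior."""
    def __init__(self):
        self.engines = []        # search engines the user actually uses
        self.query_lengths = []  # word counts of the user's real queries
        self.keywords = []       # words harvested from pages the user reads

    def record_search(self, engine, query):
        self.engines.append(engine)
        self.query_lengths.append(len(query.split()))

    def record_page(self, words):
        self.keywords.extend(words)

def ghost_query(profile):
    """Build one decoy shaped like the user's real traffic."""
    engine = random.choice(profile.engines)
    n = random.choice(profile.query_lengths)
    words = random.sample(profile.keywords, k=min(n, len(profile.keywords)))
    return engine, " ".join(words)

def run_decoys(profile, user_is_online):
    while user_is_online():
        # Randomized gaps instead of a fixed 12-second beat.
        time.sleep(random.expovariate(1 / 300.0))  # mean of one per 5 minutes
        engine, query = ghost_query(profile)
        print(f"decoy -> {engine}: {query}")
```

The key design point is that the decoy generator draws every dimension of a query from the user’s own observed distributions, so there is no fixed signature to filter on.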

But honestly, I don’t know that I would use it even then. The way serious people protect their web-searching privacy is through anonymization. Use Tor for serious web anonymization, or Black Box Search for simple anonymous searching (here’s a Greasemonkey extension that does that automatically). And set your browser to delete search engine cookies regularly.

Posted on August 23, 2006 at 6:53 AM

Flying Without ID

According to the TSA, in the 9th Circuit case of John Gilmore, you are allowed to fly without showing ID—you’ll just have to submit yourself to secondary screening.

The Identity Project wants you to try it out. If you have time, try to fly without showing ID.

Mr. Gilmore recommends that every traveler who is concerned with privacy or anonymity should opt to become a “selectee” rather than show an ID. We are very likely to lose the right to travel anonymously, if citizens do not exercise it. TSA and the airlines will attempt to make it inconvenient for you, by wasting your time and hassling you, but they can’t do much in that regard without compromising their avowed missions, which are to transport paying passengers, and to keep weapons off planes. If you never served in the armed services, this is a much easier way to spend some time keeping your society free. (Bring a copy of the court decision with you and point out some of the numerous places it says you can fly as a selectee rather than show ID. Paper tickets are also helpful, though not required.)

I’m curious what the results are.

EDITED TO ADD (11/25): Here’s someone who tried, and failed.

Posted on March 10, 2006 at 7:20 AM

Anonym.OS

This seems like a really important development: an anonymous operating system:

Titled Anonym.OS, the system is a type of disc called a “live CD”—meaning it’s a complete solution for using a computer without touching the hard drive. Developers say Anonym.OS is likely the first live CD based on the security-heavy OpenBSD operating system.

OpenBSD running in secure mode is relatively rare among desktop users. So to keep from standing out, Anonym.OS leaves a deceptive network fingerprint. In everything from the way it actively reports itself to other computers, to matters of technical minutia such as TCP packet length, the system is designed to look like Windows XP SP1. “We considered part of what makes a system anonymous is looking like what is most popular, so you blend in with the crowd,” explains project developer Adam Bregenzer of Super Light Industry.

Booting the CD, you are presented with a text based wizard-style list of questions to answer, one at a time, with defaults that will work for most users. Within a few moments, a fairly naive user can be up and running and connected to an open Wi-Fi point, if one is available.

Once you’re running, you have a broad range of anonymity-protecting applications at your disposal.

Get yours here.

See also this Slashdot thread.

Posted on January 20, 2006 at 7:39 AM

Anonymity and Accountability

Last week I blogged Kevin Kelly’s rant against anonymity. Today I wrote about it for Wired.com:

And that’s precisely where Kelly makes his mistake. The problem isn’t anonymity; it’s accountability. If someone isn’t accountable, then knowing his name doesn’t help. If you have someone who is completely anonymous, yet just as completely accountable, then—heck, just call him Fred.

History is filled with bandits and pirates who amass reputations without anyone knowing their real names.

EBay’s feedback system doesn’t work because there’s a traceable identity behind that anonymous nickname. EBay’s feedback system works because each anonymous nickname comes with a record of previous transactions attached, and if someone cheats someone else then everybody knows it.

Similarly, Wikipedia’s veracity problems are not a result of anonymous authors adding fabrications to entries. They’re an inherent property of an information system with distributed accountability. People think of Wikipedia as an encyclopedia, but it’s not. We all trust Britannica entries to be correct because we know the reputation of that company, and by extension its editors and writers. On the other hand, we all should know that Wikipedia will contain a small amount of false information because no particular person is accountable for accuracy—and that would be true even if you could mouse over each sentence and see the name of the person who wrote it.

Please read the whole thing before you comment.

Posted on January 12, 2006 at 4:36 AM

Anonymous Internet Annoying Is Illegal in the U.S.

How bizarre:

Last Thursday, President Bush signed into law a prohibition on posting annoying Web messages or sending annoying e-mail messages without disclosing your true identity.

[…]

Buried deep in the new law is Sec. 113, an innocuously titled bit called “Preventing Cyberstalking.” It rewrites existing telephone harassment law to prohibit anyone from using the Internet “without disclosing his identity and with intent to annoy.”

What does this mean for the comment section of this blog? Or any blog? Or Usenet?

More importantly, what does it mean for our society when obviously stupid laws like this get passed, and we have to rely on the police being nice enough to not enforce them?

EDITED TO ADD (1/9): Some commenters to BoingBoing clarify the legal issues. This is from an anonymous attorney:

The anonymous harassment provision is the old telephone-annoyance statute that has been on the books for decades. It was updated in the widely (and in many respects deservedly) ridiculed Communications Decency Act to include new technologies, and the cases make clear its applicability to Internet communications. See, e.g., ACLU v. Reno, 929 F. Supp. 824, 829 n.5 (E.D. Pa. 1996) (text here), aff’d, 521 U.S. 844 (1997). Unlike the indecency provisions of the CDA, this scope update was not invalidated in the courts and remains fully effective.

In other words, the latest amendment, which supposedly adds Internet communications devices to the scope of the law, is meaningless surplusage.

Posted on January 9, 2006 at 2:38 PM

Kevin Kelly on Anonymity

He’s against it:

More anonymity is good: that’s a dangerous idea.

Fancy algorithms and cool technology make true anonymity in mediated environments more possible today than ever before. At the same time this techno-combo makes true anonymity in physical life much harder. For every step that masks us, we move two steps toward totally transparent unmasking. We have caller ID, but also caller ID Block, and then caller ID-only filters. Coming up: biometric monitoring and little place to hide. A world where everything about a person can be found and archived is a world with no privacy, and therefore many technologists are eager to maintain the option of easy anonymity as a refuge for the private.

However in every system that I have seen where anonymity becomes common, the system fails. The recent taint in the honor of Wikipedia stems from the extreme ease with which anonymous declarations can be put into a very visible public record. Communities infected with anonymity will either collapse, or shift the anonymous to pseudo-anonymous, as in eBay, where you have a traceable identity behind an invented nickname. Or voting, where you can authenticate an identity without tagging it to a vote.

Anonymity is like a rare earth metal. These elements are a necessary ingredient in keeping a cell alive, but the amount needed is a mere hard-to-measure trace. In larger doses these heavy metals are some of the most toxic substances known to life. They kill. Anonymity is the same. As a trace element in vanishingly small doses, it’s good for the system by enabling the occasional whistleblower, or persecuted fringe. But if anonymity is present in any significant quantity, it will poison the system.

There’s a dangerous idea circulating that the option of anonymity should always be at hand, and that it is a noble antidote to technologies of control. This is like pumping up the levels of heavy metals in your body to make it stronger.

Privacy can only be won by trust, and trust requires persistent identity, if only pseudo-anonymously. In the end, the more trust, the better. Like all toxins, anonymity should be kept as close to zero as possible.

I don’t even know where to begin. Anonymity is essential for free and fair elections. It’s essential for democracy and, I think, liberty. It’s essential to privacy in a large society, and so it is essential to protect the rights of the minority against the tyranny of the majority…and to protect individual self-respect.

Kelly makes the very valid point that reputation makes society work. But that doesn’t mean that 1) reputation can’t be anonymous, or 2) anonymity isn’t also essential for society to work.

I’m writing an essay on this for Wired News. Comments and arguments, pro or con, are appreciated.

Posted on January 5, 2006 at 1:20 PM

RFID Car Keys

RFID car keys (subscription required) are becoming more popular. Since these devices broadcast a unique serial number, it’s only a matter of time before a significant percentage of the population can be tracked with them.

Lexus has made what it calls the “SmartAccess” keyless-entry system standard on its new IS sedans, designed to compete with German cars like the BMW 3 series or the Audi A4, as well as rivals such as the Infiniti G35 or the U.S.-made Cadillac CTS. BMW offers what it calls “keyless go” as an option on the new 3 series, and on its higher-priced 5, 6 and 7 series sedans.

Volkswagen AG’s Audi brand offers keyless-start systems on its A6 and A8 sedans, but not yet on U.S.-bound A4s. Cadillac’s new STS sedan, big brother to the CTS, also offers a pushbutton start.

Starter buttons have a racy flair—European sports cars and race cars used them in the past. The proliferation of starter buttons in luxury sedans has its roots in theft protection. An increasing number of cars now come with theft-deterrent systems that rely on a chip in the key fob that broadcasts a code to a receiver in the car. If the codes don’t match, the car won’t start.

Cryptography can be used to make these devices anonymous, but there’s no business reason for automobile manufacturers to field such a system. Once again, the economic barriers to security are far greater than the technical ones.
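For the record, the cryptography is straightforward. Here is a minimal sketch, assuming a per-car shared secret and HMAC (my choices for illustration, not any manufacturer’s actual protocol): the fob answers a random challenge instead of broadcasting a serial number, so every exchange looks different to an eavesdropper.

```python
# Sketch: anonymous challenge-response unlock. No static identifier is ever
# transmitted; key size and MAC choice are illustrative assumptions.
import hashlib
import hmac
import os

CAR_KEY = os.urandom(32)  # secret shared by the car and its fobs at pairing

def car_challenge():
    # Fresh random nonce per attempt; reveals nothing about car or fob.
    return os.urandom(16)

def fob_response(shared_key, nonce):
    # The fob proves knowledge of the key by MACing the nonce. Because the
    # nonce changes every time, responses can't be linked across sightings.
    return hmac.new(shared_key, nonce, hashlib.sha256).digest()

def car_verify(shared_key, nonce, response):
    expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

nonce = car_challenge()
print(car_verify(CAR_KEY, nonce, fob_response(CAR_KEY, nonce)))  # True
```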

Posted on October 5, 2005 at 8:13 AM

Sandia on Terrorism Security

I have very mixed feelings about this report:

Anticipating attacks from terrorists, and hardening potential targets against them, is a wearying and expensive business that could be made simpler through a broader view of the opponents’ origins, fears, and ultimate objectives, according to studies by the Advanced Concepts Group (ACG) of Sandia National Laboratories.

“Right now, there are way too many targets considered and way too many ways to attack them,” says ACG’s Curtis Johnson. “Any thinking person can spin up enemies, threats, and locations it takes billions [of dollars] to fix.”

That makes a lot of sense, and this way of thinking is sorely needed. As is this kind of thing:

“The game really starts when the bad guys are getting together to plan something, not when they show up at your door,” says Johnson. “Can you ping them to get them to reveal their hand, or get them to turn against themselves?”

Better yet is to bring the battle to the countries from which terrorists spring, and beat insurgencies before they have a foothold.

“We need to help win over the as-yet-undecided populace to the view it is their government that is legitimate and not the insurgents,” says the ACG’s David Kitterman. Data from Middle East polls suggest, perhaps surprisingly, that most respondents are favorable to Western values. Turbulent times, however, put that liking under stress.

A nation’s people and media can be won over, says Yonas, through global initiatives that deal with local problems such as the need for clean water and affordable energy.

Says Johnson, “U.S. security already is integrated with global security. We’re always helping victims of disaster like tsunami victims, or victims of oppressive governments. Perhaps our ideas on national security should be redefined to reflect the needs of these people.”

Remember right after 9/11, when that kind of thinking would get you vilified?

But the article also talks about security mechanisms that won’t work, cost too much in freedoms and liberties, and have dangerous side effects.

People in airports voluntarily might carry smart cards if the cards could be sweetened to perform additional tasks like helping the bearer get through security, or to the right gate at the right time.

Mall shoppers might be handed a sensing card that also would help locate a particular store, a special sale, or find the closest parking space through cheap distributed-sensor networks.

“Suppose every PDA had a sensor on it,” suggests ACG researcher Laura McNamara. “We would achieve decentralized surveillance.” These sensors could report by radio frequency to a central computer any signal from contraband biological, chemical, or nuclear material.

Universal surveillance to improve our security? Seems unlikely.

But the most chilling quote of all:

“The goal here is to abolish anonymity, the terrorist’s friend,” says Sandia researcher Peter Chew. “We’re not talking about abolishing privacy—that’s another issue. We’re only considering the effect of setting up an electronic situation where all the people in a mall, subway, or airport ‘know’ each other—via, say, Bluetooth—as they would have, personally, in a small town. This would help malls and communities become bad targets.”

Anonymity is now the terrorist’s friend? I like to think of it as democracy’s friend.

Security against terrorism is important, but it’s equally important to remember that terrorism isn’t the only threat. Criminals, police, and governments are also threats, and security needs to be viewed as a trade-off with respect to all the threats. When you analyze terrorism in isolation, you end up with all sorts of weird answers.

Posted on April 5, 2005 at 9:26 AM

Anonymity and the Internet

From Slate:

Anonymice on Anonymity

Wendy.Seltzer.org (“Musings of a techie lawyer”) deflates the New York Times‘ breathless Saturday (March 19) piece about the menace posed by anonymous access to Wi-Fi networks (“Growth of Wireless Internet Opens New Path for Thieves” by Seth Schiesel). Wi-Fi pirates around the nation are using unsecured hotspots to issue anonymous death threats, download child pornography, and commit credit card fraud, Schiesel writes. Then he plays the terrorist card.

But unsecured wireless networks are nonetheless being looked at by the authorities as a potential tool for furtive activities of many sorts, including terrorism. Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Never mind the pod of qualifiers swimming through those two sentences—”being looked at”; “potential tool”; “not aware of specific cases”; “might”—look at the sourcing. “Two federal law enforcement officials said on condition of anonymity. …” Seltzer points out the deep-dish irony of the Times citing anonymous sources about the imagined threats posed by anonymous Wi-Fi networks. Anonymous sources of unsubstantiated information, good. Anonymous Wi-Fi networks, bad.

This is the post from wendy.seltzer.org:

The New York Times runs an article in which law enforcement officials lament, somewhat breathlessly, that open wifi connections can be used, anonymously, by wrongdoers. The piece omits any mention of the benefits of these open wireless connections—no-hassle connectivity anywhere the “default” community network is operating, and anonymous browsing and publication for those doing good, too.

Without a hint of irony, however:

Two federal law enforcement officials said on condition of anonymity that while they were not aware of specific cases, they believed that sophisticated terrorists might also be starting to exploit unsecured Wi-Fi connections.

Yes, even law enforcement needs anonymity sometimes.

Open WiFi networks are a good thing. Yes, they allow bad guys to do bad things. But so do automobiles, telephones, and just about everything else you can think of. I like it when I find an open wireless network that I can use. I like it when my friends keep their home wireless network open so I can use it.

Scare stories like the New York Times one don’t help any.

Posted on March 25, 2005 at 12:49 PM

The Problem with Electronic Voting Machines

In the aftermath of the U.S.’s 2004 election, electronic voting machines are again in the news. Computerized machines lost votes, subtracted votes instead of adding them, and doubled votes. Because many of these machines have no paper audit trails, a large number of votes will never be counted. And while it is unlikely that deliberate voting-machine fraud changed the result of the presidential election, the Internet is buzzing with rumors and allegations of fraud in a number of different jurisdictions and races. It is still too early to tell if any of these problems affected any individual elections. Over the next several weeks we’ll see whether any of the information crystallizes into something significant.

The U.S. has been here before. After 2000, voting machine problems made international headlines. The government appropriated money to fix the problems nationwide. Unfortunately, electronic voting machines—although presented as the solution—have largely made the problem worse. This doesn’t mean that these machines should be abandoned, but they need to be designed to increase both their accuracy and people’s trust in their accuracy. This is difficult, but not impossible.

Before I can discuss electronic voting machines, I need to explain why voting is so difficult. Basically, a voting system has four required characteristics:

  1. Accuracy. The goal of any voting system is to establish the intent of each individual voter, and translate those intents into a final tally. To the extent that a voting system fails to do this, it is undesirable. This characteristic also includes security: It should be impossible to change someone else’s vote, stuff ballots, destroy votes, or otherwise affect the accuracy of the final tally.

  2. Anonymity. Secret ballots are fundamental to democracy, and voting systems must be designed to facilitate voter anonymity.

  3. Scalability. Voting systems need to be able to handle very large elections. One hundred million people vote for president in the United States. About 372 million people voted in India’s June elections, and over 115 million in Brazil’s October elections. The complexity of an election is another issue. Unlike many countries where the national election is a single vote for a person or a party, a United States voter is faced with dozens of individual elections: national, local, and everything in between.

  4. Speed. Voting systems should produce results quickly. This is particularly important in the United States, where people expect to learn the results of the day’s election before bedtime. It’s less important in other countries, where people don’t mind waiting days—or even weeks—before the winner is announced.

Through the centuries, different technologies have done their best. Stones and pot shards dropped in Greek vases gave way to paper ballots dropped in sealed boxes. Mechanical voting booths, punch cards, and then optical scan machines replaced hand-counted ballots. New computerized voting machines promise even more efficiency, and Internet voting even more convenience.

But in the rush to improve speed and scalability, accuracy has been sacrificed. And to reiterate: accuracy is not how well the ballots are counted by, for example, a punch-card reader. It’s not how the tabulating machine deals with hanging chads, pregnant chads, or anything like that. Accuracy is how well the process translates voter intent into properly counted votes.

Technologies get in the way of accuracy by adding steps. Each additional step means more potential errors, simply because no technology is perfect. Consider an optical-scan voting system. The voter fills in ovals on a piece of paper, which is fed into an optical-scan reader. The reader senses the filled-in ovals and tabulates the votes. This system has several steps: voter to ballot to ovals to optical reader to vote tabulator to centralized total.

At each step, errors can occur. If the ballot is confusing, then some voters will fill in the wrong ovals. If a voter doesn’t fill them in properly, or if the reader is malfunctioning, then the sensor won’t sense the ovals properly. Mistakes in tabulation—either in the machine or when machine totals get aggregated into larger totals—also cause errors. A manual system—tallying the ballots by hand, and then doing it again to double-check—is more accurate simply because there are fewer steps.
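To see how the steps compound, suppose step i independently mishandles a ballot with probability e_i; the chance a ballot survives the whole chain is the product of the per-step success rates. A toy calculation, with failure rates invented purely for illustration:

```python
# Toy numbers, invented for illustration: failure probabilities for ballot
# design, voter marking, scanner read, and tabulation, in that order.
step_errors = [0.010, 0.020, 0.005, 0.001]

p_counted = 1.0
for e in step_errors:
    p_counted *= 1 - e  # the ballot must survive every step

print(f"counted correctly: {p_counted:.4f}")      # 0.9644
print(f"overall error:     {1 - p_counted:.4f}")  # 0.0356
```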

The error rates in modern systems can be significant. Some voting technologies have a 5% error rate: one in twenty people who vote using the system don’t have their votes counted properly. This system works anyway because most of the time errors don’t matter. If you assume that the errors are uniformly distributed—in other words, that they affect each candidate with equal probability—then they won’t affect the final outcome except in very close races. So we’re willing to sacrifice accuracy to get a voting system that will more quickly handle large and complicated elections. In close races, errors can affect the outcome, and that’s the point of a recount. A recount is an alternate system of tabulating votes: one that is slower (because it’s manual), simpler (because it just focuses on one race), and therefore more accurate.
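A quick simulation (all parameters invented) shows the effect: a 5% unbiased error rate shaves the margin but rarely flips the winner unless the race was already very close.

```python
# Monte Carlo sketch with invented parameters: each ballot has a 5% chance
# of being flipped to the other candidate, in either direction equally.
import random

def counted_share(n=1_000_000, true_share_a=0.502, error_rate=0.05):
    votes_a = 0
    for _ in range(n):
        for_a = random.random() < true_share_a
        if random.random() < error_rate:
            for_a = not for_a  # unbiased error: flips either direction
        votes_a += for_a
    return votes_a / n

random.seed(1)
print(counted_share())  # ~0.5018: the 0.502 margin shrinks, but A still wins
```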

Note that this is only true if everyone votes using the same machines. If parts of town that tend to support candidate A use a voting system with a higher error rate than the voting system used in parts of town that tend to support candidate B, then the results will be skewed against candidate A. This is an important consideration in voting accuracy, although tangential to the topic of this essay.

With this background, the issue of computerized voting machines becomes clear. Actually, “computerized voting machines” is a bad choice of words. Many of today’s voting technologies involve computers. Computers tabulate both punch-card and optical-scan machines. The current debate centers around all-computer voting systems, primarily touch-screen systems, called Direct Record Electronic (DRE) machines. (The voting system used in India’s most recent election—a computer with a series of buttons—is subject to the same issues.) In these systems the voter is presented with a list of choices on a screen, perhaps multiple screens if there are multiple elections, and he indicates his choice by touching the screen. These machines are easy to use, produce final tallies immediately after the polls close, and can handle very complicated elections. They also can display instructions in different languages and allow for the blind or otherwise handicapped to vote without assistance.

They’re also more error-prone. The very same software that makes touch-screen voting systems so friendly also makes them inaccurate. And even worse, they’re inaccurate in precisely the worst possible way.

Bugs in software are commonplace, as any computer user knows. Computer programs regularly malfunction, sometimes in surprising and subtle ways. This is true for all software, including the software in computerized voting machines. For example:

In Fairfax County, VA, in 2003, a programming error in the electronic voting machines caused them to mysteriously subtract 100 votes from one particular candidate’s totals.

In San Bernardino County, CA in 2001, a programming error caused the computer to look for votes in the wrong portion of the ballot in 33 local elections, which meant that no votes registered on those ballots for that election. A recount was done by hand.

In Volusia County, FL in 2000, an electronic voting machine gave Al Gore a final vote count of negative 16,022 votes.

The 2003 election in Boone County, IA, had the electronic vote-counting equipment showing that more than 140,000 votes had been cast in the Nov. 4 municipal elections. The county has only 50,000 residents and less than half of them were eligible to vote in this election.

There are literally hundreds of similar stories.

What’s important about these problems is not that they resulted in a less accurate tally, but that the errors were not uniformly distributed; they affected one candidate more than the other. This means that you can’t assume that errors will cancel each other out and not affect the election; you have to assume that any error will skew the results significantly.

Another issue is that software can be hacked. That is, someone can deliberately introduce an error that modifies the result in favor of his preferred candidate. This has nothing to do with whether the voting machines are hooked up to the Internet on election day. The threat is that the computer code could be modified while it is being developed and tested, either by one of the programmers or a hacker who gains access to the voting machine company’s network. It’s much easier to surreptitiously modify a software system than a hardware system, and it’s much easier to make these modifications undetectable.

A third issue is that these problems can have further-reaching effects in software. A problem with a manual machine just affects that machine. A software problem, whether accidental or intentional, can affect many thousands of machines—and skew the results of an entire election.

Some have argued in favor of touch-screen voting systems, citing the millions of dollars that are handled every day by ATMs and other computerized financial systems. That argument ignores another vital characteristic of voting systems: anonymity. Computerized financial systems get most of their security from audit. If a problem is suspected, auditors can go back through the records of the system and figure out what happened. And if the problem turns out to be real, the transaction can be unwound and fixed. Because elections are anonymous, that kind of security just isn’t possible.

None of this means that we should abandon touch-screen voting; the benefits of DRE machines are too great to throw away. But it does mean that we need to recognize its limitations, and design systems that can be accurate despite them.

Computer security experts are unanimous on what to do. (Some voting experts disagree, but I think we’re all much better off listening to the computer security experts. The problems here are with the computer, not with the fact that the computer is being used in a voting application.) And they have two recommendations:

  1. DRE machines must have a voter-verifiable paper audit trail (sometimes called a voter-verified paper ballot). This is a paper ballot printed out by the voting machine, which the voter is allowed to look at and verify. He doesn’t take it home with him. Either he looks at it on the machine behind a glass screen, or he takes the paper and puts it into a ballot box. The point of this is twofold. One, it allows the voter to confirm that his vote was recorded in the manner he intended. And two, it provides the mechanism for a recount if there are problems with the machine.

  2. Software used on DRE machines must be open to public scrutiny. This also has two functions. One, it allows any interested party to examine the software and find bugs, which can then be corrected. This public analysis improves security. And two, it increases public confidence in the voting process. If the software is public, no one can insinuate that the voting system has unfairness built into the code. (Companies that make these machines regularly argue that they need to keep their software secret for security reasons. Don’t believe them. In this instance, secrecy has nothing to do with security.)

Computerized systems with these characteristics won’t be perfect—no piece of software is—but they’ll be much better than what we have now. We need to start treating voting software like we treat any other high-reliability system. The auditing that is conducted on slot machine software in the U.S. is significantly more meticulous than what is done to voting software. The development process for mission-critical airplane software makes voting software look like a slapdash affair. If we care about the integrity of our elections, this has to change.

Proponents of DREs often point to successful elections as “proof” that the systems work. That completely misses the point. The fear is that errors in the software—either accidental or deliberately introduced—can undetectably alter the final tallies. An election without any detected problems is no more proof that the system is reliable and secure than a night when no one broke into your house is proof that your door locks work. Maybe no one tried, or maybe someone tried and succeeded…and you don’t know it.

Even if we get the technology right, we still won’t be done. If the goal of a voting system is to accurately translate voter intent into a final tally, the voting machine is only one part of the overall system. In the 2004 U.S. election, problems with voter registration, untrained poll workers, ballot design, and procedures for handling problems resulted in far more votes not being counted than problems with the technology. But if we’re going to spend money on new voting technology, it makes sense to spend it on technology that makes the problem easier instead of harder.

This article originally appeared on openDemocracy.com.

Posted on November 10, 2004 at 9:15 AM
