Entries Tagged "voting"


Security Analysis of a 13th Century Venetian Election Protocol

I love stuff like this: “Electing the Doge of Venice: Analysis of a 13th Century Protocol,” by Miranda Mowbray and Dieter Gollmann.

This paper discusses the protocol used for electing the Doge of Venice between 1268 and the end of the Republic in 1797. We will show that it has some useful properties that, in addition to being interesting in themselves, also suggest that its fundamental design principle is worth investigating for application to leader election protocols in computer science. For example, it gives some opportunities to minorities while ensuring that more popular candidates are more likely to win, and offers some resistance to corruption of voters. The most obvious feature of this protocol is that it is complicated and would have taken a long time to carry out. We will advance a hypothesis as to why it is so complicated, and describe a simplified protocol with very similar features.

Venice was very clever about working to avoid the factionalism that tore apart a lot of its Italian rivals, while making the various factions feel represented.

Posted on July 27, 2007 at 12:08 PM

Designing Voting Machines to Minimize Coercion

If someone wants to buy your vote, he’d like some proof that you’ve delivered the goods. Camera phones are one way for you to prove to your buyer that you voted the way he wants. Belgian voting machines have been designed to minimize that risk.

Once you have confirmed your vote, the next screen doesn’t display how you voted. So if one is coerced and has to deliver proof, one just takes a picture of the vote one was coerced into, then backs out from the screen and changes one’s vote. The only workaround I see is for the coercer to demand a video of the complete voting process, instead of a picture of the ballot.

The author is wrong that this is an advantage electronic ballots have over paper ballots. Paper voting systems can be designed with the same security features.
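The back-out flow described above can be sketched as a tiny state machine. This is my own illustration, not the actual Belgian software; the class and method names are invented:

```python
class CoercionResistantBooth:
    """Sketch of the flow described above: after confirming, the review
    screen no longer reveals the selection, and the voter can still back
    out and change the vote before finally casting it."""

    def __init__(self):
        self.selection = None
        self.cast = False

    def select(self, candidate):
        # Selections can be changed any number of times before casting.
        if not self.cast:
            self.selection = candidate

    def confirm_screen(self):
        # The screen a coercer's photo would capture: it reveals nothing.
        return "Vote recorded. Press Back to change, Cast to finish."

    def back(self):
        # Backing out clears the confirmed selection.
        if not self.cast:
            self.selection = None

    def cast_ballot(self):
        self.cast = True
        return self.selection


booth = CoercionResistantBooth()
booth.select("A")             # the coerced choice, photographed
print(booth.confirm_screen()) # the photo shows nothing about the vote
booth.back()
booth.select("B")             # the voter's real choice
print(booth.cast_ballot())    # prints "B"
```

The key property is that the only screen worth photographing carries no information, so a still picture proves nothing.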

Posted on June 27, 2007 at 12:09 PM

Is Big Brother a Big Deal?

Big Brother isn’t what he used to be. George Orwell extrapolated his totalitarian state from the 1940s. Today’s information society looks nothing like Orwell’s world, and watching and intimidating a population today isn’t anything like what Winston Smith experienced.

Data collection in 1984 was deliberate; today’s is inadvertent. In the information society, we generate data naturally. In Orwell’s world, people were naturally anonymous; today, we leave digital footprints everywhere.

1984’s police state was centralized; today’s is decentralized. Your phone company knows who you talk to, your credit card company knows where you shop and Netflix knows what you watch. Your ISP can read your email, your cell phone can track your movements and your supermarket can monitor your purchasing patterns. There’s no single government entity bringing this together, but there doesn’t have to be. As Neal Stephenson said, the threat is no longer Big Brother, but instead thousands of Little Brothers.

1984’s Big Brother was run by the state; today’s Big Brother is market driven. Data brokers like ChoicePoint and credit bureaus like Experian aren’t trying to build a police state; they’re just trying to turn a profit. Of course these companies will take advantage of a national ID; they’d be stupid not to. And the correlations, data mining and precise categorizing they can do are why the U.S. government buys commercial data from them.

1984-style police states required lots of people. East Germany employed one informant for every 66 citizens. Today, there’s no reason to have anyone watch anyone else; computers can do the work of people.

1984-style police states were expensive. Today, data storage is constantly getting cheaper. If some data is too expensive to save today, it’ll be affordable in a few years.

And finally, the police state of 1984 was deliberately constructed, while today’s is naturally emergent. There’s no reason to postulate a malicious police force and a government trying to subvert our freedoms. Computerized processes naturally throw off personalized data; companies save it for marketing purposes, and even the most well-intentioned law enforcement agency will make use of it.

Of course, Orwell’s Big Brother had a ruthless efficiency that’s hard to imagine in a government today. But that completely misses the point. A sloppy and inefficient police state is no reason to cheer; watch the movie Brazil and see how scary it can be. You can also see hints of what it might look like in our completely dysfunctional “no-fly” list and useless projects to secretly categorize people according to potential terrorist risk. Police states are inherently inefficient. There’s no reason to assume today’s will be any more effective.

The fear isn’t an Orwellian government deliberately creating the ultimate totalitarian state, although with the U.S.’s programs of phone-record surveillance, illegal wiretapping, massive data mining, a national ID card no one wants and Patriot Act abuses, one can make that case. It’s that we’re doing it ourselves, as a natural byproduct of the information society. We’re building the computer infrastructure that makes it easy for governments, corporations, criminal organizations and even teenage hackers to record everything we do, and—yes—even change our votes. And we will continue to do so unless we pass laws regulating the creation, use, protection, resale and disposal of personal data. It’s precisely the attitude that trivializes the problem that creates it.

This essay appeared in the May issue of Information Security, as the second half of a point/counterpoint with Marcus Ranum. Here’s his half.

Posted on May 11, 2007 at 9:19 AM

Dutch eVoting Scandal

Interesting:

His software is used with the Nedap voting machines currently used in 90 per cent of the electoral districts, and although it is not used in the actual vote count, it does tabulate the results on both a regional and national level.

According to the freedom of information disclosures, Groenendaal wrote to election officials in the lead up to the national elections in November 2006, threatening to cease “cooperating” if the government did not accede to his requests.

Posted on March 23, 2007 at 6:12 AM

Ensuring the Accuracy of Electronic Voting Machines

A Florida judge ruled (text of the ruling) that the defeated candidate has no right to examine the source code in the voting machines that determined the winner in a disputed Congressional race.

Meanwhile:

A laboratory that has tested most of the nation’s electronic voting systems has been temporarily barred from approving new machines after federal officials found that it was not following its quality-control procedures and could not document that it was conducting all the required tests.

That company is Ciber Inc.

Is it just me, or are things starting to make absolutely no sense?

Posted on January 4, 2007 at 12:06 PM

Revoting

In the world of voting, automatic recount laws are not uncommon. Virginia, where George Allen lost to James Webb in the Senate race by 7,800 out of over 2.3 million votes, or 0.33 percent, is an example. If the margin of victory is 1 percent or less, the loser is allowed to ask for a recount. If the margin is 0.5 percent or less, the government pays for it; if the margin is between 0.5 percent and 1 percent, the loser pays for it.
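The Virginia rules amount to a simple decision procedure. A sketch, following the thresholds as described above rather than the statute’s exact text:

```python
def recount_terms(margin_pct):
    """Virginia-style recount rules, as described above:
    margin <= 0.5%: the loser may request a recount at government expense;
    0.5% < margin <= 1%: the loser may request one but pays for it;
    otherwise: no recount right."""
    if margin_pct <= 0.5:
        return "government pays"
    if margin_pct <= 1.0:
        return "loser pays"
    return "no recount right"


# The 2006 Allen-Webb race: 7,800 votes out of roughly 2.37 million cast.
margin = 7_800 / 2_370_000 * 100  # about 0.33 percent
print(round(margin, 2), "->", recount_terms(margin))
```

So Allen’s 0.33 percent loss fell inside the government-pays band, which is why a recount was his to request at no cost.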

We have recounts because vote counting is—to put it mildly—sloppy. Americans like their election results fast, before they go to bed at night. So we’re willing to put up with inaccuracies in our tallying procedures, and ignore the fact that the numbers we see on television correlate only roughly with reality.

Traditionally, it didn’t matter very much, because most voting errors were “random errors.”

There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random—equally likely to happen to anyone. In a close race, random errors won’t change the result because votes intended for candidate A that mistakenly go to candidate B happen at the same rate as votes intended for B that mistakenly go to A. (Mathematically, as candidate A’s margin of victory increases, random errors slightly decrease it.)

This is why, historically, recounts in close elections rarely change the result. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. In an extremely close election, a careful recount will yield a different result—but that’s a rarity.

The other kind of voting error is a systemic error. These are errors in the voting process—the voting machines, the procedures—that cause votes intended for A to go to B at a different rate than the reverse.

An example would be a voting machine that mysteriously recorded more votes for A than there were voters. (Sadly, this kind of thing is not uncommon with electronic voting machines.) Another example would be a random error that only occurs in voting equipment used in areas with strong A support. Systemic errors can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A.

Even worse, systemic errors can introduce errors out of proportion to any actual randomness in the vote-counting process. That is, the closeness of an election is not any indication of the presence or absence of systemic errors.
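A quick simulation illustrates the distinction between the two error types. The candidate names, error rates, and seed here are arbitrary: with only random misreads, a 51-49 lead survives the noise, but a modest systemic A-to-B error flips the same electorate to B.

```python
import random


def tally(true_a, true_b, p_random, p_systemic, rng):
    """Count a two-candidate race with per-ballot misread probabilities.
    p_random flips a vote in either direction with equal likelihood;
    p_systemic flips votes only from A to B."""
    a = b = 0
    for _ in range(true_a):
        if rng.random() < p_systemic or rng.random() < p_random / 2:
            b += 1  # A's vote misrecorded for B
        else:
            a += 1
    for _ in range(true_b):
        if rng.random() < p_random / 2:
            a += 1  # B's vote misrecorded for A
        else:
            b += 1
    return a, b


rng = random.Random(1)
# 1% random error alone: the errors cancel and A's 51-49 lead survives.
a, b = tally(51_000, 49_000, 0.01, 0.0, rng)
print("random only, A minus B:", a - b)
# Add a 3% systemic A-to-B error: the same electorate now elects B.
a, b = tally(51_000, 49_000, 0.01, 0.03, rng)
print("with systemic error, A minus B:", a - b)
```

The random misreads subtract and add votes at the same rate on both sides, so the expected margin barely moves; the systemic error moves thousands of votes in one direction only.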

When a candidate has evidence of systemic errors, a recount can fix a wrong result—but only if the recount can catch the error. With electronic voting machines, all too often there simply isn’t the data: there are no votes to recount.

This year’s election in Florida’s 13th Congressional District is such an example. The winner won by a margin of 373 out of 237,861 total votes, but as many as 18,000 votes were not recorded by the electronic voting machines. These votes came from areas where the loser was favored over the winner, and would have likely changed the result.

Or imagine this—as far as we know—hypothetical situation: After the election, someone discovers rogue software in the voting machines that flipped some votes from A to B. Or someone gets caught vote tampering—changing the data on electronic memory cards. The problem is that the original data is lost forever; all we have is the hacked vote.

Faced with problems like this, we can do one of two things. We can certify the result anyway, regretful that people were disenfranchised but knowing that we can’t undo that wrong. Or, we can tell everyone to come back and vote again.

To be sure, the very idea of revoting is rife with problems. Elections are a snapshot in time—election day—and a revote will not reflect that. If Virginia revoted for the Senate this year, the election would not just be for the junior senator from Virginia, but for control of the entire Senate. Similarly, in the 2000 presidential election in Florida, or the 2004 presidential election in Ohio, single-state revotes would have decided the presidency.

And who should be allowed to revote? Should only people in those precincts where there were problems revote, or should the entire election be rerun? In either case, it is certain that more voters will find their way to the polls, possibly changing the demographic and swaying the result in a direction different than that of the initial set of voters. Is that a bad thing, or a good thing?

Should only people who actually voted—records are kept—or who could demonstrate that they were erroneously turned away from the polls be allowed to revote? In this case, the revote will almost certainly have fewer voters, as some of the original voters will be unable to vote a second time. That’s probably a bad thing—but maybe it’s not.

The only analogy we have for this is the run-off election, which is required in some jurisdictions if the winning candidate didn’t get 50 percent of the vote. But it’s easy to know when you need to have a run-off. Who decides, and based on what evidence, that you need to have a revote?

I admit that I don’t have the answers here. They require some serious thinking about elections, and what we’re trying to achieve. But smart election security not only tries to prevent vote hacking—or even systemic electronic voting-machine errors—it prepares for recovery after an election has been hacked. We have to start discussing these issues now, when they’re non-partisan, instead of waiting for the inevitable situation, and the pre-drawn battle lines those results dictate.

This essay originally appeared on Wired.com.

Posted on November 16, 2006 at 6:07 AM

The Need for Professional Election Officials

In the U.S., elections are run by an army of hundreds of thousands of volunteers. These are both Republicans and Democrats, and the idea is that the one group watches the other: security by competing interests. But at the top are state-elected or -appointed officials, and many election shenanigans in the past several years have been perpetrated by them.

In yet another New York Times op-ed, Loyola Law School professor Richard Hasen argues for professional, non-partisan election officials:

The United States should join the rest of the world’s advanced democracies and put nonpartisan professionals in charge. We need officials whose ultimate allegiance is to the fairness, integrity and professionalism of the election process, not to helping one party or the other gain political advantage. We don’t need disputes like the current one in Florida being resolved by party hacks.

[…]

To improve the chances that states will choose an independent and competent chief elections officer, states should enact laws making that officer a long-term gubernatorial appointee who takes office only upon confirmation by a 75 percent vote of the legislature—a supermajority requirement that would ensure that a candidate has true bipartisan support. Nonpartisanship in election administration is no dream. It is how Canada and Australia run their national elections.

To me, this is easier said than done. Where are these hundreds of thousands of disinterested election officials going to come from? And how do we ensure that they’re disinterested and fair, and not just partisans in disguise? I actually like security by competing interests.

But I do like his idea of a supermajority-confirmed chief elections officer for each state. And at least he’s starting the debate about better election procedures in the U.S.

Posted on November 13, 2006 at 2:57 PM

The Inherent Inaccuracy of Voting

In a New York Times op-ed, New York University sociology professor Dalton Conley points out that vote counting is inherently inaccurate:

The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get—you guessed it—four different results. That’s the nature of large numbers—there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.

But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.

He’s right, but it’s more complicated than that.

There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random. Votes intended for A that mistakenly go to B are just as likely as votes intended for B that mistakenly go to A. This is why, traditionally, recounts in close elections are unlikely to change things. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. But in a very close election, a careful recount will yield a more accurate—but almost certainly not perfectly accurate—result.

Systemic errors are more important, because they will cause votes intended for A to go to B at a different rate than the reverse. Those can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A. These errors can either be a particular problem in the system—a badly designed ballot, for example—or a random error that only occurs in precincts where A has more supporters than B.

Here’s where the problems of electronic voting machines become critical: their errors are more likely to be systemic. Vote flipping, for example, seems to generally affect one candidate more than another. Even individual machine failures will affect supporters of one candidate more than another, depending on where the particular machine is. And if there are no paper ballots to fall back on, no recount can undo these problems.

Conley proposes to nullify any election where the margin of victory is less than 1%, and have everyone vote again. I agree, but I think his margin is too large. In the Virginia Senate race, Allen was right not to demand a recount. Even though his 7,800-vote loss was only 0.33%, in the absence of systemic flaws it is unlikely that a recount would change things. I think an automatic revote if the margin of victory is less than 0.1% makes more sense.

Conley again:

Yes, it costs more to run an election twice, but keep in mind that many places already use runoffs when the leading candidate fails to cross a particular threshold. If we are willing to go through all that trouble, why not do the same for certainty in an election that teeters on a razor’s edge? One counter-argument is that such a plan merely shifts the realm of debate and uncertainty to a new threshold—the 99 percent threshold. However, candidates who lose by the margin of error have a lot less rhetorical power to argue for redress than those for whom an actual majority is only a few votes away.

It may make us existentially uncomfortable to admit that random chance and sampling error play a role in our governance decisions. But in reality, by requiring a margin of victory greater than one, seemingly arbitrary vote, we would build in a buffer to democracy, one that offers us a more bedrock sense of security that the “winner” really did win.

This is a good idea, but it doesn’t address the systemic problems with voting. If there are systemic problems, there should be another election day limited to only those precincts that had the problem and only those people who can prove they voted—or tried to vote and failed—during the first election day. (Although I could be persuaded that another re-voting protocol would make more sense.)

But most importantly, we need better voting machines and better voting procedures.

EDITED TO ADD (11/17): I mischaracterized Conley’s position. He says that there should be a revote when the margin of error is greater than the margin of victory, not when the margin of victory is less than 1 percent.

In terms of a two-candidate race in which each has attained around 50 percent of the vote, a 1 percent margin of error would be represented by 1.29 divided by the square root of the number of votes

That’s a really good system, although it will be impossible to explain to the general public.
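Plugging the Virginia numbers into Conley’s formula makes the point concrete. This is my own arithmetic, assuming roughly 2.37 million votes cast:

```python
import math


def revote_threshold(n_votes):
    """Conley's 99 percent margin of error for a near 50-50 two-way race:
    1.29 / sqrt(n). (1.29 is roughly the 99 percent z-value, 2.576,
    times the 0.5 standard deviation of a single near-even vote.)"""
    return 1.29 / math.sqrt(n_votes)


# The Virginia Senate race: about 2.37 million votes, a 7,800-vote margin.
n = 2_370_000
print(f"revote threshold: {revote_threshold(n):.4%}")  # under 0.1 percent
print(f"actual margin:    {7_800 / n:.4%}")            # about 0.33 percent
```

With 2.37 million votes, the threshold works out to well under a tenth of a percent, so Allen’s 0.33 percent loss was outside the statistical noise under Conley’s rule, consistent with my point above that a recount was unlikely to change anything.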

Posted on November 13, 2006 at 12:03 PM

