The Inherent Inaccuracy of Voting
In a New York Times op-ed, New York University sociology professor Dalton Conley points out that vote counting is inherently inaccurate:
The rub in these cases is that we could count and recount, we could examine every ballot four times over and we’d get—you guessed it—four different results. That’s the nature of large numbers—there is inherent measurement error. We’d like to think that there is a “true” answer out there, even if that answer is decided by a single vote. We so desire the certainty of thinking that there is an objective truth in elections and that a fair process will reveal it.
But even in an absolutely clean recount, there is not always a sure answer. Ever count out a large jar of pennies? And then do it again? And then have a friend do it? Do you always converge on a single number? Or do you usually just average the various results you come to? If you are like me, you probably settle on an average. The underlying notion is that each election, like those recounts of the penny jar, is more like a poll of some underlying voting population.
He’s right, but it’s more complicated than that.
There are two basic types of voting errors: random errors and systemic errors. Random errors are just that, random. Votes intended for A that mistakenly go to B are just as likely as votes intended for B that mistakenly go to A. This is why, traditionally, recounts in close elections are unlikely to change things. The recount will find the few percent of the errors in each direction, and they’ll cancel each other out. But in a very close election, a careful recount will yield a more accurate—but almost certainly not perfectly accurate—result.
Systemic errors are more important, because they cause votes intended for A to go to B at a different rate than the reverse. These can make a dramatic difference in an election, because they can easily shift thousands of votes from A to B without any counterbalancing shift from B to A. Such errors can be either a flaw in the system itself (a badly designed ballot, for example) or an otherwise random failure that happens to occur in precincts where A has more supporters than B.
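To make the distinction concrete, here's a minimal simulation sketch. It's my own illustration, not anything from Conley's op-ed; the error rates and turnout are invented. Misreads at equal rates in both directions leave a close margin essentially intact, while the same error rate applied in only one direction swamps it:

```python
# Sketch: random vs. systemic counting errors in a close race.
# All figures here are invented for illustration.
import random

random.seed(1)
N = 1_000_000        # ballots cast
TRUE_A = 500_500     # ballots actually marked for A (a 1,000-vote true lead)

def tally(p_a_to_b, p_b_to_a):
    """Count once, misrecording each ballot with the given flip probabilities."""
    a = 0
    for i in range(N):
        marked_a = i < TRUE_A
        flipped = random.random() < (p_a_to_b if marked_a else p_b_to_a)
        if marked_a != flipped:   # ballot ends up recorded for A
            a += 1
    return a, N - a

# Random error: 1% of ballots misread in each direction.
a, b = tally(0.01, 0.01)
print("random errors:   recorded A-B margin =", a - b)   # stays near +1,000

# Systemic error: 1% of A's ballots flip to B, none the other way.
a, b = tally(0.01, 0.0)
print("systemic errors: recorded A-B margin =", a - b)   # roughly -9,000
```

The random case wobbles around the true margin; the systemic case moves it by about ten thousand votes, enough to flip this hypothetical race outright.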
Here’s where the problems with electronic voting machines become critical: their errors are more likely to be systemic. Vote flipping, for example, seems generally to affect one candidate more than the other. Even individual machine failures will affect supporters of one candidate more than another, depending on where the particular machine is. And if there are no paper ballots to fall back on, no recount can undo these problems.
Conley proposes to nullify any election where the margin of victory is less than 1%, and have everyone vote again. I agree, but I think his margin is too large. In the Virginia Senate race, Allen was right not to demand a recount. Even though his 7,800-vote loss was only 0.33%, in the absence of systemic flaws it is unlikely that a recount would change things. I think an automatic revote if the margin of victory is less than 0.1% makes more sense.
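As a quick sanity check on those numbers, here's a back-of-the-envelope sketch. The margin figures are the ones quoted above; the turnout is simply derived from them:

```python
# Back-of-the-envelope check of the thresholds under discussion.
margin_votes = 7_800                        # Allen's losing margin, per the post
margin_pct = 0.33                           # that margin as a percent of votes cast
total_votes = margin_votes / (margin_pct / 100)
print(f"implied turnout: {total_votes:,.0f} votes")      # about 2.4 million

for threshold_pct in (1.0, 0.1):            # Conley's 1% rule vs. the 0.1% rule
    print(f"revote under a {threshold_pct}% rule:",
          margin_pct < threshold_pct)
# A 1% rule would have forced a Virginia revote; a 0.1% rule would not.
```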
Conley again:
Yes, it costs more to run an election twice, but keep in mind that many places already use runoffs when the leading candidate fails to cross a particular threshold. If we are willing to go through all that trouble, why not do the same for certainty in an election that teeters on a razor’s edge? One counter-argument is that such a plan merely shifts the realm of debate and uncertainty to a new threshold—the 99 percent threshold. However, candidates who lose by the margin of error have a lot less rhetorical power to argue for redress than those for whom an actual majority is only a few votes away.
It may make us existentially uncomfortable to admit that random chance and sampling error play a role in our governance decisions. But in reality, by requiring a margin of victory greater than one, seemingly arbitrary vote, we would build in a buffer to democracy, one that offers us a more bedrock sense of security that the “winner” really did win.
This is a good idea, but it doesn’t address the systemic problems with voting. If there are systemic problems, there should be another election day limited to only those precincts that had the problem and only those people who can prove they voted—or tried to vote and failed—during the first election day. (Although I could be persuaded that another re-voting protocol would make more sense.)
But most importantly, we need better voting machines and better voting procedures.
EDITED TO ADD (11/17): I mischaracterized Conley’s position. He proposes a revote when the margin of victory falls within the statistical margin of error, computed at the 99 percent confidence level, not when it is simply less than 1 percent. From his op-ed:
In terms of a two-candidate race in which each has attained around 50 percent of the vote, a 1 percent margin of error would be represented by 1.29 divided by the square root of the number of votes.
That’s a really good system, although it will be impossible to explain to the general public.
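Unpacking the formula, as I read it: at a roughly 50/50 split, the margin of error for a vote share is z times the square root of p(1-p)/n, and at the 99 percent confidence level z is about 2.576, which works out to roughly 1.29/sqrt(n). Here's a sketch applying it to turnout on the scale of the Virginia race, with vote totals taken from the comment below:

```python
# Conley's threshold: the 99%-confidence margin of error at a 50/50 split.
from math import sqrt

z_99 = 2.576                 # two-sided 99% confidence multiplier
p = 0.5                      # near-even split
n = 2_364_217                # Virginia 2006 turnout, from the comment below

moe = z_99 * sqrt(p * (1 - p) / n)    # equals about 1.29 / sqrt(n)
print(f"margin of error:   {moe:.3%}")                 # about 0.084%

margin_of_victory = (1_172_671 - 1_165_440) / n        # Webb's lead
print(f"margin of victory: {margin_of_victory:.3%}")   # about 0.306%
print("revote under Conley's rule:", margin_of_victory < moe)   # False
```

For a race this size, Conley's cutoff works out to roughly 0.08 percent, close to the 0.1 percent figure I proposed above.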
Robert the Red • November 13, 2006 12:24 PM
How about a revote if the margin is less than b*sqrt(N), where N is the number of votes cast and b is around 3? This would be proportional to the standard deviation of the vote if it were a Bernoulli coin flip. Allen was out at b=4.7 (Webb=1172671, Allen=1165440, Parker=26106), kind of past the edge of where a bizarre concatenation of random events could flip things. But try to explain the statistics to a legislator?!
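The commenter’s arithmetic checks out. Here is the same calculation spelled out, with the vote totals as given in the comment:

```python
# Under a fair-coin model each ballot is +1 or -1 with probability 1/2,
# so the A-B vote difference has standard deviation sqrt(N).
from math import sqrt

webb, allen, parker = 1_172_671, 1_165_440, 26_106
N = webb + allen + parker                 # 2,364,217 votes cast
b = (webb - allen) / sqrt(N)              # margin measured in standard deviations
print(f"margin = {webb - allen:,} votes, b = {b:.1f}")   # b is about 4.7
# At the suggested cutoff of b around 3 there would have been no revote here.
```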