Schneier on Security
A blog covering security and security technology.
October 8, 2010
Hacking Trial Breaks D.C. Internet Voting System
Sounds like it was easy:
Last week, the D.C. Board of Elections and Ethics opened a new Internet-based voting system for a weeklong test period, inviting computer experts from all corners to prod its vulnerabilities in the spirit of "give it your best shot." Well, the hackers gave it their best shot -- and midday Friday, the trial period was suspended, with the board citing "usability issues brought to our attention."
Stenbjorn said a Michigan professor whom the board has been working with on the project had "unleashed his students" during the test period, and one succeeded in infiltrating the system.
My primary worry about contests like this is that people will think a positive result means something. If a bunch of students can break into a system after a couple of weeks of attempts, we know it's insecure. But just because a system withstands a test like this doesn't mean it's secure. We don't know who tried. We don't know what they tried. We don't know how long they tried. And we don't know if someone who tries smarter, harder, and longer could break the system.
Posted on October 8, 2010 at 6:23 AM
You will never know how to fix it if you do not understand all the ways it can be broken.
We also don't know if it was broken by other teams as well, who, for whatever reason, decided not to go public.
And we don't know if someone successfully broke into it and didn't bother to tell us.
Probably should have had the testing done prior to buying it.
No matter who you vote for, the government always wins.
-Washington Beltway overpass graffiti c. 1983
To me, the real bombshell in this story is this:
"In the absence of the Digital Vote by Mail system, the Board will resume its past practice of allowing military and overseas voters to send a copy of their ballot by e-mail or fax."
They currently accept votes by e-mail and fax? Good grief...
And even if the tests appear to be successful, we don't know that someone DIDN'T break in. Maybe they did and decided that it was more lucrative to keep quiet and sell their information elsewhere, or use it to their own ends.
That could be said of every system which exists today.
While a successful (con)test of this sort is not conclusive proof of an absolute level of security, running through such a process surely results in a system that is stronger than the original. This is similar to not having any belief in the value of an encryption algorithm developed by an amateur in private, compared to one that has gone through a strong peer review. This kind of contest is not a strong peer review, but it is a large step upward from only using testing chosen by the developer.
As Dijkstra said:
Testing can be used to show the presence of bugs, but never to show their absence.
Nothing anyone builds is impregnable against unlimited resources, Bruce.
"You will never know how to fix it if you do not understand all the ways it can be broken." [trapspam]
Corollary: You will never know if you know all the ways it can be broken.
Implication: You will never know if you know how to fix it.
Conclusion: Did you fix anything at all?
I love how one of the "usability issues" was the fact that the web site was playing the Michigan fight song.
Interesting take on this article. Great point. I also think it is worth noting that the voting system they are using is open source.
"But just because a system withstands a test like this doesn't mean it's secure. We don't know who tried. We don't know what they tried. We don't know how long they tried. And we don't know if someone who tries smarter, harder, and longer could break the system."
Couldn't one make this same complaint about all the encryption schemes in popular use?
"And we don't know if someone who tries smarter, harder, and longer could break the system."
I think that it should be a foregone conclusion that for sufficiently high values of "smarter" and "longer", the answer is, in almost all cases, yes. After all, any system must have flaws, and with sufficient time and effort, someone can devise a way to exploit those flaws.
So, what are the chances that the makers of the voting software will sue the Michigan student who hacked the system?
Plus even if it had passed, what is to stop someone from standing behind people at their terminal and giving them money when they vote for [whomever].
Internet voting has all the dumb of any of the computerized systems plus a complete new Pandora's box all its own.
It's like the OSI model -- every new version of voting machine encapsulates all the previous systems' insecurities plus adds another layer of new ones.
I miss punch card voting. I even miss the butterfly ballot.
@Joe Joe at October 8, 2010 8:39 AM
Surely you'd agree though that an internet voting system, something that affects the very basis of our government so fundamentally, should be a heck of a lot more secure than this.
@Mark R at October 8, 2010 7:47 AM
Well, email is pretty bad, but I can think of far worse ways than fax.
Presumably with either, that particular link in the voting system wouldn't provide for anonymous voting. At least I hope so... accepting anonymous faxed and emailed votes would be even worse than some pencilpusher knowing who you voted for. Similar lapses in privacy are also needed for handicapped voters.
Either system is obviously unacceptable for widespread use.
What worries me is not this testing seeming so 'light', but that it was essentially open. The presumption is that one doesn't open things up to this kind of testing until AFTER they're pretty certain that the system is secure. From what has been claimed, it was a fairly fundamental flaw.
It's broken from the start. The concept fundamentally undermines the importance of the secret ballot.
"Here, vote now."
It's open to both intimidation (physical threats from strangers, family social pressure, etc.) and bribery (vote the way we wish and we'll give you five bucks).
I don't support default absentee ballots / vote by mail for the same reason. If you have a legitimate reason (you're in the military, business travel, etc) it's the best option.
Otherwise, if you must have early voting systems (and this would apply to most business and leisure travelers unless you're waiting till the last minute to decide), let them at least be in an office where nominal security is in place to assure it's a secret ballot.
I still don't like the early voting because it's subject to more chances of tampering, compared to a single day at a polling place where the process can be observed easily and in the open. The organization needed to observe early voting for weeks before the election day would need to be much, much greater.
The results were unsurprising: e-voting companies have a horrible track record. Also, as Applied Cryptography showed, the ideal voting system must meet many seemingly contradictory requirements and do so in a way that's usable by lay people. AFAIK, we don't even have a verified protocol that meets all requirements. Then, these guys are trying to meet a subset of them and still can't produce a basic system?
Every time I hear about eVoting, all I can think is that the whole thing is FUBAR. Every system I've seen invites cheating by one or both parties or has a ton of bugs. I wanted to contribute to some of the open-source initiatives, but I think they are a waste. They are inherently flawed at the protocol level. The government gets too much power.
I think we'd need a national PKI and tamper-resistant authentication devices for every voter, manufactured by numerous offshore firms, before we could even begin building an evoting system that keeps the power in the people's hands. This would naturally use a decentralized voting protocol that uses the aforementioned items as security building blocks. Otherwise, there are few enough systems to target that well-funded professionals could manipulate votes [again...]. I say we do what Ireland(?) did and go back to paper/optical. Recounts would be done by giving at least 10 randomly selected voters an equal portion of the paper copies, then they could add the votes together.
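The distributed recount idea above is simple enough to sketch: deal the paper copies out randomly among counters, have each tally their portion independently, then add the tallies together. A minimal sketch in Python (the list-of-strings ballot representation is an invented stand-in for the paper copies):

```python
import random
from collections import Counter

def distributed_recount(ballots, num_counters=10, seed=None):
    # Shuffle so no counter gets a predictable slice of the ballots.
    rng = random.Random(seed)
    shuffled = list(ballots)
    rng.shuffle(shuffled)
    # Deal the ballots out into roughly equal piles, one per counter.
    piles = [shuffled[i::num_counters] for i in range(num_counters)]
    # Each counter tallies their own pile; the totals are then summed.
    total = Counter()
    for pile in piles:
        total += Counter(pile)
    return total

votes = ["A"] * 60 + ["B"] * 40
print(distributed_recount(votes, seed=1))  # Counter({'A': 60, 'B': 40})
```

Whatever the random partition, the summed tallies must equal a single count of all the ballots, which is what makes splitting the work among many independent counters safe.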
Using open source, allowing people to look 'under the hood' for problems, and having even a couple days of public scrutiny before deployment, are all significant steps up from previous practices. I'm much more surprised by what they got right this time than the fact that they still got something wrong.
On the other hand, the design was described as 'brittle', which suggests whoever produced this product either had no understanding of secure design, or else had requirements (e.g. time/budget) that forbade it. This strikes me as a deeper problem than the 'contest' testing, which might simply have been a side effect of the same issues that led to brittleness.
We've written about these issues so many times before on this blog and elsewhere it is a shame that the results have not improved much.
Remind me again, what was the value proposition of electronic voting?
I think many of the comments here are a bit harsh. Any attempt to open the procedure to scrutiny should be applauded IMO, even if we accept that perfection is not achievable.
They could greatly improve the process with some cash rewards for major bugs/holes found. Even if someone finds security holes that they wish to keep to themselves, the cash incentive might entice others to reveal the problems for immediate reward.
BTW, the security surrounding ballot box voting is far from perfect in many cases.
Remember, "Vote early and often."
@Matt from CT
"I still don't like the early voting because it's subject to more chances of tampering, compared to a single day at a polling place where the process can be observed easily and in the open. The organization needed to observe early voting for weeks before the election day would need to be much, much greater."
In the 2007 Scottish elections, we had over 140k spoilt ballot papers out of 2,016k with several constituency seats being decided with a majority which was less than the number of spoilt ballots. And that was still paper voting (using a computer system to count, with human double checking).
So, I agree. If that mess can happen while still on paper - I can only imagine evoting will be several fold worse.
The fairness in elections should be the overriding factor; not coolness, not cost, but fairness and accuracy.
@Bruce's original post:
"But just because a system withstands a test like this doesn't mean it's secure. We don't know who tried. We don't know what they tried. We don't know how long they tried. And we don't know if someone who tries smarter, harder, and longer could break the system."
If you hire a security consultancy to do a penetration test, you may know "who", "how long" and roughly "what". However, it is still the situation that someone trying "smarter, harder, and longer" may still break that same system.
What you do have is a reduced likelihood of easily exploitable security issues, albeit an unquantifiable reduction.
I realize I'm at the wrong forum to ask this question, but here goes.
Is open source verifiable code really more secure?
For me the problem is that all open source systems allow the attacker to see potential weaknesses before they invent an exploit.
Closed source systems with encrypted programs deny the attacker the knowledge of where to attack, so they add a level of difficulty before the hack can even take form.
The problem for me, is that I don't believe that any human accessible system can possibly be secure, so it is only a question of how long it takes to hack the target system AND what costs are involved in the hack. Every barrier you can add adds security....
@RobertT: It depends on the size/importance of the system. For a system where the gains of breaking it greatly outweigh the investment in doing so, open source makes sense, since this levels the playing field and allows everyone to find the security holes, and in turn contribute back to fix them, regardless of their original intent: "good guys" will do it for the greater good, "bad guys" will do it to prevent [an unknown number of] others from exploiting it before they do and possibly gaining an advantage over them. In a closed system, those with enough resources can get access to the code anyway, and rest assured that only they or a few [usually known] others know about anything they find.
"The problem for me, is that I don't believe that any human accessible system can possibly be secure"
Not quite true: "No system is" -- and by that I mean any and all non-trivial systems, irrespective of human access.
That is because it is not possible (even inside of a black hole) to have a system fully isolated from the environment it's in. As the system has no control over the how, where, or what of the input, theoretically and practically its internal state can be changed without the system being aware of it.
I'm fairly sure you are aware of the next bit but I'll give it for the sake of younger readers ;)
For a real world practical example, once upon a time computers of all sizes used EPROMs to store code such as the "boot ROM". The assumption was that it was always "Read Only Memory" even though it was both Programmable and Erasable, and thus each time you read byte X you would always get value Z.
Well, the early EPROMs had a very large transistor size etc., thus they had a high immunity to EM radiation except at very, very high frequencies (ultraviolet).
This meant that IR and visible light did not affect their operation, so the little quartz window used to erase the EPROM did not need to be covered unless UV was around.
Well, within a relatively short time the size of transistors etc. shrank many orders of magnitude, as did the energy per device, and all of a sudden the devices became susceptible to energy in EM radiation in the visible light and IR spectrum. Which meant if you did not cover up the little quartz window properly you would not always get value Z when reading byte X...
I'm old enough to have fallen foul of this, and I wasted something like a week trying to track down random bugs in a safety-critical system I was designing. The penny finally dropped when I realised there were no bugs in the early morning and late afternoon, or whenever somebody blocked the sunlight falling on my work bench...
The problem today is that not only have the transistors etc. continued to shrink, other factors are now involved. As any space engineer will tell you, designing electronic systems for satellites is a real issue due to EM radiation up in the ionising spectrum causing "bit flips", and there is no way of detecting all of them all the time.
Some would say "so what", but some forms of radiation can pass right through the earth, and at lower energy levels can certainly pass through the metal case of a computer, through the chip packaging and into the chip, where they might or might not flip a bit... don't you just love probability :(
Well, as NASA realised quite early on, you just have to accept that there is no definite solution to the problem, use probabilistic methods, and "cross your fingers". Which is why they have systems using not just parity checks and their better brethren but also multiple different systems arranged in a "voting protocol".
But even so, any and all non-trivial systems will remain susceptible and thus will, in probabilistic time, "take a walk in the park" and need a severe kick in the Non Maskable Interrupt known as a Hard Reset (go straight to jail and do not pass go or collect your two hundred pounds ;).
Which brings us back to the question of "human intervention": can a human exploit this vulnerability? The simple answer is "if not today then in all probability tomorrow" (hence my interest in EM fault injection and side channels).
Which brings us back to your correct and more succinct observation,
"so it is only a question of how long it takes to hack the target system AND what costs are involved in the hack."
But it does not mention time/resource trade-offs.
Which is in essence what your Open -v- Closed source argument is about. However, for the downside in time to attack Open Source, you generally have the "many eyes" (if open) and many hands to fix. You also have little or none of the "security by obscurity" mindset of closed source that gives rise to the "it ain't broken unless you prove it, and then we prosecute the heck out of you for having dared look" mentality that was endemic in "Closed Source" (and some would say still is).
However your observation of,
"Every barrier you can add adds security..."
Is problematic because,
'Every thing you add increases the size of the system and thus complexity.'
The problem with complexity is that unless you are extremely careful how you deal with it, you also increase the number of potential attack vectors.
That is, you really need to know what you are doing, and there are very, very few engineers that actually do (even in the EmSec brigade). I think I've met maybe five or six in over thirty years as an engineer designing systems, many of which have to be high assurance (and before you ask, I'm still learning, just as they were and I assume still are).
If you hunt back through Bruce's blog you will find I've made a few postings with relevance to this area when chatting to Nick P, some of which refer to what I call "Probabilistic Security" or the "Castles -v- Prison" architecture. Some of which are highly relevant to your question.
In essence what I'm looking into is how to get around the implications of Church and Turing's halting problem ( http://en.wikipedia.org/wiki/Halting_problem ) and the couple of not so minor spanners Kurt Godel came up with with regards to undecidability via his incompleteness theorems ( http://en.wikipedia.org/wiki/... ).
The approach I'm taking is along the same lines as that of the physicists and mathematicians at Los Alamos during the development of the nuclear bomb. They realised that due to complexity there were some questions that could not be accurately answered in a reasonable time, or at all. So they looked at taking a "probabilistic" approach to finding answers, and the Monte Carlo method was born (which Fermi had independently discovered several years prior to this) ( http://library.lanl.gov/la-pubs/00326866.pdf )
The essential feature of Monte Carlo methods is statistical sampling: random sampling is an efficient way to evaluate complicated, many-dimensional, nonlinear problems ( http://en.wikipedia.org/wiki/Monte_Carlo_method ).
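As a toy illustration of that essential feature, here is the textbook Monte Carlo estimate of pi: sample random points in the unit square and count the fraction that land inside the quarter circle (the standard classroom example, not one from the LANL paper):

```python
import random

def estimate_pi(samples, seed=0):
    # The fraction of random points in the unit square that fall inside
    # the quarter circle (x^2 + y^2 <= 1) approximates pi/4.
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4.0 * inside / samples

print(estimate_pi(100_000))  # close to 3.14; error shrinks as ~1/sqrt(samples)
```

No amount of sampling proves the answer exactly; it only tightens the probable bounds, which is precisely the kind of guarantee being discussed here.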
One of the issues with systems and security is that Kurt Godel's second incompleteness theorem shows that if a system (as described in the first incompleteness theorem) is capable of proving certain basic facts about the natural numbers, then one particular arithmetic truth the system cannot prove is the consistency of the system itself...
That is, a General Purpose Computing (GPC) system as described by Church and Turing, generally called a Turing Engine / Machine (TM), is incapable of deciding if it is consistent with its rules of operation using its own rules. Or, more appropriately for this, it cannot tell if it is secure or not... However, within certain bounded limits a trivial state machine can check if certain aspects of a TM GPC are consistent, thus there is a rabbit hole in the usual arguments arising.
A simple example being the case of a State Machine Hypervisor (SMH) that stops the GPC and then goes through its instruction "code" memory and checks it is still unmodified, and likewise with "data" memory, which should be in a known state.
Obviously the SMH cannot stop the GPC and check too frequently, otherwise the efficiency of the GPC would drop below acceptable limits. Thus its ability to check for memory corruption (malware or otherwise) becomes a matter of statistical sampling.
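That sampling trade-off can be sketched in a few lines. This is a toy model under loud assumptions: "memory" is just a byte string, the SMH holds known-good SHA-256 digests of each code page, and each spot check hashes only a random sample of pages, so a single pass may miss tampering but repeated passes catch it with high probability:

```python
import hashlib
import random

PAGE = 64  # toy page size in bytes

def page_digests(memory):
    # Known-good digest of every page, recorded while the code image is trusted.
    return [hashlib.sha256(memory[i:i + PAGE]).hexdigest()
            for i in range(0, len(memory), PAGE)]

def sampled_check(memory, good, sample=4, rng=random):
    # Hypervisor-style spot check: hash a random sample of pages and
    # compare against the known-good digests.
    for p in rng.sample(range(len(good)), min(sample, len(good))):
        if hashlib.sha256(memory[p * PAGE:(p + 1) * PAGE]).hexdigest() != good[p]:
            return False  # corruption detected on page p
    return True

code = bytearray(b"\x90" * 512)            # stand-in for frozen code memory
good = page_digests(bytes(code))
assert sampled_check(bytes(code), good)    # a clean image always passes
code[100] ^= 0xFF                          # simulate malware patching one byte
# A small sample may miss the bad page; sampling every page cannot:
assert not sampled_check(bytes(code), good, sample=len(good))
```

Raising `sample` raises both the detection probability per check and the time the GPC spends stopped, which is exactly the efficiency-versus-coverage trade described above.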
Once you have accepted the idea of a security hypervisor watching over your user work GPC system a number of other problems quickly drop out.
For instance, the von Neumann architecture ( http://en.wikipedia.org/wiki/... ) is considerably less secure, reliable and efficient than the generalised Harvard architecture ( http://en.wikipedia.org/wiki/Harvard_architecture ). The plus point for von Neumann in a general purpose computer with an operating system is that in a single CPU system you can load in arbitrary code as data and then execute it (which is a wonderful property if you are malware ;)
However, for most production applications code does not change and does not need to; thus it is just the ability to load code that prevents the Strict / Pure Harvard architecture being used, hence the Modified Harvard architecture commonly in use ( http://en.wikipedia.org/wiki/... ).
The solution in this case is to use the hypervisor to load the code into "instruction space", and one huge source of malware attack vectors disappears. Oh, and in reality most high end CPUs these days are Modified Harvard architecture internally; the "von Neumann" bit is bolted on near the address and data bus outputs at the very edge of the CPU...
Then there is the issue of memory management. This is always an issue on single CPU systems, as the CPU has to control its own memory, which means smart malware gets to control which memory can be executed from, etc.
With a hypervisor system the actual control of memory can be done by the hypervisor, and the GPC can just make a request for a change of memory size and usage etc., which the hypervisor can either grant or deny. This means that the GPC, under the influence of malware or corrupted code memory, cannot arbitrarily change things, and an attempt to execute data as code at the CPU instruction level will fail.
Obviously interpreters, which base their execution in code space on values in data space, will still be able to run, which is another security issue, and also why even the Strict / Pure Harvard architecture is not de facto secure in use.
This can in most cases be solved by not having an interpreter or anything like it in production code, which unfortunately can be a limitation on functionality. However, the hypervisor can again probabilistically check that the "interpreter instructions" in data space do not get changed.
I could go on and also talk about the various "execution signatures" etc. the hypervisor can monitor whilst the GPC is executing, but this is a long enough post as it is; likewise using the Unix shell script / command pipeline philosophy of pre-compiled little tools etc., which takes the burden of secure programming off of code cutters.
The important point is to remember that the hypervisor is designed so that it can never execute code in the GPC memory spaces, thus stopping malware "bubbling up". Likewise it monitors as many aspects of the GPC functionality as it can, either indirectly by observing external GPC execution activity (signatures) or directly by stopping the GPC and checking the memory and other spaces for unknown or unexpected values.
The system is not perfectly secure (nothing ever is, by definition), however it goes a long way to removing most of the currently used attack vectors, and others that are not currently used.
Well, it's a matter of potential. Here's a few points for you to consider:
1. A system can be "secure" in the risk management sense whereby it appropriately handles the threats it was designed to handle. Ignore, log, delay or prevent are the common approaches.
2. There are systems out there that are extremely secure and where remote attacks are highly unlikely. Properly configured OpenBSD, XTS-400, Aesec's Gemini Network Processor, and Boeing's SNS guard come to mind. It's all a tradeoff involving requirements, time to market, development cost and risk.
3. Open source doesn't instantly improve assurance: it only increases the potential assurance.
4. Having a major application open sourced gives defenders a greater chance of spotting bugs. Studies show that, on average, bugs are caught more quickly in the high profile open source apps than proprietary alternatives. Largely proportional to the size of developer community doing bugfixes and such...
5. Keeping a major application closed source does not improve its security against simple attacks like buffer overflows and format string errors. The black hats' tools and techniques are very good at finding these holes in binary or at the assembler level. They don't need the source. There's also usually a large number of black hats targeting any one major application, increasing their effectiveness.
6. Keeping a major application closed source with obfuscation techniques may delay some higher-level attacks. However, reverse engineering is a skill many black hats possess and their tools are better than ever. Remember that Skype, one of the most obfuscated closed source apps out there, was reversed and hit with protocol level attacks.
So, the attackers will find bugs in the closed source app if it's important. However, it's hard for defenders to find or fix bugs in these apps because that's not their skillset. So, security-critical apps are better off open source. Besides, security engineering is hard and most developers aren't any good at it. They are likely to make mistakes. "More eyes" is a very good policy in situations like that.
This brings to mind what that Google CEO said (when he spoke about how much Google cares about people's privacy in the near future), namely that only people who have something to hide would want privacy.
So with that in mind I want the government to hand out all the information about the e-voting system.
And Cisco to hand out their routers source code.
@ Hum Ho
"And cisco to hand out their routers source code"
I would find that much more useful. Personally, anyway. With their source code and specs to build drivers, I bet I could build better firewall appliances overnight that were 100% Cisco compatible from an admin and usage standpoint. I almost feel like I'm exaggerating, but I have a few tricks up my sleeve there. I'd give them the new source as I.P. if they bought it at a fair rate. (Maybe the going rate for experienced embedded engineers....)
To be honest, though, I'd be more interested in getting the source code for Windows 7, Windows 2008 Server, IIS, Exchange and Sharepoint. These are the most used apps in small businesses across the globe. Redesigning to reduce trusted code and rewriting security-critical code via low defect methodologies would greatly reduce attack surface for these apps. The pervasiveness of the apps means the attack surface of much of the country would be reduced against those types of threats. Then, the next battle would begin. ;)
The problem in thinking open source gives users full insight into the system used is that the users cannot be 100% sure that the system in use is the exact system that is available on the repository.
For that reason you should build applications like gpg from source yourself...
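Before building, you can at least check that the tarball you downloaded matches the checksum the project publishes (and, better still, verify its signature). A minimal sketch; the filename is illustrative and the expected digest would come from the project's download page:

```python
import hashlib

def sha256_of(path, bufsize=65536):
    # Stream the file through SHA-256 so large tarballs don't need
    # to fit in memory all at once.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(bufsize):
            h.update(chunk)
    return h.hexdigest()

# published = "..."  # digest copied from the project's site
# assert sha256_of("gnupg-x.y.z.tar.bz2") == published
```

Of course this only moves the trust to the published checksum and the channel it came over, which is why signed release files are the stronger practice.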
Did the framers of the Constitution envision e-voting? I think not. Ballots are slips of paper, ever since the Greek pottery shards were replaced.
E-voting is a government scam, done to remove the last vestige of relevance to the vote. We live in a society where the government has always needed some war, and now with the GWOT it can target citizens, who are the real fear of the government sleazes anyway.
Notice that the government makes laws and then decides who they apply to. Peace activists are harassed by the FBI, who say they may have talked to the people Israel wants land from, while the news says that Karzai has been talking to the Taliban. Where is the FBI and its theft of computers for Karzai? How can Obama talk to Karzai when Karzai is talking to the Taliban? This is based on the assumption that the government does not have to obey any law at all. A gov badge makes you a legal criminal.
It won't help much, or at all. You have to trust the binary you run at some point. As Ken Thompson showed many years ago in "Reflections on Trusting Trust", you cannot even trust the compiler or assembler, as it can arbitrarily inject backdoors under the hood -- unless you have crafted it yourself in machine code... But if you take your paranoia that far, you cannot trust your CPU either...
Can some of these problems be mitigated, just as we can mitigate voter fraud in traditional elections?
e.g. increase observability, auditability, cross-checking, etc?
@ Peter A
"...you cannot trust your CPU"
Actually, you can't. Most processors have errata that allow you to DOS or subvert a system in a way the OS and software can't prevent. The registers, cache, poor specs, and little known operating modes all help the attacker more than the defender. So, no, you can't trust your CPU. That's the root of the problem in building a truly secure system.
The only exception is the AAMP7G. VAMP was formally verified, but AAMP7G implements separation in hardware.
Remote voting systems violate one very essential element of voting: the fact that everybody can verify that such a vote is made secretly and freely without pressure from one or another. That is why we have to go to a polling booth, and fill in the candidate of our preference privately and alone -- no bystanders allowed.
No matter how 'secure' the system, how can I know a vote coming in by the internet, pretending to be from person A was actually made while person B was standing behind the person with a big sledgehammer and a not to be misunderstood instruction to vote B's preference?
What scares me isn't a positive result because nobody happened to hack into it, this time. What I most fear is a positive result because whomever DID hack into it didn't do anything lasting, instead sitting back with a 0-day in their back pocket.
We only know that everyone who wanted to could test the closed source voting application without penalty.
It is fortunate that some of them were white hats who reported their findings.
@Andrew Gumbrell, "As Dijkstra said: Testing can be used to show the presence of bugs, but never to show their absence."
Yes, indeed, that is why we need to verify systems rather than test them: http://www.bensmyth.com/publications/...
@Foolish Jordan, in response to "But just because a system withstands a test like this doesn't mean it's secure." Jordan wrote "Couldn't one make this same complaint about all the encryption schemes in popular use?"
No. The security of encryption schemes has been verified. Unfortunately, it is far more difficult to verify a cryptographic protocol (that is, something built from encryption schemes).
@Matthew Cline, "So, what are the chances that the makers of the voting software will sue the Michigan student who hacked the system?"
It would make an interesting case, given that the Michigan students were asked to hack the system!
@bob (the original bob), "what is to stop someone from standing behind people at their terminal and giving them money when they vote for [whomever]."
Isn't the same true for paper-based systems? (Electronic voting does not necessarily imply voting from home.)
@Nick P, "Every system I've seen invites cheating by one or both parties or has a ton of bugs. I wanted to contribute to some of the open-source initiatives, but I think they are a waste. They are inherently flawed at the protocol level."
This is generally true for the schemes governments have adopted. But does not consider the work of academics. For example,
* Civitas http://www.cs.cornell.edu/projects/civitas/ by Clarkson, Chong, & Myers (based upon Juels, Catalano, & Jakobsson)
* Helios http://heliosvoting.org/ by Adida, Marneffe, Pereira & Quisquater
* Pret-a-voter http://www.pretavoter.com/ by Ryan et al.
* Scantegrity http://www.scantegrity.org/ by Chaum et al.
@ Ben Smyth
I appreciate the links. I'll revisit them. The last time I checked on some of these they were either lacking a few of Bruce's stated requirements or a bit complex for the layman. The last point cannot be overstated: I saw routine ignorance in action when a person tried to pay with their credit card at the grocery store.
They swiped their card the wrong way, although the graphic was clear. It offered two options: credit or debit. They asked which button to press. "The one next to credit." (who would have guessed?) The offer of cash back stumped them. The cashier repeated what the screen said, the customer said "don't want cash back," and was instructed to hit no. Finally, it said "amount ok" with the order's total on it and yes or no. After that was explained, the customer pressed yes. I asked the cashier about this and she said at least 20 people per shift can't figure the things out at her register alone. And we expect them to understand a complex, cryptographic voting protocol? When they can't figure out how to pay by credit card at a grocery? I think we must consider these things in the design of the next voting system. (Scantegrity shows promise in this area.)
Schneier.com is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc..