I haven’t had the chance to review the reports. Matt Blaze has a good summary on his blog:
We found significant, deeply-rooted security weaknesses in all three vendors’ software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being “unrealistic”. It should now be clear that the red teams were successful not because they somehow “cheated,” but rather because the built-in security mechanisms they were up against simply don’t work properly. Reliably protecting these systems under operational conditions will likely be very hard.
I just read Matt Bishop’s description of the miserable schedule and support that the California Secretary of State’s office gave to the voting-machine review effort:
The major problem with this study is time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State said that under no circumstances would it be extended.
The second problem was lack of information. In particular, various documents did not become available until July 13, too late to be of any value to the red teams, and the red teams did not have several security-related documents. Further, some software that would have materially helped the study was never made available.
Matt Blaze, who led the team that reviewed the Sequoia code, had similar things to say:
Reviewing that much code in less than two months was, to say the least, a huge undertaking. We spent our first week (while we were waiting for the code to arrive) setting up infrastructure, including a Trac Wiki on the internal network that proved invaluable for keeping everyone up to speed as we dug deeper and deeper into the system. By the end of the project, we were literally working around the clock.
It seems that we have a new problem to worry about: the Secretary of State has no clue how to get a decent security review done. Perversely, it was good luck that the voting machines tested were so horribly bad that the reviewers found vulnerabilities despite a ridiculous schedule — one month simply isn’t reasonable — and egregious foot-dragging by vendors in providing needed materials.
Next time, we might not be so lucky. If a vendor sees that it can avoid embarrassment by stalling delivery of its most vulnerable source code for four weeks, we might end up with the Secretary of State declaring that the system survived rigorous testing and is therefore secure. Given that refusing to cooperate incurred no penalty in this round of testing, we can expect vendors to work that angle more energetically in the future.
The Secretary of State’s own web page gives top billing to the need “to restore the public’s confidence in the integrity of the electoral process,” while the actual security of the machines is relegated to second place.
We need real security evaluations, not feel-good fake tests. I wish this one had been more of the former and less of the latter.
EDITED TO ADD (8/4): California Secretary of State Bowen’s certification decisions are online.
She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart Intercivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.)
To those who thought she was staging this review as security theater, this seems like evidence to the contrary. She wants to do the right thing, but has no idea how to conduct a security review.
EDITED TO ADD (8/4): The Diebold software is pretty bad.
EDITED TO ADD (8/5): Ed Felten comments:
It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
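Felten's list is worth pausing on. As a minimal sketch of one of the mistake classes he names, consider using a cyclic redundancy code for data integrity: a CRC catches accidental corruption, but anyone who can modify the stored data can simply recompute it, whereas a keyed MAC cannot be forged without the key. The code below is illustrative only; it is not drawn from any vendor's software, and the key and record format are invented for the example.

```python
# Illustration of "CRC for integrity" vs. a keyed MAC -- not vendor code.
import hashlib
import hmac
import zlib

KEY = b"example-key-known-only-to-the-tabulator"  # hypothetical per-device secret

def protect_with_crc(record: bytes) -> bytes:
    """'Protect' a vote record with a CRC32 checksum (the broken approach)."""
    return record + zlib.crc32(record).to_bytes(4, "big")

def verify_crc(blob: bytes) -> bool:
    record, crc = blob[:-4], blob[-4:]
    return zlib.crc32(record).to_bytes(4, "big") == crc

def protect_with_hmac(record: bytes) -> bytes:
    """Protect the same record with HMAC-SHA256 (a sound approach)."""
    return record + hmac.new(KEY, record, hashlib.sha256).digest()

def verify_hmac(blob: bytes) -> bool:
    record, tag = blob[:-32], blob[-32:]
    expected = hmac.new(KEY, record, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

if __name__ == "__main__":
    # An attacker who can write to storage just rewrites the record and
    # recomputes the CRC -- verification still passes.
    forged = protect_with_crc(b"candidate=A;votes=900")
    print("CRC check accepts forged record:  ", verify_crc(forged))    # True

    # Without the key, the attacker cannot produce a valid HMAC tag.
    genuine = protect_with_hmac(b"candidate=A;votes=100")
    tampered = b"candidate=A;votes=900" + genuine[-32:]
    print("HMAC check accepts genuine record:", verify_hmac(genuine))  # True
    print("HMAC check accepts forged record: ", verify_hmac(tampered)) # False
```

The same pattern explains why fixed keys and ECB mode matter: once a key never changes and identical plaintext blocks always encrypt to identical ciphertext blocks, an attacker can learn and replay structure without ever breaking the cipher itself.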
And Avi Rubin comments:
As I read the three new reports, I could not help but marvel at the fact that so many places in the US are using these machines. When it comes to prescription medications, we perform extensive tests before drugs hit the market. When it comes to aviation, planes are held to standards and tested before people fly on them. But, it seems that the voting machines we are using are even more poorly designed and poorly implemented than I had realized.
He’s right, of course.