Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Comments

MathFox August 9, 2007 10:06 AM

@ NZRuss;

The problem isn’t COTS as such; most people buy standard locks for their (physical) home security, and security professionals will use standard locks to implement security policies.
The problem with COTS software is the lack of available security information (usually due to a total lack of security design). Many suppliers show a total lack of awareness of security issues: they don’t ask for secure designs, don’t use secure tools/languages, don’t do verification, etc. It is a problem at board level: denying the existence of security issues.

bzelbob August 9, 2007 10:17 AM

BS: “…How anyone comes to this presumption is a mystery to me.”

Bruce, it’s because there’s a LOT of money at stake. It’s NOT an accident or stupidity, it’s corruption. Honest people who were informed about bad things would try to fix them.

And it’s not just the money to be made off the voting machine contracts. It’s the money to be made selling voting-machine exploits to corrupt political parties. If you can get your politicians put into power for a few tens of thousands of dollars, the next thing you know, you get a president elected who pushes for needless wars that enrich certain companies, paying the investment back a thousandfold.

I think voting machines should have to be proven secure by law, before any usage.

Brandioch Conner August 9, 2007 10:17 AM

The problem is that MOST people don’t have a clue what “security” is.

The best they can do is physical security. And even then, only with regard to their own personal property.

If they can’t touch it and see it, it isn’t “real” to them. It can’t be “secured”.

There is absolutely NO ******* REASON that we should be using computers as the “vote” in our elections.

Paper is easier for people to see and touch and therefore, easier for them to understand the “security” of.

Right now, most people hold two contradictory beliefs:
#1. Knowing that their own computers are unstable, crash, and get infected.

#2. Trusting that the “computers” the “government” uses are infallible.

Rich Gibbs August 9, 2007 10:19 AM

Thank you, Dr. Schneier, for once again injecting a dose of sense into this discussion.

The standard attitude to security that you describe is just like a new tenant who walks into the kitchen, flips on the light, and sees 10 cockroaches — then assumes that, once those 10 are stepped on, everything is OK.

derf August 9, 2007 10:21 AM

Even if you have a perfectly secure system, you still aren’t secure. People will have access to data through the system, and people are one of the links in the chain. It’s as easy to hack the human as it is to hack the systems today, if not easier. It’s the human element that clicks on the link in the email saying “You’ve won”. It’s the human at the keyboard who willingly installs the keylogger that pretends to be a weather information program.

Foolproof systems are easily defeated by better fools. You may be able to design a system that won’t allow such nonsense as keyloggers by default, but the human can still be convinced to manually give the trojan “weather program” the permissions needed to get their precious weather data while also snooping the users’ banking passwords.

greg August 9, 2007 10:52 AM

I would really like to see some improvements in programming languages, and even in code execution models. The fact that buffer overflows happen at all is not a problem per se; the fact that they can allow arbitrary code to be executed is.

Ada is not a silver bullet, any more than Java or Scheme/LISP/etc. is. But they are all an improvement over C/C++.

Finally, KISS. A lock is just a lock. But we don’t write “just a lock” software; we write “is a lock, a toaster and a microwave oven in one” software. The feature bloat almost guarantees insecure software.

John R Campbell August 9, 2007 11:15 AM

Realize that any commercial firm is trying to satisfy its shareholders, and so it is motivated to cut corners.

Unfortunately, security is just one of the places where cutting corners is far less visible to the end-customer who isn’t digging to see that it’s being done right.

Just like other realities, customer service – i.e., giving the customer what they think they’re paying for – is NOT something that improves shareholder value THIS QUARTER.

Clive Robinson August 9, 2007 11:51 AM

As I have often said before,

“We need to put engineering into software”.

Instead of using the old “artisan” methodology of the pre-Victorian era for making machines, where if something failed you simply tacked another bit on until either it stopped breaking or the whole machine came crashing down around your ears…

For those more interested, look up acts of parliament and boiler explosions; it might sound eerily familiar…

Todd Knarr August 9, 2007 11:53 AM

I’ve done audit work, and the one constant rule there was: never assume your system is secure. We didn’t even assume that the original documents at the bottom of the whole accounting system were accurate. What we looked for were discrepancies between parts of the system, places where one party trying to fudge things didn’t have access to change something or couldn’t change it without causing another discrepancy elsewhere. For example, we’d compare the orders Housekeeping placed for laundry soap with the invoices Shipping & Receiving had for deliveries from the vendor with the records Accounts Payable had for the checks they wrote to pay for the shipments. Someone could fudge any of those three sets of records, but the same person couldn’t fudge all of them the same way, and we’d spot the mismatch.
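The cross-checking Todd describes can be sketched in a few lines of Python. The ledger names and figures below are hypothetical, invented purely to illustrate the technique: a party who fudges one set of records creates a mismatch with the sets they can't touch.

```python
# Three independently kept records of the same laundry-soap purchases
# (hypothetical figures). Accounts Payable's ledger has been fudged.
orders   = {"2007-07": 40, "2007-08": 35}   # Housekeeping's orders (cases)
received = {"2007-07": 40, "2007-08": 35}   # Shipping & Receiving's invoices
paid     = {"2007-07": 40, "2007-08": 50}   # Accounts Payable's checks

def discrepancies(*ledgers):
    """Return the periods in which the independent records disagree."""
    periods = set().union(*ledgers)
    return sorted(p for p in periods
                  if len({ledger.get(p) for ledger in ledgers}) > 1)

print(discrepancies(orders, received, paid))  # → ['2007-08']
```

Any one ledger can be forged in isolation; forging all three consistently requires collusion across departments, which is exactly what the separation of records is meant to prevent.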

Voting systems should be the same way. Making the system unhackable should be secondary to simply making it possible to tell, after the fact and without assuming any part of the system is correct and untampered-with, whether or not the final tally is correct. We have that with paper ballots. If you think the ballot-boxes have been tampered with you don’t have to trust anyone that they weren’t, you can look at the seals yourself. If you think the counters are crooked you don’t have to trust that they’re tallying correctly, you can watch them count and run your own tally and see whether your numbers and theirs match. The only assumption we’re forced to make is that the ballot the voter dropped in the ballot box reflects his intended vote, and we justify that by saying that if it didn’t the voter could’ve seen that as he filled out the ballot and wouldn’t’ve put it in the ballot box until it did.

Oddly, electronic voting machines can help with that last assumption. Drop the direct recording part, drop all the untamperable memory cards and all the problems with a purely electronic vote. Have the machine print out a paper ticket with a human-readable statement of the vote for each item and the same thing encoded in bar code beside it. No issues of alignment or anything else; the voter can read the ticket and see whether it says the right thing. No problems with illegible marks or ambiguous letters; printers make nice clear characters. The bar codes can be scanned as fast as you can feed slips through the reader, with (if the reader’s working properly) a negligible error rate. And you can double-check the readers by picking ballot boxes at random, manually tallying the human-readable votes on them and comparing that tally to the reader’s tally of the same box. And in the end, if at any point you discover the tallies are wrong, you can recover. You’ve got the original slips with the human-readable votes; you can, if worst comes to worst, dump them out on the counting table and have your human counters manually tally the vote. That just leaves one spot in the system to secure: making sure the ballot boxes can’t have their contents tampered with without someone noticing. And frankly we’ve got plenty of experience doing that.

At that point we don’t care whether the voting machine firmware or hardware was tampered with. If it printed the human-readable vote wrong, the voter’ll notice and raise a fuss when the machine refuses to print what he told it to. If it printed the human-readable vote right but put a different bar-code on, the double-check will show a discrepancy and we’ll discard the bar-code.

And if Diebold says they can’t make a reliable printer for a voting machine, I have only one question for them: “Then how do you make reliable printers for ATMs?”

Pat Cahalan August 9, 2007 11:57 AM

And most of the time, we don’t care. Commercial software, as
insecure as it is, is good enough for most purposes.

This is sort of the crux of the problem.

It’s not that people don’t understand security as a general concept, it’s that they don’t understand security as part of a risk analysis and the familiar becomes the trusted. If I ask my father (who is hardly a computer geek) if he would use his computer to connect to a federal database, he would laugh – of course not. But if I ask a federal employee if they’d use their computer to connect to a federal database, their first response would be, “Wow, I could get so much more work done…” Unless you’re a security professional or basically paranoid by nature, once you’re inside the Safe Zone you forget how much work is taking place at the perimeter; you become accustomed to doing things a certain way. Moving your operations to the Not So Safe Zone requires a ton of training to rewire your brain into being more careful.

Most people, once they say that a tool is “good enough” for a finite set of purposes, don’t retest the tool for “good enough” for purpose [n+1]. As a specific example… if Alice needs to hang a picture on the wall and she doesn’t have a hammer, she might go to the hardware store and buy one. She’s going to pick a cheap hammer that’s small and lightweight; she’s just hammering in a penny nail. But once that hammer is in her utility drawer, she’s going to find other uses for it. Eventually, she’s going to try to hammer something that really needs a better hammer, and her hammer is going to break.

And she’s going to be surprised when it happens.

Snow’s paper on Assurance is great (it’s been a long-time favorite of mine), but there is one fundamental flaw – standard COTS software just isn’t designed this way, for the reasons Bruce points out, and it’s not likely to be anytime in the near future.

So, if you have the budget to build solutions from scratch, and you need security, follow the principles of Assurance and you’ll build something pretty decent. However, if you don’t have the budget, or the time, or the political will to build a solution from scratch, you’re going to reach into your utility drawer and pull out that small hammer at some point, because it’s “good enough”.

And it’s going to break.

Anonymous August 9, 2007 11:57 AM

@bzelbob

Honest people who were informed about bad things would try to fix them.

No – they usually say “We’ve always done it this way”.

abacus August 9, 2007 12:58 PM

Further to Todd Knarr

There is data to suggest that many voters don’t read the machine’s printed record of their votes. Especially with a long ballot, the voter can get lost checking whether the record is all correct.

A hand-marked paper ballot is better in this respect. Maybe a machine could be designed to require the voter to confirm each vote before moving to the next item on the ballot?

Also, for accessibility, we’d need scanners to read back to, e.g., a blind voter what the written record has on it…

LBM August 9, 2007 3:37 PM

@Todd Knarr:

One of the Brennan Center studies (I forget which one) considered the case where the VVPT and the counted electronic vote differ, which would be analogous to having the bar code differ from the legible print in your example. The premise was that some fairly low shift could compromise the election results with a reasonably low probability of detection, and an even lower probability of apprehension. Thus it was considered a credible attack method.

A better alternative would be printing the human readable ballot in a very clear font and counting it via OCR, but that still is vulnerable to attacks on the counting system that might not be detected.

guvn't August 9, 2007 3:49 PM

@Bruce, “it means that your design process was so lousy that it permitted buffer overflows,”

A minor quibble: buffer overflows are implementation flaws more than design flaws.

More precisely, a buffer overflow is an implementation flaw in the program that overflows its buffer. An unchecked/uncaught buffer overflow is a design flaw in the programming environment and platform upon which it is executing (see comments below).

@greg, “I would really like to see some improvements in programing languages, and even code execution models. The fact that buffer overflows happen at all is not a problem per say, the fact that it can allow arbitrary code to be executed is.”

Disagree with the “not a problem”, agree totally with the desire for improvements.

A buffer overflow is a bug. It’s an internal inconsistency in program logic, because the amount of storage allocated does not match the amount of storage used. Any time your program logic contains an internal inconsistency, it is a problem, because execution results cannot be guaranteed to be correct.

Thing is, this has been a solved problem for years. Hardware memory protection facilities allow enforcement by throwing exceptions, so the problems can be caught easily. It’s just laziness and sloppiness that they are not.
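The enforcement guvn't points to is easy to see in any bounds-checked environment. A small Python illustration, with a fixed-size array standing in for an allocation: the out-of-bounds write raises an exception at the faulting statement instead of silently clobbering whatever sits next to the buffer, as an unchecked C array would.

```python
import array

buf = array.array("b", [0] * 8)   # a fixed 8-byte buffer
adjacent = "secret"               # stands in for whatever lies next in memory

overflow_caught = False
try:
    for i in range(16):           # logic error: writes twice the allocation
        buf[i] = 1
except IndexError:
    overflow_caught = True        # the inconsistency is caught at the fault

print(overflow_caught)  # → True
print(adjacent)         # → secret (untouched)
```

The bug (allocating 8 and writing 16) is still a bug, and still crashes the program if unhandled; the difference is that it cannot be escalated into executing an attacker's code.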

Todd Knarr August 9, 2007 4:22 PM

LBM: true, but it’s also dealable with. You decide what probability of undetected tampering with the election is acceptable (and there’s always an acceptable limit; it’s just sometimes very, very low), and then crunch the basic statistics formulas to find out how large a sample you need to check to ensure there’s no more than that chance of a discrepancy going undetected.

I’d also note that part of the issue is self-correcting. To go undetected the amount of alteration in the results needs to be small. The larger the alteration, the smaller a sample needed to reveal it. At the same time the alteration has to be large enough to change the result. Changing the results by 1% in a race decided by a 10% margin doesn’t change the outcome. So the races where undetected alteration’s a problem are specifically the very close ones. But those are also the ones where we’re most likely to check them closely using large samples (the extreme being a complete manual recount to insure accurate totals).
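The statistics Todd alludes to are short. If a fraction f of the ballot boxes were tampered with, an audit of n randomly chosen boxes misses every one of them with probability roughly (1 − f)^n, so pushing the miss rate below a threshold α needs n ≥ ln α / ln(1 − f). A sketch with illustrative numbers:

```python
import math

def audit_sample_size(tampered_fraction: float, miss_probability: float) -> int:
    """Smallest n such that (1 - f)^n <= alpha: the chance that a random
    audit of n boxes sees no tampered box at all is at most alpha."""
    return math.ceil(math.log(miss_probability)
                     / math.log(1.0 - tampered_fraction))

# A swing big enough to matter in a lopsided race needs a small sample...
print(audit_sample_size(0.10, 0.05))   # → 29
# ...while a razor-thin 1% alteration needs roughly ten times the audit.
print(audit_sample_size(0.01, 0.05))   # → 299
```

This makes the self-correcting property concrete: the larger the alteration, the smaller the sample that exposes it, and the close races that demand the biggest samples are exactly the ones most likely to get full manual recounts anyway.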

And all in all I’m not sanguine about OCR-based counting. My experience has been that bar codes are a lot less likely to be misread at high feed rates. The lower the error rate on the optical counting, the lower we can set the bar for discrepancies indicating a problem, and the less likely any tampering with the machine-optical count is to slip through undetected.

Tim August 9, 2007 5:07 PM

Let us hope that in the coming years we get out of this mentality of “patch and pray.”

Shane August 9, 2007 6:54 PM

I’m not an expert on policy or security, but would it be constitutional or politically feasible for Congress to pass a law requiring electronic voting machines used in federal elections to be reviewed by (or even designed by) the NSA’s Information Assurance Division?

I’ve been thinking about this since reading the Brian Snow essay you linked to in another post today, since it seems like (and I should hope that) the NSA employs a lot of people who know quite a bit about security.

JackG't August 9, 2007 10:21 PM

Shane, I appreciate the desire and need for assurances in voting, but should we want to get the NSA involved? I’m trying to think of the implications as to, let’s see, conflicts of interest, communications monitoring, satellite photography, voter ID, voter qualifications and disqualifications, ….

Nostromo August 10, 2007 2:20 AM

In most countries, you vote by marking a piece of paper. The marked pieces of paper are then counted by hand in the presence of representatives of the contestants. This seems to work quite well. If there’s a dispute about the accuracy of the counting, it can be re-done.

What is the point of introducing a computer into the process? What problem needs to be solved here? There are lots of activities in which a computer can help, but I don’t see the value of automation in this application, except to the vendors of voting machines.

guvn'r August 10, 2007 9:54 AM

@nostromo, “I don’t see the value of automation in this application, except to the vendors of voting machines.”

and your question is what?

seriously, there are other benefits, especially to the broadcast media pandering to the public appetite for instant results. Paper ballots take time to count, network commentators want to have final results immediately if not sooner so they don’t have to worry about exceeding their audience’s attention span.

Zach August 10, 2007 9:59 AM

Excellent. And further, with regard to voting machines: before deciding whether or not they are secure, a more fundamental decision must be made, and that is whether or not they even fit into the general act of voting. I, and many others, believe that voting machines do not fit into the act of voting. There are parts of the act of voting that many people view as immutable (cannot be changed): things like the physical ballot, the physical polling place, the physical voting rolls. When these immutable concepts are not treated as such, we start down a slippery slope that gets us to where we are today, with electromechanical devices used to vote and tally the vote.

Pat Cahalan August 10, 2007 10:58 AM

@ Shane

but would it be constitutional or politically feasible for Congress to pass a
law requiring electronic voting machines used in federal elections to be reviewed

Cursory read of the Constitution says Yes.

From Article 1, Section 4:

“The Times, Places and Manner of holding Elections for Senators and Representatives, shall be prescribed in each State by the Legislature thereof; but the Congress may at any time by Law make or alter such Regulations, except as to the Places of chusing Senators.”

It’s the job of the States to choose how they elect Senators and/or Representatives, but Congress has the authority to trump state law.

John Smith August 10, 2007 1:00 PM

A couple of points – it’s interesting you use the safety of airplanes as an example because it also shows what you’re missing in your analysis.

Airplanes aren’t safe simply because they’re BUILT safe. They’re safe because of the infrastructure and process that surrounds them. The regular maintenance and testing. The airport security that prevents people from just walking up and tampering with them. The flight checks that make sure everything is working ok before and during a flight. Remove all this and even the best designed and best built airplanes will fail.

Every analysis of electronic voting systems starts with a flawed assumption: that the infrastructure and process are irrelevant and therefore can be ignored. The California test went even further – it handed the testers the source code to the machines, put them in a room alone with the machines and said ‘have at it’. Bruce, I defy you to build any computer system that cannot be hacked in that environment and that’s also cost-effective and easy to use – because remember, voters use these systems, and regular people, not computer programmers or security experts, will be operating these things. This is not a realistic test and it’s not a realistic setup.

I really wish the people who criticise would actually try building a product and selling it in the real world. It’s just not as simple as you all want it to be.

A real test would be to create a simulated poll station staffed with real poll workers – and then let the hackers in and see what they can do. You could create a fictional ‘runoff’ vote in a county, but not tell the staff so they’d think it was a real vote.

The real test of security isn’t ‘is it perfect’ – there is no perfect security system, and in any system where you have to share a secret in order to give public access to information, it’s essentially impossible to come up with a good solution (see DVDs). The real test is: will this reduce errors and real-world attacks enough to keep the loss/error rate below that of any other method?

People keep rating evoting systems against manual counting. The problem is, the US balloting system is SO complex that the error rate on paper ballots is actually fairly high. Evoting systems already do better than that.

The real problem in the US electoral system isn’t the evoting machines, and the focus on them distracts everyone from the real issues. The real problems lie in how voting is done and controlled: the federal gov’t can’t dictate to states, who then pass on responsibility to counties – so there’s no consistency across the system; the existence of federally funded votes that states and counties can then piggyback their elections onto to save money, creating massively complex ballots with even more complex rules on how to count the votes on them; the bizarre party system that can lead to things like straight-party ballots, where a single error can disqualify the entire ballot without the voter even realising what happened; and so on. There are the states’ constant attempts to save money by piggybacking elections on federal elections, making the ballots bizarrely complex – or creating a large number of different ballots tailored to each voter type (which, BTW, introduces a wonderful failure in voter anonymity – I’ll leave it up to you to figure it out – it’s really obvious how to guess the votes of minorities in some districts – and it has nothing to do with evoting systems).

Then there are the procedural issues. Having a secretary of state who was an admitted fundraiser for one of the two candidates in the 2000 presidential election be the person to preside over recounting votes is a massive conflict of interest. She was also the person who arranged for the suspect use of felon lists to deny possibly tens of thousands of legitimate voters (mostly Democratic) their right to vote. In Ohio in 2004, poll stations were delivered insufficient ballots.

Can and should evoting systems be better? Sure – that should be a goal of all developers. But there’s a point where you’re making improvements that don’t really translate into any real gain. It’s like putting the 10th lock on your door only to find that someone came by while you were away and came in through the bathroom window. What’s needed is a full, systemic analysis of the security and reliability of the voting system – not just one part of it.

And as for airplanes – they still fall out of the sky from time to time… even in a system that good.

John Smith August 10, 2007 1:20 PM

I have to comment on some of the comments. It’s actually very illuminating WRT my comment about the focus on evoting taking attention away from the bigger picture.

Nostromo – you may not realise it – most people don’t – but the vast majority of votes in the US are still cast on paper and counted as optical-mark ballots. The paper ballots are kept and can be counted by hand. The illusion that most votes in the US are cast by touchscreen or by punch card is exactly that – an illusion. For one thing, punch-card ballots have pretty much been eliminated from the system.

Also, there’s a perception of unreliability but no clear evidence of it. Remember, if the goal is 100% accuracy, no existing system – including paper ballots – succeeds, so the question isn’t ‘is it perfect’, it’s ‘is it as good as or better than the next best’. Yet everyone wants evoting to be 100% foolproof. That’s unrealistic.

guvn’r – actually, it’s WAY more complex than that. A single presidential ballot can carry a lot of other polls – like senate and congress – heck it can even have dogcatcher on it in some places. The ballot has to be sliced up across many different polls and some parts may be counted against different polls in different ridings (one poll station might actually be in two different ridings when a riding is split).

Simply dismissing it as ‘convenience’ is not really helping the issue here.

Zach – see my comment to Nostromo.

And as for auditing, most of these systems have mechanisms for auditing ballots. Some even keep images of the scanned ballot so they can be monitored onscreen by a scrutineer.

What bothers me about this entire discussion is that it’s entirely from one side – and from what I read here, it’s the one side that actually knows very little of how the US electoral and voting system ACTUALLY works, both in practice and by law (for example – no, the federal government cannot dictate to a state how to do a vote – it’s been tested in courts), and really has no clue about real-world reliability, security or even process.

It’s easy to criticise when you just don’t have a clue. I recommend to every one of you: go volunteer to be a poll worker in the next election or primary. Actually find out how the damned system works before you trot out your opinions as if you actually knew what you were talking about.

Because most of you don’t.

nedu August 10, 2007 1:40 PM

I recently spent some time reading the NTSB report on the “Ceiling Collapse in the Interstate 90 Connector Tunnel, Boston, Massachusetts, July 10, 2006”

http://www.ntsb.gov/publictn/2007/HAR0702.pdf

While ceiling collapses in tunnels are far outside my field of study, this report seems fairly accessible to anyone with a basic background in science or engineering–although perhaps some general knowledge of the construction industry is helpful. I’d encourage people involved in voting systems and other large-scale information technology projects to take the time to read this report, or similar failure reports from engineering disciplines somewhat more established than “software engineering”.

Pulling some summarized findings from the press release accompanying the report, note that:

“The Board states in its probable cause that the use of an inappropriate epoxy formulation resulted from the failure of Gannett Fleming, Inc., (Gannett Fleming) and Bechtel/Parsons Brinckerhoff (B/BP) to identify potential creep in the anchor adhesive as a critical long-term failure mode and to account for possible anchor creep in the design, specifications, and approval process for the epoxy anchors used in the tunnel. The Board also notes that had Gannett Fleming specified the use of adhesive anchors with adequate creep resistance in the construction contract, a different anchor adhesive could have been chosen, and the accident might have been prevented.

“The use of an inappropriate epoxy formulation also resulted from a general lack of understanding and knowledge in the construction community about creep in adhesive anchoring systems. The Board notes that those responsible for overseeing the Central Artery/Tunnel project (CA/T), in design and specifications for the tunnel’s ceiling, failed to account for the fact that polymer adhesives are susceptible to deformation (creep) under sustained load. In addition, Powers Fasteners, Inc., (Powers) failed to provide the CA/T project with sufficiently complete, accurate and detailed information about the suitability of the company’s Fast Set epoxy for sustaining long-term tensile loads.”

“[…]

“The Massachusetts Turnpike Authority (MTA) also contributed to the accident by failing to implement a timely tunnel inspection program that would likely have revealed the ongoing anchor creep in time to correct the deficiencies before an accident occurred. The Board concluded that had MTA, at regular intervals, inspected the area above the suspended ceilings in the D Street portal tunnels, the anchor creep that led to this accident would likely have been detected, and action could have been taken that would have prevented this accident.”

From that same summary, among the board’s recommendations were:

“Prohibiting the use of adhesive anchors in sustained tensile-load overhead highway applications where failure of the adhesive would result in a risk to the public until testing standards and protocols have been developed and implemented that ensure the safety of these applications;”

http://www.ntsb.gov/Pressrel/2007/071007b.htm

Voting systems for use in public elections are critical, public infrastructure.

Given the state of the art in software engineering, we know that there will be bugs and vulnerabilities in deployed software, firmware and hardware.

The architecture of election systems must take into account the foreseeable certainty of defects in particular subsystems – particularly complex subsystems. It must also take into account the current lack of knowledge in the profession regarding the verifiability of complex software, firmware and hardware systems.

Willfully ignoring those realities is reckless.

Phil August 13, 2007 3:17 AM

@John Smith

“The California test even went further – it handed the testers the source code to the machines, put them in a room alone with the machines and said ‘have at it’.”

When you want to test whether someone can hack a voting machine, you MUST assume he’s ready to gather all necessary data by any means, so it’s right to assume the bad guys have the code.

“A real test would be to create a simulated poll station staffed with real poll workers – and then let the hackers in and see what they can do.”

You presume the attack must be done at the time and place of the vote. The bad guy can also tamper with the machines at other times and places:
1. before the voting day, while they are in some storage facility,
2. between the vote and the tally (this one is really hard due to the short timing, but you can’t ignore it),
3. during the vote, but from some other place, if the machines are networked.
You can also combine these: first inject a trojan, then use it via the network to monitor and alter the votes.

So this setup is not so far from real attacks.

Pat Cahalan August 13, 2007 3:51 PM

@ John Smith

Every analysis of electronic voting systems starts with a flawed assumption:
that the infrastructure and process is irrelevant and therefore can be
ignored.

I don’t believe that this is true, at least not of EVS analysis done by voting security specialists. Certainly the blogosphere concentrates upon the security vulnerabilities in the devices themselves, but that doesn’t mean that people who analyze election processes professionally discount the entire infrastructure and process.

The California test even went further – it handed the testers the source
code to the machines, put them in a room alone with the machines and said
‘have at it’.

Well, as Phil pointed out, you’re assuming that the machines need to be attacked at the time and place of the vote with no research done beforehand. In addition, your objection to this methodology is clearly in opposition to your first statement: if you assume that hackers won’t have an opportunity to attack the machines outside of their deployed state, you’re saying yourself that the infrastructure and process are irrelevant. These machines are certainly placed in warehouses for storage. These machines are designed and built by software employees who may be disgruntled and may maliciously (or, for that matter, accidentally) leak source code. Admittedly, having a clean-room attack scenario makes it relatively easy to find vulnerabilities, but the sheer number and range of vulnerabilities (many of which did not require access to the source code) found by the red team analysis during a very short time span would indicate that the testing environment is hardly unreasonable.

Bruce, I defy you to build any computer system that cannot be hacked
in that environment that’s also cost effective and easy to use – because
remember – voters use these systems – and regular people, not computer
programmer or security experts will be operating these things.

If true, this raises the question: if it is not possible to build a reasonably secure voting system for a reasonable amount of money, why are we pursuing this avenue in the first place?

This is not a realistic test and it’s not a realistic setup.

On the contrary, I think given the importance of the electoral process, this is an eminently realistic test.

People keep rating evoting systems against manual counting. The problem is,
the US balloting system is SO complex that the error rate on paper ballots is
actually fairly high.

This is a valid point. However, it certainly seems reasonable to pursue remediation efforts here. Improving election processes can yield better error rates when counting paper ballots, and such improvements are not technology-dependent: a better ballot design that is demonstrably less error-prone scales across balloting processes. Replacing a paper ballot with an electronic one does not make this problem go away; it just shifts problem domains.

Evoting systems already do better than that.

I won’t actively dispute this, but on the other hand I won’t accept it as a throwaway comment, either. Can you cite some references in support of this statement?

The real problem in the US electoral system isn’t the evoting machines and the
focus on them really distracts everyone from the real issues. The real
problems lie in how voting is done and controlled

This is not relevant to your base premise that electronic machines are somehow better (which seems to be your main point). Sure, the electoral process in this country is not standardized and is really irrationally designed; I absolutely agree. In what way does adding another layer of complexity by introducing E-voting systems (which are also nonstandard, given the absurdities of process you outline in this paragraph) make anything better?

It’s like putting the 10th lock on your door only to find that someone came
by while you were away and came in through the bathroom window.
What’s needed is a full, systemic analysis of the security and reliability of the
voting system – not just one part of it.

Yes and no. The voting system is broken, I’ll agree. The remediation efforts are going to be difficult due to the number of agendas involved and the complexity of their interactions, I’ll agree with that, too. However, adding E-voting machines to the mix does nothing to solve either of these two problems, for one, and introduces a significant new vulnerability: the ability for a class break in a particular model of E-voting system to lead to a rather easily designed and eminently scalable attack on the electoral process. Clearly, opening the electoral process up to an additional vulnerability is worth significant scrutiny.

And as for airplanes – they still fall out of the sky from time to time… even in a
system that good.

Yes, but even when a design flaw leads one of them to fall out of the sky, it doesn’t cause all of the planes of that model to fall out of the sky simultaneously. In addition, planes are in the sky all the time; E-voting machines are used rarely, but when they are used they are deployed across entire domains. The airplane analogy is not great; we don’t park all of the 727s in the nation in a hangar for 11 months and two weeks and then suddenly deploy them all at every airport and mandate that they be used for short-hop flights on one particular day.

And as for auditing, most of these systems have mechanisms for auditing
ballots. Some even keep images of the scanned ballot so they can
be monitored onscreen by a scrutineer.

Yes, and there were vulnerabilities found in audit mechanisms, not to mention the fact that if the machine can be suborned, the audit trail can be suborned as well.
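The point that a suborned machine can also suborn its own audit trail can be sketched with a toy model (this is a hypothetical illustration, not any vendor's actual audit mechanism): if the same compromised software both stores the ballot records and computes the integrity seal over them, malware can rewrite the records and simply re-seal them, so a later check by that machine proves nothing.

```python
import hashlib
import json


def digest(records):
    """Toy audit 'seal': SHA-256 over the serialized ballot records."""
    blob = json.dumps(records, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest()


# The machine records ballots and seals them.
ballots = [{"id": 1, "vote": "A"}, {"id": 2, "vote": "B"}]
seal = digest(ballots)

# Malware controlling the machine alters a record AND recomputes the seal.
ballots[1]["vote"] = "A"
seal = digest(ballots)

# The machine's own "audit check" still passes -- tampering is invisible.
assert digest(ballots) == seal
```

This is why audit trails are only useful when anchored outside the machine's control, e.g. voter-verified paper records or seals published before the machine could know the results.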

What bothers me about this entire discussion is that it’s entirely from
one side, [edit] and really has no clue about real world reliability, security or
even process.

On the contrary, with the exception of the occasional troll, this blog’s readership consists almost entirely of people who have a tremendous body of practical knowledge in real world security and process. Even those with whom I disagree regularly have a good body of practical experience; they just regard their solution space from a different standpoint than I do.

Kurt Kilicli August 15, 2007 9:56 AM

An interesting area to consider regarding security in voting machine software is the medical device industry. The FDA has stringent requirements on the entire medical software development process, including verification and validation of the software. If voting machine software doesn’t work correctly, votes can be lost (and yes, the election can be thrown). If medical software doesn’t work correctly, someone dies. And that can have important and immediate consequences for the company that makes it (the FDA can shut it down, company personnel can be jailed, plus wrongful death suits). It would be nice if similar penalties could be levied against the voting machine companies.

Joe Werner August 15, 2007 11:00 AM

As I read discussions of finding and patching deficiencies in code (especially in trying to improve the security of systems), I marvel that the industry seems to have forgotten what Edsger W. Dijkstra said nearly 40 years ago: Testing never reveals the absence of bugs, only their presence. Dr. Dijkstra was a big proponent of provable programming, and advocated starting from a provable design and morphing that design into a proven program.
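Dijkstra's point can be illustrated with a toy example (my own hypothetical code, not from the comment): a function with a subtle defect can pass a plausible-looking test suite, because tests only exercise the inputs someone thought to try.

```python
def is_leap_year(year: int) -> bool:
    # Buggy: implements the 4-year and 100-year rules but
    # omits the 400-year exception (the year 2000 IS a leap year).
    return year % 4 == 0 and year % 100 != 0


# A plausible test suite -- every assertion passes:
assert is_leap_year(2024)        # divisible by 4
assert not is_leap_year(2023)    # not divisible by 4
assert not is_leap_year(1900)    # century year, correctly rejected

# Yet the function is wrong: is_leap_year(2000) returns False.
```

The passing tests demonstrate nothing about the inputs they never cover; only a proof against the full specification (here, the complete Gregorian rule) rules the bug out.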

Recall that Dr. Dijkstra also wrote in 1968 that the GOTO statement was dangerous, which gave us “Structured Programming.” It was the best we could do at that time with the tools at hand (like COBOL and FORTRAN). After all this time, we still don’t have tools good enough to help provide provability.

There’s a lot we can learn from the past. And a lot yet to do!

speedball August 15, 2007 12:27 PM

Bruce,

You missed the bigger issue. This is not about security. It is about politics and greed.

The politicians want to give the voting machine business to contributors and friends.

Cynical folks think they may also be trying to make it easier to steal the election with or without some help from CIA black bag artists.

Of course you can’t rule out stupidity too. Anyone who would certify any machine without maintaining a source document record is a fool or a crook or both.

The only valid system is one that keeps the original paper ballots for a recount, or the equivalent, where cards are scanned by OCR to do a machine count instead of a hand count but could be counted by hand, or recounted by machine, if necessary.

Even the old lever machines were questionable. The electronic ones are just high-tech versions of the same flawed approach, and they are easier to hack to fix an election.

william adams, pe, phd August 15, 2007 12:31 PM

Bruce,

The overlooked factoid is that systems can be 100% secure if security is ARCHITECTED IN during the SYSTEMS ARCHITECTURE, which is the key and mandatory first step of a helical life cycle. Many implementations omit the explicit SYSTEMS ARCHITECTURE or do an incomplete and/or erroneous job at it.

After that point in the life cycle, you can only apply band-aids, and they won’t plug all the holes.

Agile hackers will always fail; and first-to-market companies like microschlock don’t care about security enough to do it right – but they always have time to try to fix it later. Go figure.

I architected a virus-proof DOS-type PC in the ’80s. It turns out I used the same technique that a classified USAF project used to do the same thing. I’m not sure why they were doing it. I could do the same thing again today with graphics-oriented OSes. But nobody really wants security, nor is willing to pay for it.

Why nobody is willing to use the techniques up front is a mystery. The government never demanded that the technique the USAF came up with be used for government PCs (unless it was done secretly?!).

Nobody I contacted wanted to solve the problem. They all wanted to just add more band-aids. The anti-virus industry would die if there were no problem. IBM et al. don’t believe anyone else can be smarter than they are about anything.

Security is a far larger issue and will need to involve politics and other softer, non-techie areas before it can happen.

william adams, pe, phd
