Entries Tagged "Diebold"


Insider Attack Against Diebold Voting Machines

This is both news and not news:

Indeed, the Argonne team’s attack required no modification, reprogramming, or even knowledge, of the voting machine’s proprietary source code. It was carried out by inserting a piece of inexpensive “alien electronics” into the machine.

It’s not news because we already know that if you have access to the internals of a voting machine, you can make it do whatever you want.

It is news because it’s so easy. The entire hack took two hours, start to finish. The attacker doesn’t have to know how the machine works; he just needs physical access. (And we know that voting machines are routinely left unguarded, and have locks that are easily bypassed.)

I find this all so frustrating because there are a gazillion ways to hack electronic voting machines. Specific attacks get the headlines, and the voting machine companies counter with reasons why those attacks are not “valid.” And in the noise and counter-noise, no one hears the general truth: these systems are insecure, and should not be used in elections.

Posted on October 5, 2011 at 6:58 AM

Insiders

Rajendrasinh Makwana was a UNIX contractor for Fannie Mae. On October 24, he was fired. Before he left, he slipped a logic bomb into the organization’s network. The bomb would have “detonated” on January 31. It was programmed to disable access to the server on which it was running, block any network monitoring software, systematically and irretrievably erase everything—and then replicate itself on all 4,000 Fannie Mae servers. Court papers claim the damage would have been in the millions of dollars, a number that seems low. Fannie Mae would have been shut down for at least a week.

Luckily—and it does seem it was pure luck—another programmer discovered the script a week later, and disabled it.

Insiders are a perennial problem. They have access, and they’re known by the system. They know how the system and its security work, and where the weak points are. They have opportunity. Bank heists, casino thefts, large-scale corporate fraud, train robberies: many of the most impressive criminal attacks involve insiders. And, like Makwana’s attempt at revenge, these insiders can have pretty intense motives—motives that can only intensify as the economy continues to suffer and layoffs increase.

Insiders are especially pernicious attackers because they’re trusted. They have access because they’re supposed to have access. They have opportunity, and an understanding of the system, because they use it—or they designed, built, or installed it. They’re already inside the security system, making them much harder to defend against.

It’s not possible to design a system without trusted people. They’re everywhere. In offices, employees are trusted people given access to facilities and resources, and allowed to act—sometimes broadly, sometimes narrowly—in the company’s name. In stores, employees are allowed access to the back room and the cash register; and customers are trusted to walk into the store and touch the merchandise. IRS employees are trusted with personal tax information; hospital employees are trusted with personal health information. Banks, airports, and prisons couldn’t operate without trusted people.

Replacing trusted people with computers doesn’t make the problem go away; it just moves it around and makes it even more complex. The computer, software, and network designers, implementers, coders, installers, maintainers, etc. are all trusted people. See any analysis of the security of electronic voting machines, or some of the frauds perpetrated against computerized gambling machines, for some graphic examples of the risks inherent in replacing people with computers.

Of course, this problem is much, much older than computers. And the solutions haven’t changed much throughout history, either. There are five basic techniques to deal with trusted people:

1. Limit the number of trusted people. This one is obvious. The fewer people who have root access to the computer system, know the combination to the safe, or have the authority to sign checks, the more secure the system is.

2. Ensure that trusted people are also trustworthy. This is the idea behind background checks, lie detector tests, personality profiling, prohibiting convicted felons from holding certain jobs, limiting other jobs to citizens, the TSA’s no-fly list, and so on. It’s also the idea behind bonding employees: if they turn out not to be trustworthy, there are deep pockets standing behind them.

3. Limit the amount of trust each person has. This is compartmentalization; the idea here is to limit the amount of damage a person can do if he ends up not being trustworthy. This is the concept behind giving people keys that only unlock their office or passwords that only unlock their account, as well as “need to know” and other levels of security clearance.

4. Give people overlapping spheres of trust. This is what security professionals call defense in depth. It’s why it takes two people with two separate keys to launch nuclear missiles, and two signatures on corporate checks over a certain value. It’s the idea behind bank tellers requiring management overrides for high-value transactions, double-entry bookkeeping, and all those guards and cameras at casinos. It’s why, when you go to a movie theater, one person sells you a ticket and another person standing a few yards away tears it in half: It makes it much harder for one employee to defraud the system. It’s why key bank employees need to take their two-week vacations all at once—so their replacements have a chance to uncover any fraud. (A code sketch of this two-person rule appears after the list.)

5. Detect breaches of trust after the fact and prosecute the guilty. In the end, the four previous techniques can only do so much. Trusted people can subvert a system. Most of the time, we discover the security breach after the fact and then punish the perpetrator through the legal system: publicly, so as to provide a deterrent effect and increase the overall level of security in society. This is why audit is so vital.
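Technique 4 is the easiest to show in code. Below is a minimal sketch, in Python, of the dual-control check banks apply to high-value transactions; the class, names, and threshold are invented for illustration and not taken from any real system.

    THRESHOLD = 10_000  # hypothetical cutoff for "high-value"

    class Transaction:
        def __init__(self, amount: int):
            self.amount = amount
            self.approvers = set()

        def approve(self, employee_id: str) -> None:
            # Approvals are recorded per identity; one person
            # approving twice adds nothing.
            self.approvers.add(employee_id)

        def execute(self) -> None:
            # Overlapping spheres of trust: high-value transactions
            # require two distinct trusted people to sign off.
            required = 2 if self.amount >= THRESHOLD else 1
            if len(self.approvers) < required:
                raise PermissionError("insufficient independent approvals")
            # ... perform the transaction ...

A dishonest insider can still steal, but only amounts below the threshold; anything larger requires recruiting a second insider, which is exactly the collusion these overlapping spheres of trust are designed to force.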

These security techniques don’t only protect against fraud or sabotage; they protect against the more common problem: mistakes. Trusted people aren’t perfect; they can inadvertently cause damage. They can make a mistake, or they can be tricked into making a mistake through social engineering.

Good security systems use multiple measures, all working together. Fannie Mae certainly limits the number of people who have the ability to slip malicious scripts into their computer systems, and certainly limits the access that most of these people have. It probably has a hiring process that makes it less likely that malicious people come to work at Fannie Mae. It obviously doesn’t have an audit process by which a change one person makes on the servers is checked by someone else; I’m sure that would be prohibitively expensive. Certainly the company’s IT department should have terminated Makwana’s network access as soon as he was fired, and not at the end of the day.

In the end, systems will always have trusted people who can subvert them. It’s important to keep in mind that incidents like this don’t happen very often; that most people are honest and honorable. Security is very much designed to protect against the dishonest minority. And often little things—like disabling access immediately upon termination—can go a long way.

This essay originally appeared on the Wall Street Journal website.

Posted on February 16, 2009 at 12:20 PM

When Voting Machine Audit Logs Don't Help

Wow:

Computer audit logs showing what occurred on a vote tabulation system that lost ballots in the November election are raising more questions not only about how the votes were lost, but also about the general reliability of voting system audit logs to record what occurs during an election and to ensure the integrity of results.

The logs, which Threat Level obtained through a public records request from Humboldt County, California, are produced by the Global Election Management System, the tabulation software, also known as GEMS, that counts the votes cast on all voting machines—touch-screen and optical-scan machines—made by Premier Election Solutions (formerly called Diebold Election Systems).

The article gets pretty technical, but is worth reading.

Posted on January 23, 2009 at 7:43 AM

Diebold Finally Admits its Voting Machines Drop Votes

Premier Election Solutions, formerly called Diebold Election Systems, has finally admitted that a ten-year-old error has caused votes to be dropped.

It’s unclear if this error is random or systematic. If it’s random—a small percentage of all votes are dropped—then it is highly unlikely that this affected the outcome of any election. If it’s systematic—a small percentage of votes for a particular candidate are dropped—then it is much more problematic.
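Here is a quick simulation of that difference, a minimal sketch in Python with invented numbers; the point is only that the same error rate is harmless when random and decisive when systematic.

    import random

    random.seed(1)
    # A close race, 50.5% to 49.5% -- numbers invented for illustration.
    votes = [1] * 50_500 + [0] * 49_500  # 1 = candidate A, 0 = candidate B

    def margin(vs):
        return vs.count(1) - vs.count(0)  # A's lead in votes

    print(margin(votes))  # 1000

    # Random error: every vote has a 2% chance of being dropped. Both
    # candidates lose in proportion, so A's lead shrinks only slightly.
    random_loss = [v for v in votes if random.random() >= 0.02]
    print(margin(random_loss))  # still roughly +980

    # Systematic error: only A's votes are dropped, at the same 2% rate.
    # The entire loss comes out of A's lead -- about 1,010 votes, enough
    # to flip this race.
    biased_loss = [v for v in votes if not (v == 1 and random.random() < 0.02)]
    print(margin(biased_loss))  # roughly -10: the apparent winner changes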

Ohio is trying to sue:

Ohio Secretary of State Jennifer Brunner is seeking to recover millions of dollars her state spent on the touch-screen machines and is urging the state legislature to require optical scanners statewide instead.

In a lawsuit, Brunner charged on Aug. 6 that touch-screen machines made by the former Diebold Election Systems and bought by 11 Ohio counties “produce computer stoppages” or delays and are vulnerable to “hacking, tampering and other attacks.” In all, 44 Ohio counties spent $83 million in 2006 on Diebold’s touch screens.

In other news, election officials sometimes take voting machines home for the night.

My 2004 essay: “Why Election Technology is Hard.”

Posted on August 28, 2008 at 6:38 AM

More Voting Machine News

Ohio just completed a major study of voting machines. (Here’s the report, a gigantic pdf.) And, like the California study earlier this year, they found all sorts of problems:

While some tests to compromise voting systems took higher levels of sophistication, fairly simple techniques were often successfully deployed.

“To put it in every-day terms, the tools needed to compromise an accurate vote count could be as simple as tampering with the paper audit trail connector or using a magnet and a personal digital assistant,” Brunner said.

The New York Times writes:

“It was worse than I anticipated,” the official, Secretary of State Jennifer Brunner, said of the report. “I had hoped that perhaps one system would test superior to the others.”

At polling stations, teams working on the study were able to pick locks to access memory cards and use hand-held devices to plug false vote counts into machines. At boards of election, they were able to introduce malignant software into servers.

Note the lame defense from one voting machine manufacturer:

Chris Riggall, a Premier spokesman, said hardware and software problems had been corrected in his company’s new products, which will be available for installation in 2008.

“It is important to note,” he said, “that there has not been a single documented case of a successful attack against an electronic voting system, in Ohio or anywhere in the United States.”

I guess he didn’t read the part of the report that talked about how these attacks would be undetectable. Like this one:

They found that the ES&S tabulation system and the voting machine firmware were rife with basic buffer overflow vulnerabilities that would allow an attacker to easily take control of the systems and “exercise complete control over the results reported by the entire county election system.”

They also found serious security vulnerabilities involving the magnetically switched bidirectional infrared (IrDA) port on the front of the machines and the memory devices that are used to communicate with the machine through the port. With nothing more than a magnet and an infrared-enabled Palm Pilot or cell phone they could easily read and alter a memory device that is used to perform important functions on the ES&S iVotronic touch-screen machine—such as loading the ballot definition file and programming the machine to allow a voter to cast a ballot. They could also use a Palm Pilot to emulate the memory device and hack a voting machine through the infrared port.

They found that a voter or poll worker with a Palm Pilot and no more than a minute’s access to a voting machine could surreptitiously re-calibrate the touch-screen so that it would prevent voters from voting for specific candidates or cause the machine to secretly record a voter’s vote for a different candidate than the one the voter chose. Access to the screen calibration function requires no password, and the attacker’s actions, the researchers say, would be indistinguishable from the normal behavior of a voter in front of a machine or of a pollworker starting up a machine in the morning.

Elsewhere in the country, Colorado has decertified most of its electronic voting machines:

The decertification decision, which cited problems with accuracy and security, affects electronic voting machines in Denver and five other counties. A number of electronic scanners used to count ballots were also decertified.

Coffman would not comment Monday on what his findings mean for past elections, despite his conclusion that some equipment had accuracy issues.

“I can only report,” he said. “The voters in those respective counties are going to have to interpret” the results.

Coffman announced in March that he had adopted new rules for testing electronic voting machines. He required the four systems used in Colorado to apply for recertification.

The systems are manufactured by Premier Election Solutions, formerly known as Diebold Election Systems; Hart InterCivic; Sequoia Voting Systems; and Election Systems and Software. Only Premier had all its equipment pass the recertification.

California is about to give up on electronic voting machines, too. This probably didn’t help:

More than a hundred computer chips containing voting machine software were lost or stolen during transit in California this week.

EDITED TO ADD (1/2): More news.

Posted on December 24, 2007 at 1:02 PM

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Florida E-Voting Study

Florida just recently released another study of the Diebold voting machines. They—real security researchers, as in the California study, not posers—studied v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature verification step has a problem. It computes H = signature**3 mod N, and then compares _only 160 bits of H_ with the SHA1 hash of a message. This is a natural way to implement RSA signatures if you just read a security textbook, but it is insecure—the report shows how a 250-line Java program can forge RSA signatures over (basically) arbitrary messages. (A sketch of the flaw appears after this list.)
  • Section 3.10.3. The original Hopkins report talked about the lack of crypto for network (or dialup) communications between a TSX voting machine and the back-end GEMS server. Apparently, Diebold tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch their SSL implementation. This new report looks at the patched version, and finds that it is still vulnerable to a man-in-the-middle attack.
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights.

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
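To see why comparing only 160 bits of H is fatal (the Section 3.5 finding above), here is a minimal sketch in Python. It assumes, for concreteness, that the low-order 160 bits are the ones compared, with e = 3 and a 1024-bit modulus; the report’s actual 250-line Java forgery differs in its details.

    import hashlib

    BITS = 160
    MOD = 1 << BITS

    def flawed_verify(msg: bytes, sig: int, n: int) -> bool:
        # The flawed check: compute H = sig**3 mod N, then compare only
        # 160 bits of H against SHA-1(msg), with no padding check at all.
        h = pow(sig, 3, n)
        return h % MOD == int.from_bytes(hashlib.sha1(msg).digest(), "big")

    def forge(msg: bytes) -> int:
        # Cubing is a bijection on the odd residues mod 2**160 (that group
        # is Z/2 x Z/2**158, and 3 is invertible mod 2**158), so an odd
        # hash has a cube root, found by raising to d = 3**-1 mod 2**158.
        h = int.from_bytes(hashlib.sha1(msg).digest(), "big")
        assert h % 2 == 1, "tweak the message until its SHA-1 hash is odd"
        d = pow(3, -1, MOD >> 2)  # 3**-1 mod 2**158 (needs Python 3.8+)
        s = pow(h, d, MOD)
        # s < 2**160, so s**3 < 2**480 < N for a 1024-bit modulus: the cube
        # never wraps around N, and its low 160 bits are exactly h.
        # flawed_verify(msg, forge(msg), n) returns True for any such n.
        return s

No private key appears anywhere; the forger needs only a modulus large enough that the cube never wraps. Verifying the full padded value of H, not 160 bits of it, is what blocks the attack.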

Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSX at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM

More on the California Voting Machine Review

This is a follow-on to this post. What’s new is that the source code reviews are now available.

I haven’t had the chance to review the reports. Matt Blaze has a good summary on his blog:

We found significant, deeply-rooted security weaknesses in all three vendors’ software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being “unrealistic”. It should now be clear that the red teams were successful not because they somehow “cheated,” but rather because the built-in security mechanisms they were up against simply don’t work properly. Reliably protecting these systems under operational conditions will likely be very hard.

I just read Matt Bishop’s description of the miserable schedule and support that the California Secretary of State’s office gave to the voting-machine review effort:

The major problem with this study is time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State said that under no circumstances would it be extended.

[…]

The second problem was lack of information. In particular, various documents did not become available until July 13, too late to be of any value to the red teams, and the red teams did not have several security-related documents. Further, some software that would have materially helped the study was never made available.

Matt Blaze, who led the team that reviewed the Sequoia code, had similar things to say:

Reviewing that much code in less than two months was, to say the least, a huge undertaking. We spent our first week (while we were waiting for the code to arrive) setting up infrastructure, including a Trac Wiki on the internal network that proved invaluable for keeping everyone up to speed as we dug deeper and deeper into the system. By the end of the project, we were literally working around the clock.

It seems that we have a new problem to worry about: the Secretary of State has no clue how to get a decent security review done. Perversely, it was good luck that the voting machines tested were so horribly bad that the reviewers found vulnerabilities despite a ridiculous schedule—one month simply isn’t reasonable—and egregious foot-dragging by vendors in providing needed materials.

Next time, we might not be so lucky. If one vendor sees he can avoid embarrassment by stalling delivery of his most vulnerable source code for four weeks, we might end up with the Secretary of State declaring that the system survived vigorous testing and therefore is secure. Given that refusing cooperation incurred no penalty in this series of tests, we can expect vendors to work that angle more energetically in the future.

The Secretary of State’s own web page gives top billing to the need “to restore the public’s confidence in the integrity of the electoral process,” while the actual security of the machines is relegated to second place.

We need real security evaluations, not feel-good fake tests. I wish this were more the former than the latter.

EDITED TO ADD (8/4): California Secretary of State Bowen’s certification decisions are online.

She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart InterCivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.)

To those who thought she was staging this review as security theater, this seems like evidence to the contrary. She wants to do the right thing, but has no idea how to conduct a security review.

Another article.

EDITED TO ADD (8/4): The Diebold software is pretty bad.

EDITED TO ADD (8/5): Ed Felten comments:

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
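One of those mistakes, using a cyclic redundancy code for data integrity, is easy to demonstrate. A CRC has no key: anyone who can rewrite the data can also recompute a matching checksum. Here is a minimal sketch in Python, with an invented record format:

    import hashlib
    import hmac
    import zlib

    record = b"precinct=12;candidate=A;votes=1042"  # invented format

    # CRC32 "integrity": catches accidental corruption, nothing more.
    stored_record, stored_crc = record, zlib.crc32(record)

    # An attacker who can write the record simply recomputes the keyless
    # checksum, and verification is none the wiser.
    tampered = b"precinct=12;candidate=B;votes=1042"
    stored_record, stored_crc = tampered, zlib.crc32(tampered)
    assert zlib.crc32(stored_record) == stored_crc  # passes

    # Contrast: a MAC needs a secret key to recompute, so the same
    # tampering is detectable by anyone holding the key. (Managing that
    # key properly is its own problem, as the reports make clear.)
    key = b"per-machine secret"  # hypothetical key
    tag = hmac.new(key, record, hashlib.sha256).digest()

CRCs are designed to catch random transmission errors; against a deliberate attacker they provide no protection at all.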

And Avi Rubin comments:

As I read the three new reports, I could not help but marvel at the fact that so many places in the US are using these machines. When it comes to prescription medications, we perform extensive tests before drugs hit the market. When it comes to aviation, planes are held to standards and tested before people fly on them. But, it seems that the voting machines we are using are even more poorly designed and poorly implemented than I had realized.

He’s right, of course.

Posted on August 3, 2007 at 12:55 PM

California Voting Machine Audit Results

The state of California conducted a security review of their electronic voting machines earlier this year. This was a serious review, with real security researchers getting access to the source code. The report was issued last week, and the researchers were able to compromise all three machines—by Diebold Election Systems, Hart InterCivic, and Sequoia Voting Systems—in multiple ways. (They said they could probably find more ways, if they had more time.)

Final report and details about the audit here. Good blog entries here and here. We don’t know what California will do now.

This is no surprise, really. The notion that electronic voting machines were somehow more secure than every other computer system ever built was ridiculous from the start. And the claims by machine manufacturers that releasing their source code would hurt the security of the machines were—like all these sorts of claims—really an attempt to prevent embarrassment to the company.

Not everyone gets this, unfortunately. And not everyone involved in voting:

Letting the hackers have the source codes, operating manuals and unlimited access to the voting machines “is like giving a burglar the keys to your house,” said Steve Weir, clerk-recorder of Contra Costa County and head of the state Association of Clerks and Election Officials.

No. It’s like giving burglars the schematics, installation manuals, and unlimited access to your front door lock. If your lock is good, it will survive the burglar having that information. If your lock isn’t good, the burglar will get in.

I have two essays on this, from 2004: “Why Election Technology is Hard,” and “Electronic Voting Machines.” This essay—”Voting and Technology“—was written in 2000.

EDITED TO ADD (7/31): Another article.

EDITED TO ADD (8/2): Good commentary.

Posted on July 31, 2007 at 10:57 AM

