Entries Tagged "disclosure"


More on the California Voting Machine Review

This is a follow-on to this post. What’s new is that the source code reviews are now available.

I haven’t had the chance to review the reports. Matt Blaze has a good summary on his blog:

We found significant, deeply-rooted security weaknesses in all three vendors’ software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being “unrealistic”. It should now be clear that the red teams were successful not because they somehow “cheated,” but rather because the built-in security mechanisms they were up against simply don’t work properly. Reliably protecting these systems under operational conditions will likely be very hard.

I just read Matt Bishop’s description of the miserable schedule and support that the California Secretary of State’s office gave to the voting-machine review effort:

The major problem with this study is time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State said that under no circumstances would it be extended.

[…]

The second problem was lack of information. In particular, various documents did not become available until July 13, too late to be of any value to the red teams, and the red teams did not have several security-related documents. Further, some software that would have materially helped the study was never made available.

Matt Blaze, who led the team that reviewed the Sequoia code, had similar things to say:

Reviewing that much code in less than two months was, to say the least, a huge undertaking. We spent our first week (while we were waiting for the code to arrive) setting up infrastructure, including a Trac Wiki on the internal network that proved invaluable for keeping everyone up to speed as we dug deeper and deeper into the system. By the end of the project, we were literally working around the clock.

It seems that we have a new problem to worry about: the Secretary of State has no clue how to get a decent security review done. Perversely, it was good luck that the voting machines tested were so horribly bad that the reviewers found vulnerabilities despite a ridiculous schedule — one month simply isn’t reasonable — and egregious foot-dragging by vendors in providing needed materials.

Next time, we might not be so lucky. If one vendor sees he can avoid embarrassment by stalling delivery of his most vulnerable source code for four weeks, we might end up with the Secretary of State declaring that the system survived vigorous testing and therefore is secure. Given that refusing cooperation incurred no penalty in this series of tests, we can expect vendors to work that angle more energetically in the future.

The Secretary of State’s own web page gives top billing to the need “to restore the public’s confidence in the integrity of the electoral process,” while the actual security of the machines is relegated to second place.

We need real security evaluations, not feel-good fake tests. I wish this were more the former than the latter.

EDITED TO ADD (8/4): California Secretary of State Bowen’s certification decisions are online.

She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart Intercivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.)

To those who thought she was staging this review as security theater, this seems like evidence to the contrary. She wants to do the right thing, but has no idea how to conduct a security review.

Another article.

EDITED TO ADD (8/4): The Diebold software is pretty bad.

EDITED TO ADD (8/5): Ed Felten comments:

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
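To make one of those mistakes concrete, here is a minimal Python sketch, illustrative only and not vendor code, of why a cyclic redundancy code is no substitute for a keyed message authentication code: anyone who can alter a record can simply recompute its CRC, but cannot forge an HMAC without the secret key.

    import binascii, hmac, hashlib

    record = b"precinct=12;candidate=A;votes=1042"
    stored_crc = binascii.crc32(record)          # what the system stores

    # A CRC only detects accidental corruption. An attacker who alters the
    # record just recomputes the CRC, and the integrity check still passes.
    tampered = b"precinct=12;candidate=B;votes=1042"
    forged_crc = binascii.crc32(tampered)        # trivial to produce

    # An HMAC is keyed. Without the key, a tampered record cannot be given a
    # valid tag. (Key management is its own problem; a fixed, unchangeable
    # key largely defeats the purpose, as the reports note.)
    key = b"hypothetical per-election secret"
    stored_tag = hmac.new(key, record, hashlib.sha256).digest()
    check = hmac.new(key, tampered, hashlib.sha256).digest()
    print(hmac.compare_digest(stored_tag, check))  # False: forgery detected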

And Avi Rubin comments:

As I read the three new reports, I could not help but marvel at the fact that so many places in the US are using these machines. When it comes to prescription medications, we perform extensive tests before drugs hit the market. When it comes to aviation, planes are held to standards and tested before people fly on them. But, it seems that the voting machines we are using are even more poorly designed and poorly implemented than I had realized.

He’s right, of course.

Posted on August 3, 2007 at 12:55 PM

California Voting Machine Audit Results

The state of California conducted a security review of their electronic voting machines earlier this year. This was a serious review, with real security researchers getting access to the source code. The report was issued last week, and the researchers were able to compromise all three machines — by Diebold Election Systems, Hart Intercivic, and Sequoia Voting Systems — multiple ways. (They said they could probably find more ways, if they had more time.)

Final report and details about the audit here. Good blog entries here and here. We don’t know what California will do now.

This is no surprise, really. The notion that electronic voting machines were somehow more secure than every other computer system ever built was ridiculous from the start. And the claim by machine manufacturers that releasing their source code would hurt the security of the machines was — like all these sorts of claims — really an attempt to prevent embarrassment to the company.

Not everyone gets this, unfortunately. And not everyone involved in voting:

Letting the hackers have the source codes, operating manuals and unlimited access to the voting machines “is like giving a burglar the keys to your house,” said Steve Weir, clerk-recorder of Contra Costa County and head of the state Association of Clerks and Election Officials.

No. It’s like giving burglars the schematics, installation manuals, and unlimited access to your front door lock. If your lock is good, it will survive the burglar having that information. If your lock isn’t good, the burglar will get in.

I have two essays on this, from 2004: “Why Election Technology is Hard,” and “Electronic Voting Machines.” This essay — “Voting and Technology” — was written in 2000.

EDITED TO ADD (7/31): Another article.

EDITED TO ADD (8/2): Good commentary.

Posted on July 31, 2007 at 10:57 AM

Cloning RFID Chips Made by HID

Remember the Cisco fiasco from BlackHat 2005? Next in the stupid box is RFID-card manufacturer HID, who has prevented Chris Paget from presenting research on how to clone those cards.

Won’t these companies ever learn? HID won’t prevent the public from learning about the vulnerability, and they will end up looking like heavy-handed goons. And it’s not even secret; Paget demonstrated the attack to me and others at the RSA Conference last month.

There’s a difference between a security flaw and information about a security flaw; HID needs to fix the first and not worry about the second. Full disclosure benefits us all.

EDITED TO ADD (2/28): The ACLU is presenting instead.

Posted on February 28, 2007 at 12:00 PM

Debating Full Disclosure

Full disclosure — the practice of making the details of security vulnerabilities public — is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See The Vulnerability Disclosure Game: Are We More Secure?). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you — the user — much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies — who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability — and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea — and these days it’s normal procedure — but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us — unless, of course, they knew about it beforehand — but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM

Ensuring the Accuracy of Electronic Voting Machines

A Florida judge ruled (text of the ruling) that the defeated candidate has no right to examine the source code in the voting machines that determined the winner in a disputed Congressional race.

Meanwhile:

A laboratory that has tested most of the nation’s electronic voting systems has been temporarily barred from approving new machines after federal officials found that it was not following its quality-control procedures and could not document that it was conducting all the required tests.

That company is Ciber Inc.

Is it just me, or are things starting to make absolutely no sense?

Posted on January 4, 2007 at 12:06 PM

Forge Your Own Boarding Pass

Last week Christopher Soghoian created a Fake Boarding Pass Generator website, allowing anyone to create a fake Northwest Airlines boarding pass: any name, airport, date, flight. This action got him visited by the FBI, who later came back, smashed open his front door, and seized his computers and other belongings. It resulted in calls for his arrest — the most visible by Rep. Edward Markey (D-Massachusetts) — who has since recanted. And it’s gotten him more publicity than he ever dreamed of.

All for demonstrating a known and obvious vulnerability in airport security involving boarding passes and IDs.

This vulnerability is nothing new. There was an article on CSOonline from February 2006. There was an article on Slate from February 2005. Sen. Chuck Schumer spoke about it as well. I wrote about it in the August 2003 issue of Crypto-Gram. It’s possible I was the first person to publish it, but I certainly wasn’t the first person to think of it.

It’s kind of obvious, really. If you can make a fake boarding pass, you can get through airport security with it. Big deal; we know.

You can also use a fake boarding pass to fly on someone else’s ticket. The trick is to have two boarding passes: one legitimate, in the name the reservation is under, and another phony one that matches the name on your photo ID. Use the fake boarding pass in your name to get through airport security, and the real ticket in someone else’s name to board the plane.

This means that a terrorist on the no-fly list can get on a plane: He buys a ticket in someone else’s name, perhaps using a stolen credit card, and uses his own photo ID and a fake ticket to get through airport security. Since the ticket is in an innocent’s name, it won’t raise a flag on the no-fly list.

You can also use a fake boarding pass instead of your real one if you have the “SSSS” mark and want to avoid secondary screening, or if you don’t have a ticket but want to get into the gate area.

Historically, forging a boarding pass was difficult. It required special paper and equipment. But since Alaska Airlines started the trend in 1999, most airlines now allow you to print your boarding pass using your home computer and bring it with you to the airport. This program was temporarily suspended after 9/11, but was quickly brought back because of pressure from the airlines. People who print the boarding passes at home can go directly to airport security, and that means fewer airline agents are required.

Airline websites generate boarding passes as graphics files, which means anyone with a little bit of skill can modify them in a program like Photoshop. All Soghoian’s website did was automate the process with a single airline’s boarding passes.

Soghoian claims that he wanted to demonstrate the vulnerability. You could argue that he went about it in a stupid way, but I don’t think what he did is substantively worse than what I wrote in 2003. Or what Schumer described in 2005. Why is it that the person who demonstrates the vulnerability is vilified while the person who describes it is ignored? Or, even worse, the organization that causes it is ignored? Why are we shooting the messenger instead of discussing the problem?

As I wrote in 2005: “The vulnerability is obvious, but the general concepts are subtle. There are three things to authenticate: the identity of the traveler, the boarding pass and the computer record. Think of them as three points on the triangle. Under the current system, the boarding pass is compared to the traveler’s identity document, and then the boarding pass is compared with the computer record. But because the identity document is never compared with the computer record — the third leg of the triangle — it’s possible to create two different boarding passes and have no one notice. That’s why the attack works.”

The way to fix it is equally obvious: Verify the accuracy of the boarding passes at the security checkpoints. If passengers had to scan their boarding passes as they went through screening, the computer could verify that the boarding pass already matched to the photo ID also matched the data in the computer. Close the authentication triangle and the vulnerability disappears.
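Here is a minimal sketch, in Python, of what closing that triangle could look like at the checkpoint. It assumes the scanner can look up the airline’s computer record from the boarding pass barcode; the record fields and the lookup function are hypothetical, not any actual airline or TSA interface.

    def lookup_reservation(barcode_data):
        """Stub: fetch the airline's computer record for this boarding pass."""
        # A real deployment would query the airline's reservation system here.
        return {"passenger_name": "DOE/JANE", "flight": "NW123"}

    def checkpoint_ok(id_name, pass_name, barcode_data):
        record = lookup_reservation(barcode_data)
        if record is None:
            return False  # no matching reservation at all
        # Leg 1: photo ID vs. boarding pass (the comparison made today).
        # Leg 2: boarding pass vs. computer record.
        # Leg 3: photo ID vs. computer record -- the leg that is missing now.
        return id_name == pass_name == record["passenger_name"]

    # A forged pass in the traveler's own name fails, because the computer
    # record is in someone else's name:
    print(checkpoint_ok("SMITH/JOHN", "SMITH/JOHN", "scanned-barcode"))  # False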

But before we start spending time and money and Transportation Security Administration agents, let’s be honest with ourselves: The photo ID requirement is no more than security theater. Its only security purpose is to check names against the no-fly list, which would still be a joke even if it weren’t so easy to circumvent. Identification is not a useful security measure here.

Interestingly enough, while the photo ID requirement is presented as an antiterrorism security measure, it is really an airline-business security measure. It was first implemented after the explosion of TWA Flight 800 over the Atlantic in 1996. The government originally thought a terrorist bomb was responsible, but the explosion was later shown to be an accident.

Unlike every other airplane security measure — including reinforcing cockpit doors, which could have prevented 9/11 — the airlines didn’t resist this one, because it solved a business problem: the resale of non-refundable tickets. Before the photo ID requirement, these tickets were regularly advertised in classified pages: “Round trip, New York to Los Angeles, 11/21-30, male, $100.” Since the airlines never checked IDs, anyone of the correct gender could use the ticket. Airlines hated that, and tried repeatedly to shut that market down. In 1996, the airlines were finally able to solve that problem and blame it on the FAA and terrorism.

So business is why we have the photo ID requirement in the first place, and business is why it’s so easy to circumvent it. Instead of going after someone who demonstrates an obvious flaw that is already public, let’s focus on the organizations that are actually responsible for this security failure and have failed to do anything about it for all these years. Where’s the TSA’s response to all this?

The problem is real, and the Department of Homeland Security and TSA should either fix the security or scrap the system. What we’ve got now is the worst security system of all: one that annoys everyone who is innocent while failing to catch the guilty.

This essay — my 30th for Wired.com — appeared today.

EDITED TO ADD (11/4): More news and commentary.

EDITED TO ADD (1/10): Great essay by Matt Blaze.

Posted on November 2, 2006 at 6:21 AM

Voting Software and Secrecy

Here’s a quote from an elections official in Los Angeles:

“The software developed for InkaVote is proprietary software. All the software developed by vendors is proprietary. I think it’s odd that some people don’t want it to be proprietary. If you give people the open source code, they would have the directions on how to hack into it. We think the proprietary nature of the software is good for security.”

It’s funny, really. What she should be saying is something like: “I think it’s odd that everyone who has any expertise in computer security doesn’t want the software to be proprietary. Speaking as someone who knows nothing about computer security, I think that secrecy is an asset.” That’s a more realistic quote.

As I’ve said many times, secrecy is not the same as security. And in many cases, secrecy hurts security.

Posted on October 2, 2006 at 7:10 AM

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache — a.k.a. Jon Ellch — demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card — which they declined to identify — was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk — if the exploit is real — whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers — mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days — giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

A Month of Browser Bugs

To kick off his new Browser Fun blog, H.D. Moore began with “A Month of Browser Bugs”:

This blog will serve as a dumping ground for browser-based security research and vulnerability disclosure. To kick off this blog, we are announcing the Month of Browser Bugs (MoBB), where we will publish a new browser hack, every day, for the entire month of July. The hacks we publish are carefully chosen to demonstrate a concept without disclosing a direct path to remote code execution. Enjoy!

Thirty-one days, and thirty-one hacks later, the blog lists exploits against all the major browsers:

  • Internet Explorer: 25
  • Mozilla: 2
  • Safari: 2
  • Opera: 1
  • Konqueror: 1

My guess is that he could have gone on for another month without any problem, and possibly could produce a new browser bug a day indefinitely.

The moral here isn’t that IE is less secure than the other browsers, although I certainly believe that. The moral is that coding standards are so bad that security flaws are this common.

Eric Rescorla argues that it’s a waste of time to find and fix new security holes, because so many of them still remain and the software’s security isn’t improved. I think he has a point. (Note: this is not to say that it’s a waste of time to fix the security holes found and publicly exploited by the bad guys. The question Eric tries to answer is whether or not it is worth it for the security community to find new security holes.)

Another commentary is here.

Posted on August 3, 2006 at 1:53 PM
