Entries Tagged "disclosure"


More Lynn/Cisco Information

There’s some new information on last week’s Lynn/Cisco/ISS story: Mike Lynn gave an interesting interview to Wired. Here’s some news about the FBI’s investigation. And here’s a video of Cisco/ISS ripping pages out of the BlackHat conference proceedings.

Someone is setting up a legal defense fund for Lynn. Send donations via PayPal to Abaddon@IO.com. (Does anyone know the URL?) According to BoingBoing, donations not used to defend Lynn will be donated to the EFF.

Copies of Lynn’s talk have popped up on the Internet, but some have been removed due to legal cease-and-desist letters from ISS attorneys, like this one. Currently, Lynn’s slides are here, here, here, here, here, here, here, here, here, here, here, here, here, here, and here. (The list is from BoingBoing.) Note that the presentation above is not the same as the one Lynn gave at BlackHat. The presentation at BlackHat didn’t have the ISS logo at the bottom, as the one on the Internet does. Also, the critical code components were blacked out. (Photographs of Lynn’s actual presentation slides were available here, but have been removed due to legal threats from ISS.)

There has been a lot of commentary and analysis on the whole story. Business Week completely missed the point. Larry Seltzer at eWeek is more balanced.

Hackers are working overtime to reconstruct Lynn’s attack and write an exploit. This, of course, means that we’re in much more danger of there being a worm that makes use of this vulnerability.

The sad thing is that we could have avoided this. If Cisco and ISS had simply let Lynn present his work, it would have been just another obscure presentation amongst the sea of obscure presentations that is BlackHat. By attempting to muzzle Lynn, the two companies ensured that 1) the vulnerability was the biggest story of the conference, and 2) some group of hackers would turn the vulnerability into exploit code just to get back at them.

EDITED TO ADD: Jennifer Granick is Lynn’s attorney, and she has blogged about what happened at BlackHat and DefCon. And photographs of the slides Lynn actually used for his talk are here (for now, at least). Is it just me, or does it seem like ISS is pursuing this out of malice? With Cisco I think it was simple stupidity, but I think it’s malice with ISS.

EDITED TO ADD: I don’t agree with Ira Winkler’s comments, either.

EDITED TO ADD: ISS defends itself.

EDITED TO ADD: More commentary.

EDITED TO ADD: Nice rebuttal to Winkler’s essay.

Posted on August 3, 2005 at 1:31 PM

Cisco Harasses Security Researcher

I’ve written about full disclosure, and how disclosing security vulnerabilities is our best mechanism for improving security — especially in a free-market system. (That essay is also worth reading for a general discussion of the security trade-offs.) I’ve also written about how security companies treat vulnerabilities as public-relations problems first and technical problems second. This week at BlackHat, security researcher Michael Lynn and Cisco demonstrated both points.

Lynn was going to present security flaws in Cisco’s IOS, and Cisco went to inordinate lengths to make sure that information never got into the hands of their consumers, the press, or the public.

Cisco threatened legal action to stop the conference’s organizers from allowing a 24-year-old researcher for a rival tech firm to discuss how he says hackers could seize control of Cisco’s Internet routers, which dominate the market. Cisco also instructed workers to tear 20 pages outlining the presentation from the conference program and ordered 2,000 CDs containing the presentation destroyed.

In the end, the researcher, Michael Lynn, went ahead with a presentation, describing flaws in Cisco’s software that he said could allow hackers to take over corporate and government networks and the Internet, intercepting and misdirecting data communications. Mr. Lynn, wearing a white hat emblazoned with the word “Good,” spoke after quitting his job at Internet Security Systems Inc. Wednesday. Mr. Lynn said he resigned because ISS executives had insisted he strike key portions of his presentation.

Unable to censor the information, Cisco decided to act as if it were no big deal:

In a release shortly after the presentation, Cisco stated, “It is important to note that the information Lynn presented was not a disclosure of a new vulnerability or a flaw with Cisco IOS software. Lynn’s research explores possible ways to expand exploitations of known security vulnerabilities impacting routers.” And went on to state “Cisco believes that the information Lynn presented at the Blackhat conference today contained proprietary information and was illegally obtained.” The statement also refers to the fact that Lynn stated in his presentation that he used a popular file decompressor to ‘unzip’ the Cisco image before reverse engineering it and finding the flaw, which is against Cisco’s use agreement.

The Cisco propaganda machine is certainly working overtime this week.

The security implications of this are enormous. If companies have the power to censor information about their products they don’t like, then we as consumers have less information with which to make intelligent buying decisions. If companies have the power to squelch vulnerability information about their products, then there’s no incentive for them to improve security. (I’ve written about this in connection to physical keys and locks.) If free speech is subordinate to corporate demands, then we are all much less safe.

Full disclosure is good for society. But because it helps the bad guys as well as the good guys (see my essay on secrecy and security for more discussion of the balance), many of us have championed “responsible disclosure” guidelines that give vendors a head start in fixing vulnerabilities before they’re announced.

The problem is that not all researchers follow these guidelines. And laws limiting free speech do more harm to society than good. (In any case, laws won’t completely fix the problem; we can’t get laws passed in every country where security researchers live.) So the only reasonable course of action for a company is to work with researchers who alert them to vulnerabilities, but also to assume that vulnerability information will sometimes be released without prior warning.

I can’t imagine the discussions inside Cisco that led them to act like thugs. I can’t figure out why they decided to attack Michael Lynn, BlackHat, and ISS rather than turn the situation into a public-relations success. I can’t believe that they thought they could have censored the information by their actions, or even that it was a good idea.

Cisco’s customers want information. They don’t expect perfection, but they want to know the extent of problems and what Cisco is doing about them. They don’t want to know that Cisco tries to stifle the truth:

Joseph Klein, senior security analyst at the aerospace electronic systems division for Honeywell Technology Solutions, said he helped arrange a meeting between government IT professionals and Lynn after the talk. Klein said he was furious that Cisco had been unwilling to disclose the buffer-overflow vulnerability in unpatched routers. “I can see a class-action lawsuit against Cisco coming out of this,” Klein said.

ISS didn’t come out of this looking very good, either:

“A few years ago it was rumored that ISS would hold back on certain things because (they’re in the business of) providing solutions,” [Ali-Reza] Anghaie, [a senior security engineer with an aerospace firm, who was in the audience,] said. “But now you’ve got full public confirmation that they’ll submit to the will of a Cisco or Microsoft, and that’s not fair to their customers…. If they’re willing to back down and leave an employee … out to hang, well what are they going to do for customers?”

Despite their thuggish behavior, this has been a public-relations disaster for Cisco. Now it doesn’t matter what they say — we won’t believe them. We know that the public-relations department handles their security vulnerabilities, and not the engineering department. We know that they think squelching information and muzzling researchers is more important than informing the public. They could have shown that they put their customers first, but instead they demonstrated that short-sighted corporate interests are more important than being a responsible corporate citizen.

And these are the people building the hardware that runs much of our infrastructure? Somehow, I don’t feel very secure right now.

EDITED TO ADD: I am impressed with Lynn’s personal integrity in this matter:

When Mr. Lynn took the stage yesterday, he was introduced as speaking on a different topic, eliciting boos. But those turned to cheers when he asked, “Who wants to hear about Cisco?” As he got started, Mr. Lynn said, “What I just did means I’m about to get sued by Cisco and ISS. Not to put too fine a point on it, but bring it on.”

And this:

Lynn closed his talk by directing the audience to his resume and asking if anyone could give him a job.

“In large part I had to quit to give this presentation because ISS and Cisco would rather the world be at risk, I guess,” Lynn said. “They had to do what’s right for their shareholders; I understand that. But I figured I needed to do what’s right for the country and for the national critical infrastructure.”

There’s a lawsuit against him. I’ll let you know if there’s a legal defense fund.

EDITED TO ADD: The lawsuit has been settled. Some details:

Michael Lynn, a former ISS researcher, and the Black Hat organisers agreed to a permanent injunction barring them from further discussing the presentation Lynn gave on Wednesday. The presentation showed how attackers could take over Cisco routers, a problem that Lynn said could bring the Internet to its knees.

The injunction also requires Lynn to return any materials and disassembled code related to Cisco, according to a copy of the injunction, which was filed in US District Court for the District of Northern California. The injunction was agreed on by attorneys for Lynn, Black Hat, ISS and Cisco.

Lynn is also forbidden to make any further presentations at the Black Hat event, which ended on Thursday, or the following Defcon event. Additionally, Lynn and Black Hat have agreed never to disseminate a video made of Lynn’s presentation and to deliver to Cisco any video recording made of Lynn.

My hope is that Cisco realized that continuing with this would be a public-relations disaster.

EDITED TO ADD: Lynn’s BlackHat presentation is on line.

EDITED TO ADD: The FBI is getting involved.

EDITED TO ADD: The link to the presentation, above, has been replaced with a cease-and-desist letter. A copy of the presentation is now here.

Posted on July 29, 2005 at 4:35 AM

Public Disclosure of Personal Data Loss

Citigroup announced that it lost personal data on 3.9 million people. The data was on a set of backup tapes that were sent by UPS (a package delivery service) from point A and never arrived at point B.

This is a huge data loss, and even though it is unlikely that any bad guys got their hands on the data, it will have profound effects on the security of all our personal data.

It might seem that there has been an epidemic of personal-data losses recently, but that’s an illusion. What we’re seeing are the effects of a California law that requires companies to disclose losses or thefts of personal data. It has always been happening; only now do companies have to go public with it.

As a security expert, I like the California law for three reasons. One, data on actual intrusions is useful for research. Two, alerting individuals whose data is lost or stolen is a good idea. And three, increased public scrutiny leads companies to spend more effort protecting personal data.

Think of it as public shaming. Companies will spend money to avoid the PR cost of public shaming. Hence, security improves.

This works, but there’s an attenuation effect going on. As more of these events occur, the press is less likely to report them. When there’s less noise in the press, there’s less public shaming. And when there’s less public shaming, the amount of money companies are willing to spend to avoid it goes down.

This data loss has set a new bar for reporters. Data thefts affecting 50,000 individuals will no longer be news. They won’t be reported.

The notification of individuals also has an attenuation effect. I know people in California who have received a dozen notices about the loss of their personal data. When no identity theft follows, people start believing that it isn’t really a problem. (By and large, they’re right. Most data losses don’t result in identity theft. But that doesn’t mean it’s not a problem.)

Public disclosure is good. But it’s not enough.

Posted on June 8, 2005 at 4:45 PM

The Price of Restricting Vulnerability Information

Interesting law article:

There are calls from some quarters to restrict the publication of information about security vulnerabilities in an effort to limit the number of people with the knowledge and ability to attack computer systems. Scientists in other fields have considered similar proposals and rejected them, or adopted only narrow, voluntary restrictions. As in other fields of science, there is a real danger that publication restrictions will inhibit the advancement of the state of the art in computer security. Proponents of disclosure restrictions argue that computer security information is different from other scientific research because it is often expressed in the form of functioning software code. Code has a dual nature, as both speech and tool. While researchers readily understand the information expressed in code, code enables many more people to do harm more readily than with the non-functional information typical of most research publications. Yet, there are strong reasons to reject the argument that code is different, and that restrictions are therefore good policy. Code’s functionality may help security as much as it hurts it and the open distribution of functional code has valuable effects for consumers, including the ability to pressure vendors for more secure products and to counteract monopolistic practices.

Posted on April 4, 2005 at 7:25 AM

Sybase Practices Dumb Security

From Computerworld:

A threat by Sybase Inc. to sue a U.K.-based security research firm if it publicly discloses the details of eight holes it found in Sybase’s database software last year is evoking sharp criticism from some IT managers but sympathetic comments from others.

I can see why Sybase would prefer it if people didn’t know about vulnerabilities in their software — it’s bad for business — but disclosure is the reason companies are fixing them. If researchers are prohibited from publishing, then software developers are free to ignore security problems.

Posted on April 1, 2005 at 1:24 PM

Flaw in Winkhaus Blue Chip Lock

The Winkhaus Blue Chip Lock is a very popular, and expensive, 128-bit encrypted door lock. When you insert a key, the key and the lock perform a 128-bit challenge/response exchange; if the key is authorized, the lock pulls a small pin down through a solenoid switch, which allows you to turn the lock.
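The lock’s exact protocol isn’t public, but the general shape of a challenge/response exchange is standard. Here is a minimal sketch in Python, purely as an illustration of the idea; the HMAC construction, the 128-bit shared secret, and the function names are my assumptions, not Winkhaus’s actual design.

```python
import hashlib
import hmac
import secrets

# Minimal sketch of a generic 128-bit challenge/response exchange.
# This is NOT the Winkhaus protocol; the shared secret, the HMAC
# construction, and the message sizes are illustrative assumptions.

SHARED_SECRET = secrets.token_bytes(16)  # provisioned into both key and lock

def lock_issue_challenge() -> bytes:
    """The lock generates a fresh 128-bit random challenge on each insertion."""
    return secrets.token_bytes(16)

def key_respond(challenge: bytes, secret: bytes) -> bytes:
    """The key computes a MAC over the challenge using the shared secret."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()[:16]

def lock_verify(challenge: bytes, response: bytes, secret: bytes) -> bool:
    """The lock recomputes the expected response and compares in constant time."""
    expected = hmac.new(secret, challenge, hashlib.sha256).digest()[:16]
    return hmac.compare_digest(expected, response)

challenge = lock_issue_challenge()
response = key_respond(challenge, SHARED_SECRET)
if lock_verify(challenge, response, SHARED_SECRET):
    pass  # only now would the lock energize the solenoid and retract the pin
```

The relevant point for what follows is that the authentication and the mechanical action are separate steps: the flaw described next bypasses the cryptography entirely and moves the pin directly.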

Unfortunately, it has a major security flaw. If you put a strong magnet near the lock, you can also pull this pin down, without authorization — without damage or any evidence.

The worst part is that Winkhaus is in denial about the problem, and is hoping it will just go away by itself. They’ve known about the flaw for at least six months, and have done nothing. They haven’t told any of their customers. If you ask them, they’ll say things like “it takes a very special magnet.”

From what I’ve heard, the only version that does not have this problem is the model without a built-in battery. In that model, the side with the solenoid switch faces the inside of the door instead of the outside. The internal battery is a weak spot: you need to lift a small lid to exchange it, so that side can never face the outside of the door, because anyone could simply remove the batteries. With an external power supply you don’t have this problem, since one side of the lock is pure metal.

A video demonstration is available here.

Posted on March 2, 2005 at 3:00 PM

ChoicePoint

The ChoicePoint fiasco has been news for over a week now, and there are only a few things I can add. For those who haven’t been following along, ChoicePoint mistakenly sold personal credit reports for about 145,000 Americans to criminals.

This story would have never been made public if it were not for SB 1386, a California law requiring companies to notify California residents if any of a specific set of personal information is leaked.

ChoicePoint’s behavior is a textbook example of how to be a bad corporate citizen. The information leakage occurred in October, and it didn’t tell any victims until February. First, ChoicePoint notified 30,000 Californians and said that it would not notify anyone who lived outside California (since the law didn’t require it). Finally, after public outcry, it announced that it would notify everyone affected.

The clear moral here is that first, SB 1386 needs to be a national law, since without it ChoicePoint would have covered up their mistakes forever. And second, the national law needs to force companies to disclose these sorts of privacy breaches immediately, and not allow them to hide for four months behind the “ongoing FBI investigation” shield.

More is required. Compare ChoicePoint’s public marketing slogans with its private reality.

From “Identity Theft Puts Pressure on Data Sellers,” by Evan Perez, in the 18 Feb 2005 Wall Street Journal:

The current investigation involving ChoicePoint began in October when the company found the 50 accounts it said were fraudulent. According to the company and police, criminals opened the accounts, posing as businesses seeking information on potential employees and customers. They paid fees of $100 to $200, and provided fake documentation, gaining access to a trove of personal data including addresses, phone numbers, and social security numbers.

From ChoicePoint Chairman and CEO Derek V. Smith:

ChoicePoint’s core competency is verifying and authenticating individuals and their credentials.

The reason there is a difference is purely economic. Identity theft is the fastest-growing crime in the U.S., and an enormous problem elsewhere in the world. It’s expensive — both in money and time — to the victims. And there’s not much people can do to stop it, as much of their personal identifying information is not under their control: it’s in the computers of companies like ChoicePoint.

ChoicePoint protects its data, but only to the extent that it values it. The hundreds of millions of people in ChoicePoint’s databases are not ChoicePoint’s customers. They have no power to switch credit agencies. They have no economic pressure that they can bring to bear on the problem. Maybe they should rename the company “NoChoicePoint.”

The upshot of this is that ChoicePoint doesn’t bear the costs of identity theft, so ChoicePoint doesn’t take those costs into account when figuring out how much money to spend on data security. In economic terms, it’s an “externality.”

The point of regulation is to make externalities internal. SB 1386 did that to some extent, since ChoicePoint must now figure the cost of public humiliation into its decisions about how much to spend on security. But the actual cost of ChoicePoint’s security failure is much, much greater.

Until ChoicePoint feels those costs — whether through regulation or liability — it has no economic incentive to reduce them. Capitalism works, not through corporate charity, but through the free market. I see no other way of solving the problem.

Posted on February 23, 2005 at 3:19 PM

Safecracking

Matt Blaze has written an excellent paper: “Safecracking for the computer scientist.”

It has completely pissed off the locksmithing community.

There is a reasonable debate to be had about secrecy versus full disclosure, but a lot of these comments are just mean. Blaze is not being dishonest. His results are not trivial. I believe that the physical security community has a lot to learn from the computer security community, and that the computer security community has a lot to learn from the physical security community. Blaze’s work in physical security has important lessons for computer security — and, as it turns out, physical security — notwithstanding these people’s attempt to trivialize it in their efforts to attack him.

Posted on January 14, 2005 at 8:18 AM

Keeping Network Outages Secret

There’s considerable confusion between the concept of secrecy and the concept of security, and it is causing a lot of bad security and some surprising political arguments. Secrecy is not the same as security, and most of the time secrecy contributes to a false feeling of security instead of to real security.

In June, the U.S. Department of Homeland Security urged regulators to keep network outage information secret. The Federal Communications Commission already requires telephone companies to report large disruptions of telephone service, and wants to extend that requirement to high-speed data lines and wireless networks. But the DHS fears that such information would give cyberterrorists a “virtual road map” to target critical infrastructures.

This sounds like the “full disclosure” debate all over again. Is publishing computer and network vulnerability information a good idea, or does it just help the hackers? It arises again and again, as malware takes advantage of software vulnerabilities after they’ve been made public.

The argument that secrecy is good for security is naive, and always worth rebutting. Secrecy is only beneficial to security in limited circumstances, and certainly not with respect to vulnerability or reliability information. Secrets are fragile; once they’re lost they’re lost forever. Security that relies on secrecy is also fragile; once secrecy is lost there’s no way to recover security. Trying to base security on secrecy is just plain bad design.

Cryptography is based on secrets — keys — but look at all the work that goes into making them effective. Keys are short and easy to transfer. They’re easy to update and change. And the key is the only secret component of a cryptographic system. Cryptographic algorithms make terrible secrets, which is why one of cryptography’s most basic principles is to assume that the algorithm is public.
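To make that concrete, here is a minimal sketch (my example, not from the essay) using the Python cryptography package: the algorithm, AES-GCM, is completely public and standardized, and the only secret is a short key that is trivial to generate, store, and replace.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# The algorithm (AES-GCM) is public and standardized; only the key is secret.
key = AESGCM.generate_key(bit_length=128)   # short, easy to store, easy to change
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # per-message nonce, not secret
ciphertext = aesgcm.encrypt(nonce, b"network outage report", None)

# Anyone may know the algorithm, the nonce, and the ciphertext; without
# the key, decryption fails, and with it, recovery is trivial.
plaintext = aesgcm.decrypt(nonce, ciphertext, None)
assert plaintext == b"network outage report"

# Rotating the secret is a one-line operation. Contrast that with trying
# to "rotate" a secret algorithm, a secret topology, or a secret outage history.
new_key = AESGCM.generate_key(bit_length=128)
```

That is what a well-designed secret looks like: small, separable from the rest of the system, and cheap to change. Vulnerability details, network topologies, and outage histories have none of those properties.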

That’s the other fallacy with the secrecy argument: the assumption that secrecy works. Do we really think that the physical weak points of networks are such a mystery to the bad guys? Do we really think that the hacker underground never discovers vulnerabilities?

Proponents of secrecy ignore the security value of openness: public scrutiny is the only reliable way to improve security. Before software bugs were routinely published, software companies would deny their existence and not bother fixing them, believing in the security of secrecy. And because customers didn’t know any better, they bought these systems, believing them to be secure. If we return to a practice of keeping software bugs secret, we’ll have vulnerabilities known to a few in the security community and to much of the hacker underground.

Secrecy prevents people from assessing their own risks.

Public reporting of network outages forces telephone companies to improve their service. It allows consumers to compare the reliability of different companies, and to choose one that best serves their needs. Without public disclosure, companies could hide their reliability performance from the public.

Just look at who supports secrecy. Software vendors such as Microsoft want very much to keep vulnerability information secret. The Department of Homeland Security’s recommendations were loudly echoed by the phone companies. It’s the interests of these companies that are served by secrecy, not the interests of consumers, citizens, or society.

In the post-9/11 world, we’re seeing this clash of secrecy versus openness everywhere. The U.S. government is trying to keep details of many anti-terrorism countermeasures — and even routine government operations — secret. Information about the infrastructure of plants and government buildings is secret. Profiling information used to flag certain airline passengers is secret. The standards for the Department of Homeland Security’s color-coded terrorism threat levels are secret. Even information about government operations without any terrorism connections is being kept secret.

This keeps terrorists in the dark, especially “dumb” terrorists who might not be able to figure out these vulnerabilities on their own. But at the same time, the citizenry — to whom the government is ultimately accountable — is not allowed to evaluate the countermeasures, or comment on their efficacy. Security can’t improve because there’s no public debate or public education.

Recent studies have shown that most water, power, gas, telephone, data, transportation, and distribution systems are scale-free networks. This means they always have highly connected hubs. Attackers know this intuitively and go after the hubs. Defenders are beginning to learn how to harden the hubs and provide redundancy among them. Trying to keep it a secret that a network has hubs is futile. Better to identify and protect them.
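To see what “highly connected hubs” means in practice, here is a small illustrative sketch, my addition rather than anything from the essay, that uses the networkx package and a synthetic Barabási-Albert graph as a stand-in for real infrastructure data; it ranks nodes by degree and measures how concentrated the connectivity is.

```python
import networkx as nx

# Illustrative only: a synthetic scale-free network built with the
# Barabasi-Albert preferential-attachment model (1,000 nodes, each new
# node attaching to 2 existing nodes). Real infrastructure topologies
# differ in detail but share the heavy-tailed degree distribution.
G = nx.barabasi_albert_graph(n=1000, m=2, seed=42)

# Rank nodes by degree. In a scale-free network, a handful of hubs
# carries a disproportionate share of the connections.
by_degree = sorted(G.degree, key=lambda pair: pair[1], reverse=True)
print("Top 10 hubs (node, degree):", by_degree[:10])

# Share of all edge endpoints that touch the top 1% of nodes; this
# concentration is why hubs are the place to add hardening and redundancy.
top_one_percent = by_degree[: len(by_degree) // 100]
endpoint_share = sum(d for _, d in top_one_percent) / (2 * G.number_of_edges())
print(f"Share of edge endpoints on the top 1% of nodes: {endpoint_share:.1%}")
```

In graphs like this, a tiny fraction of nodes accounts for a large share of the connections, which is exactly why hardening the hubs and adding redundancy beats pretending their locations are secret.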

We’re all safer when we have the information we need to exert market pressure on vendors to improve security. We would all be less secure if software vendors didn’t make their security vulnerabilities public, and if telephone companies didn’t have to report network outages. And when government operates without accountability, that serves the security interests of the government, not of the people.

Security Focus article
CNN article

Another version of this essay appeared in the October 2004 issue of Communications of the ACM.

Posted on October 1, 2004 at 9:36 PM
