Debating Full Disclosure

Full disclosure—the practice of making the details of security vulnerabilities public—is a damned good idea. Public scrutiny is the only reliable way to improve security, while secrecy only makes us less secure.

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers (See “The Vulnerability Disclosure Game: Are We More Secure?”). The problem, according to this position, is less the vulnerability itself and more the information about the vulnerability.

But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

To understand why the second assumption isn’t true, you need to understand the underlying economics. To a software company, vulnerabilities are largely an externality. That is, they affect you—the user—much more than they affect it. A smart vendor treats vulnerabilities less as a software problem, and more as a PR problem. So if we, the user community, want software vendors to patch vulnerabilities, we need to make the PR problem more acute.

Full disclosure does this. Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

It wasn’t until researchers published complete details of the vulnerabilities that the software companies started fixing them.

Of course, the software companies hated this. They received bad PR every time a vulnerability was made public, and the only way to get some good PR was to quickly release a patch. For a large company like Microsoft, this was very expensive.

So a bunch of software companies, and some security researchers, banded together and invented “responsible disclosure” (See “The Chilling Effect”). The basic idea was that the threat of publishing the vulnerability is almost as good as actually publishing it. A responsible researcher would quietly give the software vendor a head start on patching its software, before releasing the vulnerability to the public.

This was a good idea—and these days it’s normal procedure—but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.

The moral here doesn’t just apply to software; it’s very general. Public scrutiny is how security improves, whether we’re talking about software or airport security or government counterterrorism measures. Yes, there are trade-offs. Full disclosure means that the bad guys learn about the vulnerability at the same time as the rest of us—unless, of course, they knew about it beforehand—but most of the time the benefits far outweigh the disadvantages.

Secrecy prevents people from accurately assessing their own risk. Secrecy precludes public debate about security, and inhibits security education that leads to improvements. Secrecy doesn’t improve security; it stifles it.

I’d rather have as much information as I can to make an informed decision about security, whether it’s a buying decision about a software product or an election decision about two political parties. I’d rather have the information I need to pressure vendors to improve security.

I don’t want to live in a world where companies can sell me software they know is full of holes or where the government can implement security measures without accountability. I much prefer a world where I have all the information I need to assess and protect my own security.

This essay originally appeared on CSOOnline, as part of a series of essays on the topic. Marcus Ranum wrote against the practice of disclosing vulnerabilities, and Mark Miller of Microsoft wrote in favor of responsible disclosure. These are on-line-only sidebars to a very interesting article in CSO Magazine, “The Chilling Effect,” about the confluence of forces that are making it harder to research and disclose vulnerabilities in web-based software:

“Laws say you can’t access computers without permission,” she [attorney Jennifer Granick] explains. “Permission on a website is implied. So far, we’ve relied on that. The Internet couldn’t work if you had to get permission every time you wanted to access something. But what if you’re using a website in a way that’s possible but that the owner didn’t intend? The question is whether the law prohibits you from exploring all the ways a website works,” including through vulnerabilities.

All the links are worth reading in full.

A Simplified Chinese translation by Xin LI is available on Delphij’s Chaos.

Posted on January 23, 2007 at 6:45 AM

Comments

Clive Robinson January 23, 2007 7:58 AM

With regard to testing other people's Web Sites, I suspect that unfortunately the legal view will win out in this type of argument.

The reason being primarily that the law is based around “property” and the transgressions people make against it.

Most of the time a very narrow viewpoint is taken, in that “you have transgressed against somebody's property” (trespass, if you will).

In the UK you might have a defence of “in the public interest” but I for one would not put any faith in it.

Unless the legal system starts to take a wider perspective on what the “researcher” has done, there is going to be trouble in store for anybody carrying out such activities.

Roy January 23, 2007 8:29 AM

Secrecy is bad policy even when the enemy is Mother Nature. FEMA had plans for New Orleans but they kept them secret. Only after Katrina struck did we find out almost all of the plans were stupid.

The plans were so stupid it’s obvious why FEMA kept them secret: publishing the details would have invited criticism — well, ridicule, frankly. But ridicule is healthy when plans are stupid.

The 200 mile traffic jam, the 150,000 abandoned pets, the countless drowned invalids, the rescuers forbidden to enter the evacuation area, the buses standing by in standing water, the supplies that couldn’t be delivered, the insistence on disarming people guarding their homes — all of that could have been avoided had the public known in advance how stupid the government was planning to be.

The government — federal, state, and local — has plans for every kind of catastrophe, but since they’re keeping them secret they must know the plans are stupid and dangerous.

Remember the floor wardens of September 11th who told people to go back up to their offices? When you don’t know what the problem is, make everybody sit tight and wait for authorities to arrive to take care of things? Yep, there was a plan. The hijackers were not the only villains that day.

Paul Crowley January 23, 2007 9:01 AM

Ranum’s perspective is interesting, but I don’t think he’s taking into account the huge change that results from having attacks funded by organized crime.

Aaron January 23, 2007 9:47 AM

Essentially, by disclosing vulnerabilities you force software makers to internalize the cost of security.

As it stands now, it’s most cost-effective for them to release it insecure and then patch it when needed.

But with this, it’s more efficient for them to build secure software to begin with and not have to deal with the negative publicity of those vulnerabilities (or at least, not as much).

This will inevitably raise the purchase price of software, but, if Bruce is right, the total cost should actually come down.

Of course, those of us who love our Linux already live in this world somewhat.

Seems apropos:
“Though a superhero, Bruce Schneier disdains the use of a mask or secret identity as ‘security through obscurity’.”

Foolish Jordan January 23, 2007 9:48 AM

I would like to see some discussion of this scenario:

Sometimes there are security holes and maybe the hackers know about them, and maybe they don’t, but nobody seems to be using them. Now, a researcher in Full Disclosure mode publishes details of a hole. Or perhaps he just publishes the outline. In any case, some enterprising teenager has enough details that he writes some code to take advantage of the hole and wreaks actual havoc until the hastily released patch arrives.

In this scenario, which has happened a few times in real life, it seems that the secrecy, while not making us “secure” in some sense, has prevented us from realizing “actual harm” in the sense of the havoc produced after the hole is published.

I’d like to see how full disclosure is still better than secrecy even in this case.

False Data January 23, 2007 9:55 AM

Is secrecy versus full disclosure in software products a specific application of the general tension between privacy and security?

If so, are there principles and constraints from the privacy vs. security debate that would be useful to import into discussions about secrecy vs. full disclosure, or vice versa?

Roie January 23, 2007 9:57 AM

@Marcal

PR = Public relations. Basically, security is a publicity and reputation issue for vendors, more than anything else.

Ben Liddicott January 23, 2007 10:03 AM

Externalities

If I buy software from z-Corp, and a hacker attacks me, that is not an externality, because it is reflected in my willingness to pay for the software, and z-Corp therefore has the appropriate incentives. It’s only an externality if the hacker uses my machine to attack someone else.

(If I want more security, I can always pay more for it, and have always been able to, whether by buying a Mac, or an AV package.)

Arguments for regulation include asymmetry of information. z-Corp knows more about the security of the software than I do, so can deceive me. But then again I know more about how I use the software, and can deceive z-Corp much more, because what could z-Corp find out about my reputation when I buy z-Os pre-installed? This one actually cuts the other way, towards making the end-user liable.

The only economic (rather than moral) argument for making the software maker liable, rather than letting them go with the market, is that they can mitigate the problem more cheaply AND at a cost less than the external (third-party, not buyer) costs.

This is probably true, at least for some values of “mitigate”.

ISP Liability

But it is also true that ISPs can mitigate the problem at a lower cost than anyone. If an ISP was responsible for attacks which originate on their networks, or which originate elsewhere but could have been readily prevented by off-the-shelf filtering, they would firewall infected machines, embargo their email, and make them run a trojan scanner before being allowed back on.

Statutory Damages

Whoever is to be made liable, since damage is potentially infinite, any liability must be statutorily limited, otherwise no-one could enter the business at all. Imagine if BT had to pay actual damages to everyone who was affected by some zero-day exploit in the BT Home Hub configuration software? £250 to 1 million subscribers? Sure, it would make them think about security, but they would think “we would be more secure if we didn’t provide this service”.

On the other hand, if they were fined a modest £10 for each affected user, and made to provide a downloadable fix and free trojan scan, this would be a more reasonable incentive. But it isn’t the same as “liability”.
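
A quick back-of-the-envelope sketch of that gap, using the purely hypothetical figures in this comment (1 million affected subscribers, £250 actual damages each versus a £10 statutory fine each):

```python
# Illustrative only: compares the two liability regimes sketched above,
# using the hypothetical figures from this comment.

affected_users = 1_000_000
actual_damages_per_user = 250   # pounds, unlimited-liability scenario
statutory_fine_per_user = 10    # pounds, capped statutory-fine scenario

print(f"Actual damages:  £{affected_users * actual_damages_per_user:,}")  # £250,000,000
print(f"Statutory fines: £{affected_users * statutory_fine_per_user:,}")  # £10,000,000
```

Both totals scale linearly with the number of affected users, but the 25-fold gap is the difference between a liability that says “don’t provide this service” and a fine that works as a reasonable incentive to fix the problem.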

Fraud Guy January 23, 2007 10:05 AM

Read Ranum’s counterpoint.

He misses the point completely. The reason why there is continuous disclosure of security vulnerabilities is not because the security industry makes money/fame off of finding vulnerabilities; the reason is that the software is insecure. The software vendors are trying to make software cheaper and faster, not more secure, and have decided that the risk of adverse publicity is offset by the money they are making off of their product.

This leaves the end users responsible for the fallout of the security breaches, since the ubiquitous licenses that you must agree to protect the vendor from any real damages that their software vulnerabilities may cause.

Now, if we go back to hiding vulnerabilities from the public, it’s win/win for the software manufacturers (no pesky PR issues/can focus on maximizing profit from software) and lose/lose for consumers (uninformed decisions/still at risk of vulnerabilities).

Personally, I feel we should go in a different direction: no licenses. Software as a true product, with manufacturer liability. Why do we ask for ever increasing security from our physical products (even at the cost of some comfort) while software products are still exempted? I think we could accept a few years of no new/best feature rush while the software manufacturers fix their back end and move to more secure, but still useful software. Once the security baseline is set, then make sure the bells and whistles are in place.

Ed Davies January 23, 2007 12:36 PM

Actually, in the referenced article Ranum doesn’t provide any arguments against disclosure. He just says that with disclosure things haven’t got better and that the motives and antics of the disclosers are not good; this is not quite the same thing as saying that disclosure is a bad idea.

Pat Cahalan January 23, 2007 12:41 PM

Marcus does have a point, in that one of the claims made by proponents of vulnerability disclosure is that it will make companies make secure software to begin with, and that has demonstrably not occurred.

However, he is glossing over the fact that whether or not the industry has responded to the threat of disclosure by being “better citizens” and writing more secure software to begin with, at the very least they must respond to individual disclosures.

So, while I agree that “vulnerability disclosure”, conceptually, is not fixing the overall situation, at the very least it is forcing the fixing of individual instances.

But in a real sense he’s correct -> if the IT industry concentrates on fixing the root cause of the problem (and spends resources accordingly), it’s more likely to produce long term results, and that’s where efforts ought to be focused.

However, that’s not an argument against full disclosure, it’s an argument for secure computing.

Aaron January 23, 2007 12:57 PM

@ Ben Liddicott

The point is the costs of the attack are an externality to the maker of the software, not to you. Right now software makers do not internalize the cost of security into their products – just as formerly high-polluting factories did not internalize the costs of their waste dumping into the costs of their products, but through taxation they have been forced to.

Mick January 23, 2007 1:09 PM

“Actually, in the referenced article Ranum doesn’t provide any arguments against disclosure. He just says that with disclosure things haven’t got better and that the motives and antics of the disclosers are not good; this is not quite the same thing as saying that disclosure is a bad idea.”

The latter point is an interesting one. C. S. Lewis wrote that one of the troubles with Marxists and Freudians was that they spent more time attacking people’s motives (or what they claimed were people’s motives) than addressing their arguments. IIRC, he said something like “If I say that for a right-angled triangle the square on the hypotenuse is the sum of the squares on the other two sides, and you say, ‘You only say that because of your infantile jealousies,’ or ‘You only say that because of bourgeois production-relations,’ you haven’t offered a rational argument.”

It may be correct to say that the motives of disclosers are bad, and a discussion about that may be interesting in itself, but it is not an argument against disclosure.

Take a parallel example. If someone, motivated by envy of a criminal, reveals that a crime has taken place, it doesn’t mean that the disclosure was a bad thing merely because the discloser wasn’t motivated by public-spiritedness.

ac January 23, 2007 1:15 PM

@Foolish Jordan

The reason nobody’s answering your question is that there is a problem with your question.

“there are security holes…but nobody seems to be using them”

How exactly would one ever know that? Sure, there’s this background noise of amateur hackers writing worms with virus toolkits, but they’re not the serious threat. The threat is a real hacker who uses an exploit he knows about to hit not a million targets, but one. Say, a bank. So this professional hacker uses an unknown vulnerability to steal money from a bank. And the bank keeps the incident quiet to avoid a panic. How does your scenario account for this? It doesn’t. Teenagers are not the ones we need to be worried about.

And on your last point…let’s say the vulnerability remains secret. Then the vendor never fixes it. How long until someone independently comes across this bug, in the unlikely event that they haven’t already?

nameless January 23, 2007 1:46 PM

@Clive Robinson

“In the UK you might have a defence of “in the public interest” but I for one would not put any faith in it.”

Yes, I agree. The “public interest” defence is useful for journalists who generally have access to legal advice provided by their employer. I don’t think it’s much use to anybody else.

“With regard to testing other peoples Web Sites, I suspect that unfortunatly the legal view will win out in this type of argument.”

It’s unfortunate that the police may prosecute when it isn’t really necessary:
http://www.theregister.co.uk/2005/10/11/tsunami_hacker_followup/

That’s why professional penetration testers always get a written agreement about what they can do before trying anything on somebody else’s system.

There’s lots of software that researchers can test by getting copies and probing on a test rig but I agree that you can’t do security tests on a website without the owner’s permission. And perhaps that’s as it should be.

Pat Cahalan January 23, 2007 1:58 PM

“I agree that you can’t do security tests on a website without the owner’s permission”

Hm, not so sure I agree with that universally. One counterexample: if the website is providing you a commercial service and is a front-end to a system that contains personal or private information about you, as a customer, you should be allowed to do security tests (or, to be more accurate, the community of users/customers should have the right to demand security tests by some authorized party).

nameless January 23, 2007 2:23 PM

@Pat Cahalan

“you, as a customer, you should be allowed to do security tests”

I had considered that angle before my last post; at first I found myself thinking along similar lines but eventually decided that the idea is wrong. Your next statement is much better.

“the community of users/customers should have the right to demand security tests by some authorized party”

Many trades have strict safety rules. E.g. in my country, if I alter the electrical wiring in my house I must have the work inspected by a qualified electrician, and I cannot fit any gas fittings at all without getting “Corgi” certified first.

When it comes to IT, there seem to be no security rules at all. This is not an abstract discussion for me. In the UK, we are computerising medical records systems and planning to compile huge databases of personal information for a national ID register. Hopefully, these systems will be penetration tested properly as they are assembled, but I have no way to verify that. I just have to take the government’s word for it.

vmunster January 23, 2007 2:40 PM

I don’t think that this problem of full disclosure or secrecy is so black and white.

If you were to look at how a security company actually runs, there are many releases that are made, each one updated with the latest security fixes. However, there probably are numerous bugs that the company knows about that haven’t been fixed in the latest released versions. It is just the high-priority fixes that are made. So then, are these companies guilty of secrecy? I wouldn’t think so…

Perhaps the problem doesn’t lie within keeping secrets but instead lies in the fact that the company purposely hides the problem because it has no intention of fixing it…

With that being said, it seems that the essence of full-disclosure isn’t disclosing, but the willingness to disclose for the good of the customer, rather than the good of the company. After all, as B. Schneier said, it is the customer who is being most affected by such vulnerabilities.

nedu January 23, 2007 2:46 PM

“This [responsible disclosure] was a good idea — and these days it’s normal procedure — but one that was possible only because full disclosure was the norm. And it remains a good idea only as long as full disclosure is the threat.”

I disagree.

“Responsible disclosure” ignores its own second-order feedback effect on the social and legal context.

As long as full disclosure is the norm, then responsible disclosure seems like an attractive alternative. But when “responsible” disclosure is the norm, then the threat—or act—of full disclosure becomes “irresponsible”. Remember that lawyers have more influence on the system than hackers do!

How will a judge contemplate full disclosure compared to “responsible” disclosure?

So instead, I’d advocate anonymous disclosure. Just publish complete details of the vulnerability, together with proof of concept code in some less lawsuit-addled jurisdiction than the US. Publish anonymously. Use proxies.

If a “responsible” company wants more details, then they can offer an indemnification and release from liability. In writing. Signed by a responsible officer of the corporation.

George Fujimori January 23, 2007 2:50 PM

@Foolish Jordan

Your point is quite specious. Whether or not someone knows that the software is insecure is irrelevant. The clear and present issue is in fact that it’s insecure. The responsible party who created that piece of insecure software should prevent and remedy that situation.

Your and Marcus’s arguments are based solely on the premise that attackers can’t identify software vulnerabilities without researchers. This is patently false. Therefore, the argument doesn’t hold. Marcus is a very bright individual, but he seems to have been indoctrinated with the good, evil, and axes-of-bad-guys mentality that’s very prevalent in the federal government and government contractor circles.

Perhaps rephrasing this helps: did the grad student in Indiana make airline flight less secure by showing people how easy it was to circumvent the system? Did the FBI raid on his house make airline flight more secure? Could criminals have figured this out themselves?

Rob January 23, 2007 3:26 PM

@Pat Cahalan
“Marcus does have a point, in that one of the claims made by proponents of vulnerability disclosure is that it will make companies make secure software to begin with, and that has demonstrably not occurred.”

It’s the economics. I think of it as “outsourcing to security researchers”, or even “outsourcing to attackers”. The cost is some loss of reputation — possibly, eventually. The benefit is zero dollars spent in finding security bugs.

If a manager came to me, the CEO, and said we could outsource for zero expenditure and possibly only a minor reputation hit in some future quarter, I’d be an idiot not to take him up on the deal.

On Wall Street, your reputation in future quarters is discounted according to the current quarter’s performance and your direct responses. That’s why when a company underperforms and the stock price drops, the company announces layoffs and the stock price goes up. In other words, HAVING good security doesn’t count for much, but FIXING security problems (or appearing to do so) does.

It may seem strange, but companies are only acting on the perceived incentives and disincentives, and even a simple economic model should be able to predict their general behavior.

Pat Cahalan January 23, 2007 3:32 PM

Supremely off topic post, I’m just curious:

Bruce, you and Marcus seem to know each other pretty well (you’re in his wedding pictures and all) but you seem to come down on the opposite sides of some of these issues pretty regularly.

I also suspect your personal politics are kind of bipolar, just reading his site and yours.

Do you guys ever agree on anything? 🙂

derf January 23, 2007 3:52 PM

@Roy

There are only a handful of roads that exit New Orleans (it’s in a swamp and roads are no longer allowed in “wetlands”), so the traffic jam was inevitable, but exacerbated by the late evacuation order from Mayor Nagin. The emergency planning was public enough that quite a number of people in the state knew the city needed 72 hours to evacuate properly. Apparently, Nagin didn’t get or ignored the memo.

The example above shows one of the problems with “full disclosure”. Part of the full disclosure needs to be verifying that “the public” AND the proper people inside the software manufacturer are notified of the vulnerability. It does no good to post the vulnerability on Bob’s blog and call it “fully disclosed” if no one is reading Bob’s blog.

Foo January 23, 2007 5:08 PM

Here are my two cents from the vendor side.

I’m a product manager. A hole was discovered in my product and the hacker was kind enough to contact us first. The ground rules he laid out were simple: fix it in a week or he goes public.

The opportunity to talk to him was invaluable. As a policy, I treated such reports as high-profile bugs, outranked only by an actual customer outage. (Outages are rare, but when they happen they are more costly than a security issue.) Fortunately we had no outages during the week, and the regular communication made him willing to work with us past the one-week limit.

The key is for both sides to treat the event with respect. When one side doesn’t give it respect, then let the chips fall where they may — including full disclosure. On the flip side, if the vendor is doing their diligence to address the matter and be respectful to the person making the discovery, full disclosure helps no one. The fix won’t come any sooner and users are put at risk without any benefit.

X the Unknown January 23, 2007 5:29 PM

@nameless & @Pat Calahan

‘”you, as a customer, you should be allowed to do security tests”

I had considered that angle before my last post; at first I found myself thinking along similar lines but eventually decided that the idea is wrong. Your next statement is much better.’

In the physical world, we do impromptu security tests on establishments we visit all the time. When going to a new store, you automatically check out the neighborhood it resides in. You evaluate the class of patrons visible. You look for evidence of financial stability and solvency before entering into large-ticket warranty transactions (in the vague hope that you can always sue them if they don’t provide service and support). These things are all not only accepted security tests in everyday life, they are essential to maintaining a reasonable system of commerce.

Most of these security-checks cannot easily be made on a web site. However, especially with commercial transactions, we still require the same general categories of assurances as with a physical vendor. Sure, you can research reputations, dealing only with well-known and established vendors.

But, “On the Net, nobody can tell you’re a dog.” How do I know the web-site I am visiting actually IS PayPal, and not some phishing masquerade? I have to do some kind of checking and testing. A simple test is to check the URL displayed in my browser. However, we have seen many ways in which this can be spoofed, with various degrees of verisimilitude.

At a physical bank, you are offering to protect my monies, in return for being allowed to borrow some percentage of it on an ongoing basis. I need not actually attempt to rob the bank to get some feeling for the degree of security you offer. I am free to look around at publicly-accessible portions of the building, to judge structural integrity and the number of access-points. I am even free to try to open any doors not specifically marked otherwise. If the bank doesn’t want me going through the doors, it will lock them (or clearly mark them “no admittance”). I am free to make inquiries about procedures and security measures the bank employs to protect my money. They may decline to answer some of my questions, but if I don’t get some degree of reassurances, I’ll probably take my business elsewhere.

If it were illegal to even enquire about security, or to check a publicly-accessible door to see if it is locked (rather than just being an unmarked restroom facility, for example), then few people would patronize businesses unless there were some sort of governmental “seal of approval” – that includes some kind of indemnification against losses due to malfeasance on the business’ part. How else could I ever judge who is a “safe” business establishment?

Similarly, I think you need to be able to “establish the bona fides” of any purported web-site “establishment”. Unless and until some superhuman entity (like the government) establishes some standardized solution to this problem (maybe via public-key certificates and a web-site licensing program), I am basically “on my own” in evaluating the respectability of any given web-site. One of the ways to do this is to “rattle the doorknobs”, by trying modified URL’s. If I can get “in”, then the “door wasn’t locked”. Either it leads to another public area, or site security is poor. Either result tells me something important. Another test, especially on sites that request personal/financial information, is to put in some “test data” – such as cross-site scripting strings. If the invalid data is dealt with appropriately, I have greater confidence in the site. If not, I NEED to know that, and will probably go elsewhere.

nameless January 23, 2007 6:13 PM

@X the Unknown

Thanks for feedback.

My post was not concerned with detecting phishing or fake/corrupt banks. I just want a way to ensure that some sort of security baseline is in place for Internet banking systems when a user goes to their online bank. Phishing is a separate issue for me.

“I am free to make inquiries about procedures and security measures the bank employs to protect my money.”

There is no reason why any bank or security company should or would want to publicize its security procedures. I am not suggesting that security by obscurity works, but I do think that being discreet about security issues should be the norm here.

“until some superhuman entity (like the government) establishes some standardized solution to this problem (maybe via public-key certificates and a web-site licensing program), I am basically “on my own””

The government would be better advised to mandate security standards for IT systems that process sensitive personal information, such as medical records and banking systems. I know it will not be perfect, and done badly it could create a horrible mess, but right now we seem to have nothing.

“One of the ways to do this is to “rattle the doorknobs”, by trying modified URL’s. If I can get “in””

As I noted in my last post, that could lead to a criminal record in the UK. There has to be a better way.

Bank buildings mostly look the same to me. It’s hard to know how well a bank is really doing from the outside, e.g. Barings, BCCI.

I sympathise with your desire to be able to test something “hands on” but unless you are a professional IT systems penetration tester just how much use is that really going to be?

Ben Liddicott January 24, 2007 4:31 AM

@Aaron

That’s not what “Externality” means. An externality is an effect on a third party unrelated to the transaction. Z-Corp should take into account my willingness to pay for my own security, and will produce it if I am willing to pay what it costs them.

If I am not willing to pay, the responsibility is mine. That’s not external to the relationship.

This is just the dictionary definition of “externality” in any economics reference work, and I don’t know why it is turning out to be controversial or difficult to understand.

If you want to argue that I don’t know what I need, that comes under cost of information/transaction costs, which is a different market failure and could also justify regulation.

If you want to argue that software makers should take more account of my “needs” than I do, that’s a whole different argument.

Ben Liddicott January 24, 2007 4:43 AM

@X the Unknown

If you go around rattling the doorknobs in a real physical shop, you are likely to be asked to leave the shop, at the very least. Especially if you explain why you were doing it.

Few people in Real Life look kindly on that sort of behaviour. We tend to think you should stay out because you are not invited, not because the door is locked.

What you are describing is like sneaking into someone’s house, rummaging through their stuff, and then saying “look, your electrical wiring is not safe, aren’t you lucky I was here!”. They may or may not fix their wiring, but they are unlikely to remain your friend.

If you want to do that, don’t sneak around, ask permission.

Christoph Zurnieden January 24, 2007 12:13 PM

“I sympathise with your desire to be able to test something “hands on” but unless you are a professional IT systems penetration tester just how much use is that really going to be?”

In the case of websites: of no use at all, even if you are a professional penetration tester. To be sure that a website is secure you have to find all holes.
But such a website is a black box; you cannot know what happens inside. To test for all possible holes you would have to test the whole alphabet of possible inputs and–because the particular order of the inputs might matter–the powerset of a finite subset of the Kleene star set of that input alphabet (finite only because the number of states in a physical machine is finite; otherwise it would be the full countably infinite Kleene star set, whose powerset is uncountable and therefore untestable even with a countably infinite set of machines)[0]. Have fun running those tests while watching the universe die.
So testing a website from the outside is not economically feasible.
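
To give a rough sense of the scale involved, here is an illustrative sketch only (toy figures: a byte-valued input alphabet and bounded input lengths, rather than the full set-theoretic argument above):

```python
# Illustrative only: counts the distinct input sequences of length 1..max_len
# over a 256-symbol (byte-valued) input alphabet. Even before considering
# powersets of sequences, the totals are far beyond anything an outside
# tester could ever exercise.

def num_sequences(alphabet_size: int, max_len: int) -> int:
    """Number of input sequences of length 1..max_len over the alphabet."""
    return sum(alphabet_size ** n for n in range(1, max_len + 1))

for max_len in (4, 8, 16, 64):
    total = num_sequences(256, max_len)
    print(f"inputs up to {max_len:>2} bytes: ~{total:.2e} candidate sequences")
```

Already at 64 bytes the count is around 10^154 sequences, which is the point: exhaustive black-box testing from the outside cannot establish that a website is secure.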

On the other side, it is very easy to find a vulnerability in a website: just try a handful of known exploits–e.g. SQL injections–on a wide range of websites. Google will be glad to help you to reduce runtime and decrease transmission costs.

The only way to avoid a hit by at least the known vulnerabilities is to stop using that program (or a part of it) immediately when you become aware of the existence of the bug, and to restart only after you have applied the patch.

Now tell me: how can you do that if nobody informs you about the security hole at the time it was found?

“On the other flip side, if the vendor is doing their diligence to address the matter and be respectful to the person making the discovery, full disclosure helps no one. The fix won’t come any sooner and users are put at risk without any benefit.”

The researcher reported the bug to the manufacturer and waited a week. Or two. Or a month. Or a year. Or until a patch is out. Or the bug gets an internal WONTFIX. Meanwhile a lot of crackers poke in a lot of software, including yours, looking for an exploitable bug; and they don’t do it for fame and honor only, they do it for profit now.

It’s a race where the black hats have a large competitive edge, and in a race time matters, doesn’t it? And you want to make me believe that it is my moral duty to hold back publication of the vulnerability until the manufacturer has a patch ready? No chance for all of the other customers to decide on their own about stopping usage until a patch is out, because you think you know better? You are kidding, aren’t you?

There’s no alternative to full and immediate public disclosure of software bugs. You may delay the publication of a proof of concept by 24 hours to give the sysadmins of all timezones the chance to act accordingly (stop the software, cut connections, etc.) and to avoid the actions of the average script kiddie, but the description of the bug alone should be sufficient to repeat the steps that led to the finding of the bug (to avoid the DoS in a scientific and not a social way).

But that discussion distracts from the real problem: that almost all software, commercial and non-commercial alike, is still designed and written by some more or less sophisticated trial-and-error method. I don’t think that’s very professional.

CZ

[0] still no plans to include LaTeX support in MovableType? 😉

Ranum Fan January 24, 2007 2:50 PM

@Pat Cahalan

Bruce, you and Marcus seem to know each other pretty well … Do you guys ever agree on anything? 🙂

1) Generally security still sucks.
2) Beer is good

I think those are the two issues they agree on.

frimble January 25, 2007 2:05 PM

Ben Liddicott is correct in injecting transaction costs and information asymmetries into the discussion. One of the problems of full disclosure is getting the information to the client (not necessarily the consumer) in a form they can understand. For example, Apple has been touting the “security” of OSX. In point of fact, it’s full of holes that are well-known in a subsegment of the developer community. For years, Apple has left open “object swizzling” (InputManager) and mach_inject-based injection — the latter only partially fixed on the intels, and the former considered a “feature, not a bug.”

The consumers are easily bamboozled; some of the developers feel it’s in their interest to cover it up, and a lot of IT folks, the major customers who would bear the brunt of the costs, are not well versed in either Cocoa programming or kernel system calls.

But just google mach_inject — a library to over-write running code was released back in ’02, and is used for good in a number of open-source utilities. Only in the latest release, and only on intels, do they require super-user permission for code injections. And they’ve done absolutely nothing about the fact that if you slip a file into the right directory (no special permission), you can intercept and re-write any objective-c function call, which is the basis of almost all interface elements on OSX.

Aaron Guhl April 9, 2009 11:59 AM

I’ve been reading a lot on the history of full disclosure. I think it is a controversial topic for many reasons. If you just look at the surface of it, it is easy to say that full disclosure is always good. But looking at it in a black-and-white fashion like that is not good. Partial disclosure many times is the best way to go and I think a lot of guidelines today are leaning toward that format. Partial disclosure of a vulnerability gets the vendor to act fast to avoid a PR disaster, while not releasing full information on the flaw to the general, untrusted public.

Clive Robinson April 9, 2009 2:42 PM

@ Aaron Guhl,

“Partial disclosure many times is the best way to go and I think a lot of guidelines today are leaning toward that format.”

Although I’m no fan of “name and shame” (I’ve made a few bloopers in my time), I am however a fan of “education by real-world example”.

And this is the same problem. If you don’t use real-world examples of “what went wrong”, even the best of students start to nod (off) without really taking it in.

The advantage of Open Source is that people see the before and after, so they get the real-world example. Also, people’s reputations don’t appear to get hammered when mistakes are made.

This is not so of closed source, but I have to ask myself: what are their reputations getting hammered for?

The fact that they made a mistake? The fact that they have a history of taking forever to patch? The fact that their patching methods are unwieldy and sometimes used to slide other things in at the same time (such as DRM)?

Small companies I have worked with have had more success with their customers when they have put their hands up, said there is a problem, and worked towards workarounds and then fixes in reasonable time frames. Those that tried to keep it quiet whilst they fixed the problem tended to get treated with suspicion.

I guess the important issue is to manage your customers expectations honestly.
