Recent Developments in Full Disclosure

Last week, I had a long conversation with Robert Lemos over an article he was writing about full disclosure. He had noticed that companies have recently been reacting more negatively to security researchers publishing vulnerabilities about their products.

The debate over full disclosure is as old as computing, and I’ve written about it before. Disclosing security vulnerabilities is good for security and good for society, but vendors really hate it. It results in bad press, forces them to spend money fixing vulnerabilities, and comes out of nowhere. Over the past decade or so, we’ve had an uneasy truce between security researchers and product vendors. That truce seems to be breaking down.

Lemos believes the problem is that because today’s research targets aren’t traditional computer companies—they’re phone companies, or embedded system companies, or whatnot—they’re not aware of the history of the debate or the truce, and are responding more viscerally. For example, Carrier IQ threatened legal action against the researcher who outed it, and only backed down after the EFF got involved. I am reminded of the reaction of locksmiths to Matt Blaze’s vulnerability disclosures about lock security; they thought he was evil incarnate for publicizing hundred-year-old security vulnerabilities in lock systems. And just last week, I posted about a full-disclosure debate in the virology community.

I think Lemos has put his finger on part of what’s going on, but that there’s more. I think that companies, both computer and non-computer, are trying to retain control over the situation. Apple’s heavy-handed retaliation against researcher Charlie Miller is an example of that. On one hand, Apple should know better than to do this. On the other hand, it’s acting in the best interest of its brand: the fewer researchers looking for vulnerabilities, the fewer vulnerabilities it has to deal with.

It’s easy to believe that if only people wouldn’t disclose problems, we could pretend they didn’t exist, and everything would be better. Certainly this is the position taken by the DHS over terrorism: public information about the problem is worse than the problem itself. It’s similar to Americans’ willingness to give both Bush and Obama the power to arrest and indefinitely detain any American without any trial whatsoever. It largely explains the common public backlash against whistle-blowers. What we don’t know can’t hurt us, and what we do know will also be known by those who want to hurt us.

There’s some profound psychological denial going on here, and I’m not sure of the implications of it all. It’s worth paying attention to, though. Security requires transparency and disclosure, and if we willingly give that up, we’re a lot less safe as a society.

Posted on December 6, 2011 at 7:31 AM • 39 Comments

Comments

Michael Josem December 6, 2011 7:59 AM

Isn’t there an obvious middle ground to disclose the vulnerability to the affected party and give them a bit of time to fix it?

Doesn’t that satisfy the security requirements? Obviously, the publicity seeking researcher doesn’t receive as much publicity, but publicity doesn’t seem to be an appropriate goal for this situation.

hooduthunkit December 6, 2011 8:07 AM

Much original hacking was against the phone systems: the colored boxes and other techniques. IIRC the phone companies prosecuted even small thefts as felonies, which succeeded in keeping the number of people doing this small. Once bulletin boards sprang up, the knowledge spread and became a real problem for the telcos. With the arrival of the internet, the BBSes became nodes in a large, public network and the problem couldn’t be contained any longer, so the system itself had to change.

So damage-containment by threat and severe punishment is an old, established, and proven technique; but only if corporations can prevent the initial release.

Natanael L December 6, 2011 8:15 AM

If more traditional security companies could act as “relays” for the information, the researchers would not have to worry about the response of the affected companies.
All they’d have to do is to provide a complete description of what they’ve found, and the security companies could have a standard procedure, like notifying the affected company and publishing the details after a certain amount of time.

It’s harder to threaten an established and trusted security company that’s just passing information on than it is to threaten an individual researcher.

RobertT December 6, 2011 8:18 AM

“Lemos believes the problem is that because today’s research targets aren’t traditional computer companies — they’re phone companies, or embedded system companies, or whatnot -”

I think to some degree you are right, but another, perhaps more important, aspect is the emergence of “Systems on a Chip” (SOC). Today we have single-chip smartphones with WiFi, BT, GPS, TDSCDMA, WCDMA, GSM… all on one chip, which also contains a couple of 2GHz processor cores and an encryption core or two. In case I forgot to mention it, the whole package is glued together by millions of lines of software/firmware, and it’s all on ONE very expensive chip. Unfortunately you’ve decided it’s broken. There’s also that nagging little issue of winning maximum market share while the product is in a state of flux. Think of it this way and you’ll understand why the security-vulnerability messenger must be shot, several times if needed.

Thomas Mackenzie December 6, 2011 8:34 AM

I agree with Michael Josem. There should definitely be time for the vendors and the security researchers to discuss and try to get a fix out without having everybody in the world know about it.

This really depends on what you mean by full disclosure. Typically full disclosure means you release the vulnerability without alerting the vendor. However, in the past it has meant five days’ notice or something like that.

This is what we tried to tackle with upSploit

jason December 6, 2011 8:45 AM

If the DHS discloses vulnerabilities it’s considered scare mongering. If they don’t, it’s denial. I don’t get it. Maybe blogs are themselves a sort of closed loop in which predetermined conditions can be set to anything you wish with desired conclusions inevitable.

Alan December 6, 2011 8:45 AM

What this shows then is that we want to feel secure more than we want to actually be secure. Maybe everyone should hire cheerleaders and therapists instead of security researchers.

Clive Robinson December 6, 2011 8:58 AM

@ Bruce,

There is another aspect you have not covered, which is that the companies involved with mobile phones and consumer electronics such as games consoles have in recent times become very litigious.

You only have to look at Apple, HTC, Microsoft, Motorola, Samsung et al suing each other in whatever court they are allowed to cross the threshold of.

The old “I won’t sue you over my patents if you don’t sue me over your patents” appears to be breaking down due in part to patent trolls (of which Motorola is becoming one).

As the economic headroom to make a profit shrinks due to the recession, I fully expect litigation to protect or enhance market share to increase significantly.

Peter Maxwell December 6, 2011 9:32 AM

With reference to recent security disclosures concerning companies that aren’t software houses, what is disclosed doesn’t usually fall under the rubric of a “vulnerability”: much of the time it isn’t a simple mistake or omission but rather something unpopular that the company concerned has deliberately done, e.g. CarrierIQ deliberately logs keystrokes, Apple deliberately logs GSM base station data on iPhones, etc.

It isn’t the traditional scenario of some development mistake, these are companies that are embarrassed due to malfeasance.

Dustin Schumm December 6, 2011 9:36 AM

This whole Carrier IQ thing, I think, is being semi blown out of proportion. I understand that yes, potentially this company has access to your texts and call logs; however, the people who are actually mad about this apparently don’t realize that Verizon/AT&T/Sprint or whoever does own that network. Do they not think that these companies can simply look at their logs to see this information if they want to? It appears to me like people just want to yell “big brother” at anything the media brings up. Why are people not mad at DHS/CIA/NSA for being able to tap your telephone line at whim? Get rid of the Patriot Act before you care about your cell phone.

Alan Kaminsky December 6, 2011 9:39 AM

Like repressive governments, corporations are now trying to suppress free speech about security vulnerabilities in their products. Because of their bad behavior, corporations no longer deserve to be notified of vulnerabilities in advance of publication. It’s time for the gloves to come off. Security researchers should start posting their findings, anonymously, on Wikileaks.

Vikram Phatak December 6, 2011 9:45 AM

A truce is maintained only when both sides have something to lose. It appears some companies have forgotten what it was like before full disclosure. It may take a few researchers dropping some 0-Days to restore the balance…

vwm December 6, 2011 10:05 AM

@Michael Josem: The problem with such “Responsible Disclosure” is that it also gives the affected party time and incentive to get a restraining order against the researcher.

Anonymous666 December 6, 2011 10:18 AM

It’s the same thing over and over again. Power rejects transparency, power rejects fault, power rejects responsibility.

We see it with the government and WikiLeaks, we see it with AT&T and Apple over the iPad emails, and we see it again with Apple.

It is the corrupting force of power that makes people act unethically. And Apple is obviously unethical.

TimH December 6, 2011 10:30 AM

vwm is absolutely right on who has the power. If an individual copies private company data from their workplace, they can go to jail. If Verizon knowingly captures users’ private HTTPS banking login data streams, the most the company faces is a fine, eventually, with no admission of guilt and no jail for management.

Peter Maxwell December 6, 2011 11:27 AM

@Dustin Schumm at December 6, 2011 9:36 AM

“This whole Carrier IQ thing, I think, is being semi blown out of proportion. I understand that yes, potentially this company has access to your texts and call logs; however, the people who are actually mad about this apparently don’t realize that Verizon/AT&T/Sprint or whoever does own that network. Do they not think that these companies can simply look at their logs to see this information if they want to?”

The CarrierIQ software also monitors keystrokes and outgoing login credentials for TLS/SSL-secured sites, which normally a provider/carrier couldn’t do unless they remotely updated the phone’s firmware (I vaguely remember reading that that happened in an FBI case a while back). So it is far more intrusive.

“Why are people not mad at DHS/CIA/NSA for being able to tap your telephone line at whim? Get rid of the Patriot Act before you care about your cell phone.”

We are. This issue is different as it has the potential to be done en masse.

Peter December 6, 2011 12:57 PM

Oftentimes site owners don’t acknowledge vulnerability notifications. It’s like talking to /dev/null. Depending on how connected you are there is a possibility the vulnerability will be closed, but it’s not a given. You could just forget about it and move on, but that will build a culture of ignorance. Wouldn’t you want someone to let you know if your fly is open?

billswift December 6, 2011 3:51 PM

“On the other hand, it’s acting in the best interest of its brand: the fewer researchers looking for vulnerabilities, the fewer vulnerabilities it has to deal with.”

Actually, that is acting in the interests of current management. The brand is going to be around much longer and it would be more in the “brand’s interest” to locate the bugs as quickly as possible and close them before they are used as attack vectors.

Anustup December 6, 2011 4:30 PM

Sigh. Isn’t the real question: “Is the market (of security researchers, vulnerability providers, and vendors/companies/government agencies) an efficient market or not?”

If inefficient, obscurity and suppression of information could work. If efficient, it doesn’t. We can’t always assume all scenarios are efficient.

Thomas December 6, 2011 4:31 PM

@Peter
“It’s like talking to /dev/null. ”

I’ve never had /dev/null threaten to sue me.

@Dustin Schumm
“This whole Carrier IQ thing, I think, is being semi blown out of proportion.”

If so then Carrier IQ have to take some of the credit for that, given their ham-fisted deny/sue/mislead reaction.

I think the sad fact is that the facebook/twitter/flickr/??? generation have no idea what privacy is and why one should have it.

http://www.abc.net.au/news/2011-11-30/british-journo-describes-tabloid-culture-of-fear/3702966

“””Privacy is particularly good for paedophiles, and if you keep that in mind, privacy is for paedos, fundamentally, nobody else needs it.”””

That was a former deputy editor of “News of the World”. When attitudes like that prevail in positions of such power and influence, is it any wonder?

It’s difficult to draw attention to an issue no one actually cares about.

hate loose thinking December 6, 2011 5:15 PM

With all due respect, “Apple’s heavy-handed retaliation against researcher Charlie Miller” is NOT an example of that.

Apple was actually talking to Charlie Miller about properly crediting him in a forthcoming security update, when they learned that he had implemented the vulnerability in an app that people had downloaded and used. That was what got him banned from the developer program, not the fact that he found the vulnerability.

Please fix the text. Thanks.

qim December 6, 2011 7:01 PM

While I think this post is partially right from the security industry’s point of view, there are two sides to everything, and I think that the security industry has a bit of a perception problem and a blind spot regarding it.

For example, Charlie Miller made trojaned programs available to the public as part of his ‘security research’. Was this necessary to prove his vulnerability? No, it was him playing an “I’m cleverer than you” game with Apple, and Apple did what ANY company would do in those circumstances… they kicked him out.

Just look at the way that sections of the industry promote themselves, and the highly visible Black Hat conference briefings and the Pwnie Awards. There is an arrogance, unpredictability and frankly childish mentality that runs through the industry, and until that changes, companies will tend to deal with you at arm’s length.

If the security industry wants to have the trust of companies, they need to look at how they portray themselves, and act, as a group, more like professionals and be seen as such. Until then, you’ll be seen, not as a helper, but as a hindrance.

RobertT December 6, 2011 10:05 PM

Just one more comment on this.
For such a complex SOC as a smartphone there will be a known-bug list. I’ve seen such lists that were over 100 pages long. That’s just known faults/bugs that must be fixed.

There will be an additional “like to have” list that gets considered for future revisions of the chip. As far as the chip/system maker is concerned, security problems that are not being actively exploited belong at best on the “like to have fixed” list.

Each time you make a fix (software, firmware, hardware, layout…) you can create new potential problems, so fixing the second-tier problems opens a real Pandora’s box.

On top of all this you have application developers (like CarrierIQ) making what amount to rootkits. You have carriers wanting to know how often the phone’s WiFi capability is being used for voice (SIP calls), and you have location-based apps mining all sorts of information. Some of these guys are the direct customers of the system maker, so they have more influence than the actual end customer (you). This all means that the value chain is very different from that of a laptop or server and contains a lot more interested parties.

Aladdin December 7, 2011 6:12 AM

“It’s similar to Americans’ willingness to give both Bush and Obama the power to arrest and indefinitely detain any American without any trial whatsoever.”

As opposed to the power to arrest and detain any non-American (in any country) without any trial whatsoever – which is entirely uncontroversial.

Coyne Tibbets December 7, 2011 11:18 AM

“On the other hand, it’s acting in the best interest of its brand: the fewer researchers looking for vulnerabilities, the fewer vulnerabilities it has to deal with.”

I find this a remarkable statement; a reflection of an unfortunate inability of people today to correctly weigh risk.

The company can find out about a vulnerability in two ways: (1) when a researcher investigates; or (2) after the thieves in “Azmenistan” take advantage of the vulnerability to rob thousands of their customers.

In the former case “brand damage” is minor from the start and can be relieved completely by positive, proactive response by the company.

In the latter case (which is probably inevitable given that the “Azmenistan” group continually looks for such vulnerabilities) the ensuing lawsuits will result in much larger financial losses, permanent damage to customer confidence, and hence permanent damage to the brand.

Which is better? Obviously the former, but it seems that people prefer to hide their head in the sand, hoping that the latter never happens; even though it probably will. So, clearly, they are not able to correctly respond to risks to the brand name.

Clive Robinson December 7, 2011 1:17 PM

@ Coyne Tibbets,

“Which is better? Obviously the former, but it seems that people prefer to hide their head in the sand, hoping that the latter never happens; even though it probably will.”

Actually, from the point of view of “shareholder” value, it’s the amount of perceived profit and thus increasing share value, with a secondary helping of dividend.

The exec knows their bonus and, importantly, their next job in about a year are based on maximising return. So from their viewpoint anything that negatively affects the next quarter’s figures is of the utmost importance, which essentially means “don’t spend money on bug fixes”; “spend on legal representation on credit” is better value…

Coyne Tibbets December 7, 2011 8:47 PM

@ Clive Robinson

“The exec knows their bonus and, importantly, their next job in about a year are based on maximising return. So from their viewpoint anything that negatively affects the next quarter’s figures is of the utmost importance, which essentially means ‘don’t spend money on bug fixes’; ‘spend on legal representation on credit’ is better value…”

…and in 18 months, screw the shareholders. Which is a breach of fiduciary duty, but since the shareholders can’t evaluate him any better than he can evaluate risk, I guess they deserve it.

Jay December 7, 2011 10:07 PM

@RobertT: Actually SoCs are relatively cheap (at least compared to the devices they’re in). Think $20. Storage and LCD screens are the main cost factor in smartphones – take a look at an iPhone teardown / BOM sometime.

Also, the actual SoC hardware bugs (“errata”) are usually not security related (except by Linus standards), and generally have workarounds that can be implemented in the OS drivers.

Welcome to the future – security patches don’t need hardware changes…

RobertT December 8, 2011 1:18 AM

@Jay: “Welcome to the future – security patches don’t need hardware changes…”

You’re certainly right, Jay; most of my security concerns are very antiquated. For the most part, I focus on security problems at the lowest levels, and thereby often lose sight of many modern methodologies. It is very true that advances in systems engineering and design abstraction have made the lower-level, device-type security concerns irrelevant to most people. Higher-level tasks/commands can handle all the messy implementation details.

With a little diligent study, I hope to be able to rectify my knowledge deficiencies and catch up with the rest of the world. However, I’ve got a feeling that the future is not for my kind.

El Viejo December 8, 2011 8:02 AM

The offshoot of this policy of using the law as a bully against petty thieves and ‘browsers’ is that the security problem is not fixed and foreign spies have a field day, while the psychology of it all bleeds over into other areas. The Israelis were astounded that we didn’t have locking cabin doors on our aircraft. The problem is traceable to the simpleton mindset that says: buy the cheapest. Get by with the cheapest. Don’t apply any intelligence to your purchasing. Never mind the fact that a cheap light bulb has to be replaced often; just buy the cheapest.

Dirk Praet December 8, 2011 12:02 PM

When litigation or incarceration becomes cheaper than exposure to embarrassment and fixing the problem, for many companies it becomes the logical thing to do, especially if their government is showing an identical disposition in dealing with similar issues.

Nick P December 9, 2011 12:13 AM

@ RobertT and Jay

RobertT, you’re hilarious. You must be joking. It’s funny because I’ve worked from the top to the bottom in INFOSEC & you’re working bottom-up. Yet we’ve both come to the same conclusion: the hardware & firmware must be secure if the layers on top are to be called secure. There is malware in the wild that exploits CPU errata. That alone proves that abstraction won’t cut it. Abstraction is just misdirection & malware authors are usually good at overcoming misdirection. Secure software on hardware designed w/out security in mind is still insecure. Jay’s ill-informed comments don’t change this.

RobertT December 9, 2011 2:42 AM

@Nick P
“You must be joking….”
I usually like to see the monkeys move to the end of the branch, jump up and down with joy, and throw their own faeces in the air, before I show them the chainsaw…

Oh well, what do they say? There’s one born every minute… so another one will be along soon.

Clive Robinson December 9, 2011 7:05 AM

@ Nick P,

“Yet, we’ve both come to the same conclusion: the hardware & firmware must be secure if the layers on top are to be called secure.”

But do they?

Sorry, I don’t agree: as an over-generalised statement it fails, and even in a more restricted scope it can be dealt with logically, if not practically as well.

To start off we need to examine the roots of the word “atomic” and what it actually means in real terms at all levels.

Put over-simply, it means “indivisible, the smallest part possible”. Now if something cannot be broken down further, what effect does this have on its security?

Security is a way of thinking rather than an actuality; that is, nothing is inherently secure, it’s how we design it to behave with other parts that puts it in the “secure” as opposed to the “insecure” way of behaving. And, as it’s a mindset not an actuality, it can change with time; thus our notion of what was “secure” say ten years ago may now be regarded as “insecure” without the system or its mode of operation having changed in any way.

The point is we start building our systems with atomic parts that are not secure, and by imposing some kind of order we can make them comply with our current notion of what is “secure”. Thus at each and every level of the design and usage of a system it starts with underlying “effectively atomic” parts that can be built into a system that is either secure or insecure, regardless of whether those atomic parts are secure or insecure.

We’ve actually known this for several thousand years; there is the old story of the two automatons guarding a gateway with two buttons or levers etc., where if you use the right one you pass safely, and the wrong one you die. You are allowed but one question, of either automaton, but all you know of them is that one always tells the truth and the other always tells a falsehood…

The solution, as we’ve known since ancient times, is to ask one automaton which button the other automaton would say is the wrong button. The result is you get told the same answer irrespective of which automaton you ask, because effectively your single question is asked of both automatons and the falsehood of one automaton is negated by the form of the question.
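For concreteness, here is a tiny enumeration of both cases; the function names and button labels are purely illustrative, not part of the original riddle:

```python
# Two automatons: one always answers honestly, one always inverts the answer.
SAFE, WRONG = "safe button", "wrong button"

def truth_teller(honest_answer):
    return honest_answer

def liar(honest_answer):
    # Only two possible answers, so lying means naming the other button.
    return SAFE if honest_answer == WRONG else WRONG

def ask(asked, other):
    """Ask `asked`: which button would the OTHER automaton say is the wrong one?"""
    others_claim = other(WRONG)   # what the other would actually answer
    return asked(others_claim)    # ...as reported by the automaton we asked

print(ask(truth_teller, liar))    # -> 'safe button'
print(ask(liar, truth_teller))    # -> 'safe button'
# Either way the reply is identical: exactly one falsehood sits in the chain,
# so the button named as "wrong" is in fact the safe one.
```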

So, a couple of important points to take away:

1. Use logically different automatons.
2. Use all automatons to derive the answer.

The only proviso in this particular case is that the automatons behave consistently all the time, but as I will describe later that constraint can be mitigated.

Now by far the majority of our systems use a single CPU, or two or more identical CPUs; this is obviously a violation of point 1 above.

Also we tend to design our systems to be “efficient”, which usually means that if there are two or more CPUs they are usually used to do different jobs, thus obviously violating point 2 above.

Although high-assurance systems usually use several CPUs in parallel, in some cases performing the same task, few if any use different CPUs. However, the likes of NASA have in the past used different, isolated teams to write software against the same specification and run it with a voting protocol to detect an unsafe condition in a three-or-more-CPU system.

Needless to say, the difference between “safety” and “security” is largely a semantic one, and in some languages the ideas are covered by a single word. So it can be seen that the voting idea will work to detect some if not most forms of “insecure” operation by a minority of the CPUs with respect to the majority.

So we have a known starting point for our secure design from effectively insecure but atomic parts. So yes, we can use SoC parts provided we know that they are genuinely different internally at the higher levels.

Now the main objection is “cost of duplication” that is we have a system that uses three or more parts to do the same job…

Does this actually matter in the long term?

In most cases no, and in fact if you go about it correctly it gains you something at a more moderate cost than the current ways of doing things.

To see this, think about safety systems in cars. If you took a 1950s car and added all the modern safety features we’ve come to expect, then not only would cars be prohibitively expensive to buy, but the running costs with all the extra weight would so badly affect both fuel consumption and handling that it would be doubtful whether their utility, let alone their safety, would be improved.

But we can clearly see this is not the case, especially in Formula 1 racing cars, many of which still use engines designed and first put in racing cars in the 1950s. What has happened is that the safety features have been “designed in”, not “bolted on”. That is, by sacrificing a little efficiency in one place we gain an increase in resilience, and by doing this in several places the combined results are considerably more than the sum of the parts. In everyday cars we can see this in the use of side-impact protection systems and crumple zones, because they get designed into the monocoque. The result is a lighter, stronger and safer car than would be possible with the separate chassis and bodywork techniques of the 1950s. There is a downside to the monocoque, however: if you bend it you often have to replace it in its entirety to regain the strength and safety, unlike the old “cut-n-patch” and “cut-n-shut” repairs of dubious safety from the 1950s.

Now, a little earlier I mentioned that there appeared to be a constraint on the two points, that of “consistent” behaviour, and indicated that it could be mitigated. That is, it can be “designed out” to a certain degree. Think back to the voting system: if one system produces a result that is at variance with the other two, it is effectively “switched out” of the system by the majority vote.

In NASA’s and other safety-critical voting systems this is generally regarded as a hard fault and the unit is switched out permanently. But a little thought shows that this is far from necessary: provided the overall system follows the majority vote and only one system disagrees, the result is not in question; it’s just that you go from a majority-vote system to an agree-or-fail system.

But what if you let it run on and it starts agreeing with the other systems again?

This is effectively what you do when testing a system: you find a potential fault but cannot repeat it, so you let it run to see if it re-occurs; if not, you call it one of several names such as “a transitory fault” or “a non-reproducible fault”. In both cases the error might be due not to an actual failure of the hardware but to a transitory effect caused by, say, an ionised particle. In communications systems you accept that transitory effects occur and are not faults of the system, and you build in threshold and/or error detection and correction systems.

Adding a threshold system to a three-or-more-participant voting system allows for transitory behaviour; likewise error detection and correction add to the reliability of the overall system.

In many cases the cost of adding these systems is at worst marginal, and at best they can provide significant gain at a lower overall cost.
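To make the voting-plus-threshold idea concrete, here is a minimal sketch; the class name, the threshold value and the toy implementations are purely illustrative assumptions, not any particular avionics or security design:

```python
# A majority voter over several diverse implementations of the same task.
# A unit is only switched out permanently after it has disagreed with the
# majority DISAGREE_THRESHOLD times in a row, so a one-off transitory upset
# does not disqualify it.
from collections import Counter

DISAGREE_THRESHOLD = 3  # consecutive disagreements before a unit is retired

class ThresholdVoter:
    def __init__(self, units):
        self.units = list(units)               # callables: the diverse "automatons"
        self.strikes = [0] * len(self.units)   # consecutive-disagreement counters
        self.active = [True] * len(self.units)

    def run(self, *args):
        results = [unit(*args) if ok else None
                   for unit, ok in zip(self.units, self.active)]
        votes = Counter(r for r, ok in zip(results, self.active) if ok)
        answer, _ = votes.most_common(1)[0]    # majority (or plurality) answer
        for i, (r, ok) in enumerate(zip(results, self.active)):
            if not ok:
                continue
            if r == answer:
                self.strikes[i] = 0            # agreeing again: forgive the transient
            else:
                self.strikes[i] += 1
                if self.strikes[i] >= DISAGREE_THRESHOLD:
                    self.active[i] = False     # treated as a hard fault: switch it out
        return answer

# Three deliberately different ways of computing the same specification (x squared):
voter = ThresholdVoter([lambda x: x * x,
                        lambda x: x ** 2,
                        lambda x: sum([x] * x)])
print(voter.run(7))  # -> 49, even if one unit glitches occasionally
```

The same skeleton applies whether the “units” are separate CPUs, separately developed binaries, or independently written libraries; the cost is the duplication, and the gain is that a single misbehaving (or subverted) unit cannot silently determine the result.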

The simple fact is that although we know of these systems and their characteristics, and have applied them very successfully to safety systems and quite a few physical security systems, we appear not to have considered them for computer security systems in the academic community.

It is one of the things I’ve considered in the “Prison-v-Castle” idea.

haxxmaxx December 9, 2011 5:53 PM

Bruce, there is a whole mini-movement against this going by the name AntiSec, which is out to prove ‘whitehats’ are more corrupted than blackhats.

They essentially rail against the full disclosure of bugs and flaws, which typically end up on a 0-day list favored by criminals who use it to launch unsophisticated attacks and rake in huge amounts of money. I don’t know how old the members of AntiSec are, but I remember before these lists were around and how easy it was to bypass system after system after buying or trading bugs in underground crime forums back in the day. I don’t see how not disclosing is the answer, since it didn’t work in the ’90s.

Rick Damiani December 11, 2011 3:40 AM

A more interesting development, I believe, is the practice of some security firms of finding and then selling access to 0-day exploits.

Josef Roehrl December 12, 2011 6:29 PM

It seems obvious that the company responsible for the security breach should be given an opportunity (and sufficient time) to fix it before it is made public. That’s the right thing to do.

Are you serious December 15, 2011 7:32 AM

Apple’s heavy-handed retaliation? He violated the conditions of the developer program when he published his proof-of-concept to the App Store. Apple had every right to revoke his status. As a user, I would expect no less. There is no excuse for him using the general public as a tool in a publicity stunt. Had he given Apple reasonable heads-up to fix the issue before disclosing it publicly, I think their reaction would have been very different.
