Simultaneous Discovery of Vulnerabilities

In the conversation about zero-day vulnerabilities and whether "good" governments should disclose or hoard vulnerabilities, one of the critical variables is independent discovery. That is, if it is unlikely that someone else will independently discover an NSA-discovered vulnerability -- the NSA calls this "NOBUS," for "nobody but us" -- then it is not unreasonable for the NSA to keep that vulnerability secret and use it for attack. If, on the other hand, it is likely that someone else will discover and use it, then they should probably disclose it to the vendor and get it patched.

The likelihood partly depends on whether vulnerabilities are sparse or dense. But that assumes that vulnerability discovery is random. And there's a lot of evidence that it's not.
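If discovery really were random and independent, the rediscovery probability would follow from a simple sampling model. Here is a minimal sketch of that assumption (all parameters -- pool size, find rate, researcher count -- are invented for illustration):

```python
# Toy model of independent rediscovery under the "random discovery" assumption.
# Each of k other researchers independently finds m bugs per year, drawn
# uniformly from an effective pool of P discoverable bugs. The chance that
# a specific hoarded bug is rediscovered by someone is 1 - (1 - m/P)^k.

def rediscovery_prob(pool_size: int, finds_per_researcher: int, researchers: int) -> float:
    """Chance that at least one other researcher independently finds a given bug."""
    p_single = finds_per_researcher / pool_size  # per-researcher hit probability
    return 1 - (1 - p_single) ** researchers

# "Sparse" regime: bugs drawn from a huge effective pool -> rediscovery is unlikely.
print(rediscovery_prob(pool_size=100_000, finds_per_researcher=10, researchers=50))

# "Dense" regime: small effective pool -> rediscovery is very likely.
print(rediscovery_prob(pool_size=200, finds_per_researcher=10, researchers=50))
```

The clustered discoveries described below are exactly what this model does *not* predict: under random sampling, simultaneous finds of the same bug should be rare unless the effective pool is small.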

For example, there's a new GNU C Library vulnerability that lay dormant for years and was independently discovered by multiple researchers, all around the same time.

It remains unclear why or how glibc maintainers allowed a bug of this magnitude to be introduced into their code, remain undiscovered for seven years, and then go unfixed for seven months following its report. By Google's account, the bug was independently uncovered by at least two and possibly three separate groups who all worked to have it fixed. It wouldn't be surprising if over the years the vulnerability was uncovered by additional people and possibly exploited against unsuspecting targets.

Similarly, Heartbleed lay dormant for years before it was independently discovered by both Codenomicon and Google.

This is not uncommon. It's almost like there's something in the air that makes a particular vulnerability shallow and easy to discover. This implies that NOBUS is not a useful concept.

Posted on February 25, 2016 at 1:14 PM • 29 Comments

Comments

Who? • February 25, 2016 1:30 PM

Claiming that a "NOBUS" vulnerability is something that can only be exploited by the NSA is an incredible exercise in self-reliance. On the other hand, I have no doubts the NSA staff is ethical -- this is the very reason Clapper lied to Congress a few years ago -- but I have no doubts either about their complete lack of morality. The NSA should not have unauthorized access to devices. Period.

MikeA • February 25, 2016 1:52 PM

In my (limited) experience, discovery of a flaw can be triggered by a change in environment. A bug in the Xinu networking code (option handling) was only discovered at my employer when a new Linux box started using the window-scale option. Similarly, a rogue "network printer adapter" was not discovered until a new WinNT box (with slightly different ARP behavior) started having issues talking to another (different) box. If a latent bug is triggered by the (even legit) behavior on the part of a new system, one would expect the rise of interest in that bug as the new system is deployed.

Cyborg2237 • February 25, 2016 1:57 PM

Threat modelling is a core component today for app sec. It has always been around. Often it is done ad hoc. Key component: "ease of discovery". Intel agencies are best served by finding very hard to find security issues. They won't go stale so quickly, and are much less likely to be already known by other governments and so on their covert firewalls that look for such things.
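As a hedged illustration of how "ease of discovery" enters that kind of ad hoc scoring, here is a DREAD-style risk score (the factor names follow the common DREAD rubric; the 0-10 scale, equal weighting, and example numbers are assumptions, not any agency's actual method):

```python
# Illustrative DREAD-style risk scoring. Factor names follow the common
# DREAD rubric; the 0-10 scale and equal weighting are assumptions.
from dataclasses import dataclass

@dataclass
class Threat:
    damage: int            # worst-case impact, 0-10
    reproducibility: int   # how reliably it triggers, 0-10
    exploitability: int    # effort/skill needed to exploit, 0-10
    affected_users: int    # scope of exposure, 0-10
    discoverability: int   # "ease of discovery" -- the key factor here

    def score(self) -> float:
        return (self.damage + self.reproducibility + self.exploitability
                + self.affected_users + self.discoverability) / 5

# A hard-to-find bug ages well as an offensive capability...
deep_bug = Threat(9, 8, 6, 9, discoverability=1)
# ...while a shallow one is likely already on someone else's radar.
shallow_bug = Threat(9, 8, 6, 9, discoverability=9)
print(deep_bug.score(), shallow_bug.score())
```

An agency optimizing for longevity of an exploit would, on this kind of rubric, prefer the low-discoverability bug even when the other factors are identical.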

(Why is it likely that governments would create covert firewalls with signatures for previously discovered but undisclosed security vulnerabilities? For one, if they suspect the zero day was used by another intelligence agency, then this could help them detect other attacks by the same organization. That is invaluable protection and information for them. They can then use the attacks against the attackers, just as we have so well documented was done in WWII.)

These principles are similar to what you see in the history of science, as well, frankly. Multiple inventors inventing the same thing, at the same time. Inventors rushing to patent before anyone else discovers what they found.

It is often clear much technology is inevitable. If the person who found it did not find it, someone else would have. If there was not Bill Gates or Steve Jobs, there would have been others, similar. And so on.

When I worked on finding security vulnerabilities in other people's products, I saw this happen. It was not especially common, but it happened. It was not always disclosed to the press.

No small part of these trends has to do with trends in the technology used to find security vulnerabilities. Also, researchers often race to investigate hot new technology -- not infrequently simultaneously and independently. Why? Because they are playing the same game. And there are varieties of market making and publicity making "hot" areas to look.

I would hesitate, strongly, to call "conspiracy" what is merely coincidence and inevitability. Most "conspiracies" are unconscious, such as these I have just pointed out: movements of people in similar patterns, due to behavioral commonalities. Shared behavioral outlooks. Such as using the same sorts of tools, looking in the same small ponds and caves, for the same "gold".

Likewise, it is often what is in front of our faces that we miss, where "we" means "collectively" -- exactly because of these shared commonalities.

Anura • February 25, 2016 2:55 PM

NOBUS works if the exploits require a secret key, and you assume that the US Intelligence Community is immune to leaks and spying. As long as we keep strict control over our secrets and we don't try and privatize operations of these intelligence agencies, and keep the personnel size to a very small group of easily vetted individuals...

Oh, wait...

Z • February 25, 2016 3:26 PM

Should government hoard or disclose vulnerabilities: every time this question is asked, I feel there's a third answer that nobody is mentioning. If, for some reason, the government was disallowed from keeping secret zero-days, then why would they be even looking for them?

The NSA isn't in the business of doing the job of the private sector by becoming some kind of free vulnerability research shop. Especially since the entire world would benefit from it. Why the hell would they be even interested in entering that market? And why would we want the NSA to put themselves in such an obvious conflict of interest? People would always doubt that they are REALLY publishing everything they find, and the NSA would always have internal pressure to keep one secret, just that one time, because of an upcoming foreign operation.

Would make a lot more sense to simply create a new organisation, completely segregated from US intelligence, to perform this free vulnerability research service. This organisation wouldn't have to be 100% financed by the US either.

Daniel • February 25, 2016 3:58 PM

There is a similar concept in the arts--the precise name for it I cannot think of now--but it amounts to "geniuses attract". "Creative clustering"??? In any event it has long been noted that there are particular periods and places in time when the arts tend to flourish (think Enlightenment, Renaissance) and other times when they don't.

So it would not surprise me that computer bugs follow a similar pattern.

Z • February 25, 2016 4:28 PM

@Matt: They do support business in their security; I was strictly talking about finding software vulnerabilities (zero-days), which is a subset of their activities.

goatscape • February 25, 2016 6:21 PM

NOBUS is not useful. That's the same kind of red herring as torture doesn't work. It evades the real issue.

US NOBUS policy maintains NSA can do what they want if they're the first to find a new way to sabotage critical international infrastructure. That's a perfect example of arbitrary privacy interference in breach of the ICCPR. It doesn't just violate conventional international law. It attacks rule of law as such. If anyone still thinks this form of illegal warfare is less serious than, say, NSA's role in indiscriminate extrajudicial killing, remember that NSA destroyed critical national communications resources in Syria during a humanitarian crisis, and they're counting on impunity for the deaths they caused. NSA's complicity in nuclear sabotage proliferated a threat to critical industrial control systems worldwide. To the chairborne rangers of NSA this is fun and games but outside their Heimat this is crime. Satisfaction for internationally wrongful acts may entail prosecution of specific criminal officials. As US influence and power erode, sacrificing a few bad apples is a painless way to shore up international standing. NSA saboteurs should be careful how they sneer at their victims.

Loraine • February 25, 2016 7:27 PM

Simultaneous discovery is well documented in science too. "Commonly cited examples of multiple independent discovery are the 17th-century independent formulation of calculus by Isaac Newton, Gottfried Wilhelm Leibniz and others[...]; the 18th-century discovery of oxygen by Carl Wilhelm Scheele, Joseph Priestley, Antoine Lavoisier and others; and the theory of evolution of species, independently advanced in the 19th century by Charles Darwin and Alfred Russel Wallace."

Dave • February 25, 2016 8:35 PM

Changing the topic slightly, it's not a problem in Gnu C, it's a problem in glibc, which means Ulrich Drepper. Having a serious flaw like this ignored for months by the maintainer isn't that unusual.

John • February 25, 2016 8:41 PM

"Bugs" allow plausible deniability; this seemingly puzzling behavior of not fixing "bugs" is nothing more than "leaving the door open".

Sounds crazy right? And I'm certain you believe "goto fail" and the other "bugs" were not intentional.

Why do it this way? Just imagine if it came out that Apple allowed the FBI/NSA unfettered access to your iPhone.

What would you do?
If it's a bug (and real ones exist) then it's not Apple's fault.

So Apple's "encryption" battle (now joined by M$, Google, Facebook) is just theater, it's compliance with government while maintaining public image and financial position.

Is this "conspiracy" too much for you to believe in?
Then you better get started reading EVERYTHING Snowden released that you can get your hands on.

WTFU.

WhiskersInMenlo • February 26, 2016 12:02 AM

Perspective is important.
Today software is global, i.e., your flaws are the same as your opponent's.

By way of example, Purple was a Japanese code. But what if both the US Navy and the Japanese Navy had used the same code? In the hands of the NSA looking out, there is power; in the hands of an opponent looking in at you, there is a vulnerability.

This global perspective is important, and obviously ignored or misunderstood by the FBI and DOJ in the Apple writ involving Farook's phone.

The DHS and DOD should ponder what the other hand is asking.
I suspect the narrow view of goals is not in our national interest.

Parallel • February 26, 2016 2:47 AM

@Bruce: "This implies that NOBUS is not a useful concept."

Your "implies" makes hidden assumptions. It can be wrong under the following assumption:

Assume that, once the NSA detects that it is no longer the only one exploiting a 0-day, it modifies some computer (DNS registration for getaddrinfo, a rogue sshd for Heartbleed) to trigger that bug at several corporations (Codenomicon, Google, Red Hat...).

Mark • February 26, 2016 3:56 AM

The role of government is to protect and serve its people. The fact that governments around the world -- especially the USA government -- hoard vulnerabilities makes us all more vulnerable to attacks. If they disclosed the vulnerabilities that they find to vendors (and, of course, to the public), they would make us all safer.

Remember that most intelligence agencies -- the NSA, GCHQ, etc. -- have dual (read: contradicting) missions: attack and defense, as Bruce has written.

It is totally unreasonable that governments hoard these vulnerabilities. They're used to further political and economic greed, whereas they should be used to help protect us. Think all this surveillance and spying is about national security?

Bullshit.

It's about the elite maintaining their power. It's about economic benefit by spying on foreign companies. It's about maintaining political power by knowing what other countries' leaders are doing/thinking. It's about implementing and maintaining the police states in which many of us live without us even knowing.

Clive Robinson • February 26, 2016 4:35 AM

@ WhiskersInMenlo,

But what if both the US navy and the Japanese navy used the same code.

It's funny you should say that, stranger things have happened...

The British and Germans did use the same basic system (Typex / Enigma respectively) in all but name, casing and key wiring. The US also used a version of Typex as well.

In many of the Bletchley Park photographs you see WRENs etc. using Typex machines. These had just been rewired and given a minor mechanical mod to behave like the Enigma machines for message decoding, after key recovery by the bombes or Zygalski sheets etc.

Clive Robinson • February 26, 2016 5:10 AM

@ Bruce,

This "idea whose time has come" issue has been happening for some time.

Some people may know of Allen Dulles's "Single Bullet Theory" that predated the JFK assassination (and kind of got written out of history by the CIA afterwards).

His theory was "A single well placed bullet can change the course of history".

If asked to explain, he would mention the assassination of Archduke Ferdinand, which immediately preceded the hostilities that gave rise to WWI, then ask about the bullet that just missed Hitler in a bierkeller: would WWII have happened if Hitler had been killed?

As any half decent 20th-century historian will tell you, the theory is both bad history and bad science, arguing from macro effect to a chosen micro cause.

The simple fact is there are thousands of micro causes that precede macro events such as world wars. As with all cascade events, it's not which snowflake starts an avalanche; it's the weight of all the flakes that precede it that counts as well.

It's the same with technology. The electrical relay was invented in 1848, and George Boole had come up with his logic. Thus the first electrical computer could have happened in the mid 19th century. But it was not until just before WWII that Konrad Zuse got his relay-based computer working. Around the same time Church and Turing came up with the same idea, which they expressed differently. The rapid development of "radio valves" -- tubes in the US -- gave Tommy Flowers the opportunity to prove that a relay switching ten times a second could be replaced with a couple of triode valves that could switch half a million times faster, which gave rise to the first practical electronic state machines. Whilst in the US they had hung on to relay/switch-bar technology to make a bombe five times faster than the Turing / Welchman bombe.

Ideas do come of age. However, as I noted on the Friday squid page a few days ago, somebody like Google, in a position to see people's searches, can draw a lot of deductions just from the queries they make. Because an idea takes a while to crack out of its embryonic shell, just seeing a few similar eggs will give others the same idea. There can be one heck of a lot of wealth and kudos in being either first, or knowing whose shares to buy. Further, in the case of an agency like those of the Five Eyes, just knowing who to give a nod to can buy favours, and also be worth thousands of times the cost in terms of R&D costs saved and the effect that has on the national economy, which is the fundamental of national security.

EGOTISTICALGIRAFFE • February 26, 2016 4:48 PM

Z said:

> Would make a lot more sense to simply create a new organisation, completely segregated from US intelligence, to perform this free vulnerability research service. This organisation wouldn't have to be 100% financed by the US either.

This would never work. The SIGINT side will always have dominion over the COMPSEC side of things. Even if they become separate agencies with separate leadership, they are still controlled by Congress. As long as that's the case, you can bet that the SIGINT people will have control over what vulnerabilities the COMPSEC people are allowed to publish. Besides, we already have such an agency, it's called CERT. But you can bet CERT doesn't get the funding the NSA does.

But you make a good point in your first paragraph. If the NSA didn't need exploits for offensive capabilities, they wouldn't bother looking for them in the first place. If NSA were purely a defensive agency, their efficacy would be limited only by budget and personnel. But do you think our politicians are going to allocate billions a year just to look for vulnerabilities in software? Nope. It isn't flashy enough. It doesn't "do" anything. It doesn't shut down centrifuges in Iran. It doesn't provide our politicians talking points on Fox News and doesn't get them reelected.

Humans are bad at planning ahead in the long term. We often don't take action until it is almost absolutely necessary and the world starts crumbling. There have always been Paul Reveres out there, but they are in the minority and usually ignored. Unless we witness a true "cyber Pearl Harbor" (God I hate that term) our politicians are not going to allocate much money for cyber defense (except for their own classified systems, of course). Sure, they will put on security theater with the "Cyber Operations Command" or whatever it's called, but such an organization will never get the resources or personnel the SIGINT or TAO people get. Even a well-funded "Cyber Command" will only perform EBUS duties anyway (protect our computers from "Everyone But Us").

These intelligence agencies are never going to reveal all they know and are always going to keep a trump card in their pocket. It's what gives them and our politicians power.

Skeptical • February 26, 2016 5:41 PM


I wonder though whether this reasoning neglects an assessment of the sensitivity of a tactical or strategic advantage to:

1 - the particular rate at which both knowledge of a vulnerability and the comprehensive capability (technological and institutional) to exploit that vulnerability spreads;

2 - the time differences between each actor's acquisition of that knowledge (whether by original research or by the work of others);

3 - most importantly, the overall context (the particular technological, political, social, and economic configurations of each actor and of the overall system they together compose).

Obviously, NOBUS isn't a static state; it's a temporary state. And this is the case with nearly all advantages.

But - and this is somewhat oblique to the point I think that the post is driving at - perhaps NOBUS should not be understood to refer narrowly to knowledge of a particular vulnerability or an associated exploit, but rather to refer to the capability to exploit that vulnerability in a particular context such as to produce an unmatched tactical or strategic gain from it - regardless of whether other actors are aware of the vulnerability, or even also possess the capability to exploit it.

Let me lay out a quick, highly abstract scenario to illustrate.

Suppose actors Blue, Red, and Orange all discover vulnerability V_0 at time t_0. No actor is aware that any other actor has made this discovery.

Actor Blue has the assets (broadly encompassing everything from organizational expertise and processes to relevant deployed equipment to relationships with entities useful to the exploitation of V_0) to most rapidly exploit V_0 and to do so in manner that furthers its strategy in domains other than cyber to a greater extent than does Red or Orange.

Actors Red and Orange will both be able to exploit V_0 at a certain point, but the extent and effect of the exploitation may be more limited due to other factors. Nonetheless, viewed in an isolated perspective from Red's or Orange's position, exploitation of V_0 appears to produce gains for them.

Now, disclosure also does more than tell others about the vulnerability; it tells others that the disclosing party knows, and knew, about the vulnerability.

Even if Blue knows that Red and Orange are likely aware of V_0, Blue may wish to conceal its own knowledge of V_0 with the goal of encouraging Red and Orange to avoid taking steps that would advertise their respective awarenesses of V_0 - thereby, perhaps, leaving Red and Orange susceptible in a way that preserves a net advantage for Blue.

Moreover - because of the different capabilities of Blue, Red, and Orange, in some cases Blue may have capabilities that render Red's or Orange's exploitation of V_0 into a source of opportunity for Blue, whether in the cyber or in other domains.

To put this somewhat scattered musing more succinctly, NOBUS may not refer to merely knowledge of a particular vulnerability or to possession of a particular exploit, but may actually refer to an entire framework in which V_0 as undisclosed confers appreciably greater advantage to Blue.

Vulnerabilities also might be exploitable in an indirect manner that, in a sense, could qualify for NOBUS status. Let me give one extraordinarily speculative example for illustration. Let's say V_0 could be exploited to grant access to a particular class of systems ranging from certain critical infrastructure to particular segmented government/contractor defense projects networks.

Let's say the line of attack on these systems runs by necessity across your sensors. Perhaps you choose to allow V_0 to remain in place for a limited time. The cost will be the compromise of certain bona fide defense projects and plans, and the ability of a highly sophisticated actor to mount, temporarily, CNA operations against certain dams and power plants.

However, you include among those defense projects a program developing a vital piece of software that enables a 5th generation fighter aircraft to fly predictably and for its systems to interact in a predictable, safe, and effective manner. Another party has already stolen some of the hardware plans, and has clearly undertaken a major effort to copy the weapons platform entirely.

And this vital piece of software is both magnificently complex and yet marvelously susceptible to an abstracted understanding. Once one grasps the abstractions, which are clearly documented and referenced frequently, one acquires the sense that one understands the software. Sure, one may not have gone line by line through the code, looked at the runtime in granular detail, etc., but one loads it into the hardware and it works; one runs the tests already documented in the program, which seem very comprehensive, and they pass. One puts it into a prototype - and with a bit of learning, it works under all testable conditions at your disposal.

Should the other party steal it, this vital software would form the keystone in an enormously expensive military platform to which he has invested very substantial resources and on which certain key elements of his strategy for certain scenarios will rely.

However, under conditions that would only arise in case of conflict, or in any case at the initiative of yourself, that system will depart from expected operations with perhaps noticeably catastrophic effects, or perhaps more subtle (but quite unwanted from the perspective of the adversary) effects.

And this is because, being genuinely several steps ahead in this field, and having observed the theft of hardware associated with the project, and knowing the resources that the other party is pouring into its development efforts, you fully anticipate a large magnitude of resources to be aimed at obtaining other critical, but highly complex, components to the platform. And you realize that there is an opportunity here to waste vast amounts of the other party's resources while compromising one of his key projects - and so you develop parallel critical components that are well-guarded, but are intended to be stolen.

And so - you stay quiet about V_0 - the other party scrambles to exploit it, knowing what it might be able to access using it. And the other party is intelligent about its use of V_0 - it does not chance discovery of its knowledge by utilizing it for lower value operations. Finally, the flag goes up: V_0 has been exploited. You lose some unaltered projects; you temporarily exposed some critical infrastructure to attack, but your deterrent to such attacks is credible and effective; and you've inserted weaknesses into a key foreign military program that will likely remain in place for some time. When discovered, that program will not only need to undertake years of auditing and additional research, but the integrity of other programs that incorporate stolen elements will be called into doubt as well. This weakens the other party's strategies, and sets back their development years. The advantage may be temporary - but that is always the case.

This type of effort is - in the same sense in which one might exploit an opponent's vector of force in judo - an exploitation of V_0. And because it results from a likely unique set of factors in a given time-frame, the exploitation here is of a NOBUS variety.

In any event, it's just an example, but one would imagine that in considering the value of NOT disclosing a vulnerability, there would sometimes be advantages unique to the US (or to any party) due to the larger context in which the vulnerabilities are discovered and the way in which those vulnerabilities, undisclosed, can raise the probability of the creation of further advantages or opportunities.

Obviously, this is all entirely speculative. However, viewed from my very limited perspective, the general considerations above, and the general form of net assessments incorporating those considerations, along with the means by which potential adversaries might be led astray down expensive pathways that terminate upon unfavorable ground, seem at least within the realm of reasonable possibility.

Wael • February 26, 2016 6:58 PM

@Skeptical,

Suppose actors Blue, Red, and Orange all discover vulnerability V_0 at time t_0. No actor is aware that any other actor has made this discovery.

You're describing three mutually distrusting entities. You should have also narrated the same story about Red & V_1 as well as Orange & V_2. Then it may be modeled as a multi-dimensional prisoner's dilemma problem.
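A minimal sketch of the two-player version of that dilemma (all payoff numbers are invented for illustration, mapping "disclose" to cooperate and "hoard" to defect):

```python
# Two-agency "disclose vs hoard" game in normal form. Payoff numbers are
# invented purely for illustration: each agency holds a zero-day in
# software both sides run; hoarding keeps a weapon but leaves one exposed.

PAYOFFS = {
    # (A's move, B's move): (A's payoff, B's payoff)
    ("disclose", "disclose"): (3, 3),  # both patch; everyone is safer
    ("disclose", "hoard"):    (0, 5),  # you patch, your rival keeps a weapon
    ("hoard",    "disclose"): (5, 0),
    ("hoard",    "hoard"):    (1, 1),  # mutual exposure, mutual offense
}

def best_response(opponent_move: str) -> str:
    """What should agency A play, given B's move?"""
    return max(("disclose", "hoard"),
               key=lambda mine: PAYOFFS[(mine, opponent_move)][0])

# Hoarding is the dominant strategy either way -- that is the dilemma.
print(best_response("disclose"), best_response("hoard"))
```

With these toy numbers, hoarding dominates for each agency even though mutual disclosure pays both better; a three-player version would add Red's V_1 and Orange's V_2 as further dimensions of the same structure.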

@Greg London,

Are you still there? See the need for this request: What I would like to see is an extension of this device to more than two players :)

WhiskersInMenlo • February 26, 2016 7:53 PM

@Clive Robinson
Thanks...
I hope that I made it clear that a vulnerability in this global context cuts both ways.
Troubling, especially since those outside our laws are all but free to abuse a flaw.

The defense adversary model and mindset where one could build a better mousetrap does not apply.

Engineering a weakness is foolish.

I am listening to this interview as I type.
http://www.theverge.com/2016/2/24/11110802/apple-tim-cook-full-interview-fbi-iphone-encryption

Tim remarks that it is a cancer.
What if this was a court order mandating that a biological agent be engineered and delivered to one living animal? If that agent was anything beyond benign, or not understood, should that company decline? If that biological agent was compelled by a court order, could the court keep it under control?

https://www.fbi.gov/about-us/history/famous-cases/anthrax-amerithrax

If only once perhaps but that is not ensured by law as shown by history at Apple.
The biological analogy is telling; WP reminds me:
"Based on the testing, the FBI concluded that flask RMR-1029 was the parent material of the anthrax spore powder. Ivins had sole control over that flask."

name.withheld.for.obvious.reasons • February 26, 2016 10:20 PM

@ Skeptical

Back at it again, I see. What I don't understand is why the first-order eigenvector remains at t_0 (at least the way I read it). Your stated problem space is not bounded; V_0 should introduce a LIM function, a bounding operator, or a vectored n-of-m. Lastly, the use of functional analysis could improve your hypothetical statement, though I believe even a simple "threat tree" enumeration can suffice for this overly simplified pronouncement.

But I could go on and on; where is the statistical modeling of the V_0 space? Size, complexity, implementation, knowledge base, depth of expertise, resource availability, stimulation/simulation opportunity/modeling, or the level of acquisition, management, process, and quality controls... ad nauseam. From my garage this is a simple problem to state or observe -- but from my black-project lab it is far less coterminous with anything stated here.

Cyborg2237 • February 26, 2016 11:01 PM

@EGOTISTICAL GIRAFFE, Z, others

This would never work. The SIGINT side will always have dominion over the COMPSEC side of things. Even if they become separate agencies with separate leadership, they are still controlled by Congress. As long as that's the case, you can bet that the SIGINT people will have control over what vulnerabilities the COMPSEC people are allowed to publish. Besides, we already have such an agency, it's called CERT. But you can bet CERT doesn't get the funding the NSA does.

The NSA has long been mandated to perform security code reviews on any code which touches DoD systems. So, that is a very large portion of American software (and probably no small amount of foreign software). This is not strange; China and Russia both have similar federal mandates, which is why they both demanded to see Microsoft source code a number of years ago.

How this is not such widespread knowledge, I do not know. It is not classified.

(Probably because the details are simply boring. Much more exciting to read about political fights over backdoors in major handsets than to read about the daily humdrum of security bug finding.)

That is a lot of security vulnerabilities they have found, and demanded companies fix.

Why would companies fix the security vulnerabilities in a very timely manner? Because federal sales of their products hinge on them. Those are big contracts.

That's it, that's the real world.

Humans are bad at planning ahead in the long-term.

The military is not so bad at planning. That is a key part of the training of its leaders. They routinely keep and work out battle plans for wars and similar incidents which "could happen". Exactly under this manner of constraint is how the ARPANET came about. Not that you do not have a good point; it is a good one. People tend to be very bad, generally, at proactive defense. Defense tends to be, instead, reactive.

These intelligence agencies are never going to reveal all they know and are always going to keep a trump card in their pocket. It's what gives them and our politicians power.

That would be true.

Anyway, as just about anyone in the field, I have strong criticism, but primarily centered around concerns that domestic surveillance does not get out of hand. Probably like a lot of people in the industry. We understand and agree with spying. We may want more bounds. But, domestic spying we can be very cautious and cynical about.

I would posit, however, in this "threat economy" there can be a very real danger of bias on part of defense. Because they have their jobs because of offense.

Doctors make their money from sickness.

Look at Blockbuster. Booming, nationwide store, gone before anyone knew it. Or Myspace. Replaced with better technology. Made obsolete.

A secondary issue which has come up lately in these regards is the combining of the defensive and offensive divisions of the NSA. That could cause problems.

@Skeptical

Vulnerabilities are how intelligence gets into systems. It is inevitable that they will be used for this.

It is also inevitable that vulnerabilities will be used in covert defensive programs for disinformation strategies and counterintelligence purposes. Sophisticated counterintelligence purposes.


@John

"Bugs" allow plausible deniability; this seemingly puzzling behavior of not fixing "bugs" is nothing more than "leaving the door open".
Sounds crazy right?


If it is crazy for someone in the business of being a thief to learn the vulnerabilities of locks in order to open them without keys, then it would be crazy.

What you are describing is simply business.

Why, after thousands of years, can locks still be picked? Why produce locks which can be picked?

Software security vulnerabilities tend to be products of error. It is logical that any security vulnerability could be intentional, as opposed to unintentional, however it is also logical that most security vulnerabilities would be unintentional.

Proving that logic does require considerable research.

@Clive Robinson

While a well written, highly brilliant exegesis...

Ideas do come of age; however, as I noted on the Friday squid page a few days ago, somebody like Google, in a position to see people's searches, can draw a lot of deductions just from the queries they make. Because an idea takes a while to crack out of its embryonic shell, just seeing a few similar eggs will give others the same idea. There can be one heck of a lot of wealth and kudos in being either first or knowing whose shares to buy. Further, in the case of an agency like those of the Five Eyes, just knowing who to give a nod to can buy favours and be worth thousands of times the cost in terms of R&D costs saved, and the effect that has on the national economy, which is the fundamental of national security.

How often is a major scientific advancement scoffed at? Often. How often is brilliance which is sufficiently advanced scoffed at, ignored, or not even seen for what it is? Brilliance mistaken for madness. If we were to try to explain basic concepts of today's world to people a hundred, five hundred, a thousand years ago, there would be many accusations of madness.

Because madness and magic are exactly what the future looks like.

Otherwise, we would already be there, and know which quantum leaps to take next.

So, on this point, while that may be a danger in highly competitive research markets where the technology is advancing at a fast pace and many might make it to the same goal post... it is not relevant for cases that are truly world-changing, where there are but lonely runners, as if in the desert, making their way to the finish line. No audience. No competitors.

Not to mention how often, in history, significant advancements have been missed or forgotten, only to be rediscovered decades, centuries, even millennia later.

NikoFebruary 27, 2016 5:58 PM

The argument is weak. Let S = the number of zero-day exploits independently discovered by at least two countries, and T = the total number of exploits. S/T gives some quantitative measure of how correlated zero-day discovery is. The problem is that we have no idea what S is and no idea what T is, so the ratio could be almost anything.
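Niko's point can be made concrete with a toy calculation. Every count below is invented purely for illustration -- as the comment says, nobody has real estimates for S or T:

```python
# Toy illustration: the rediscovery rate S/T is almost unconstrained
# when both S (zero-days independently found by two or more parties)
# and T (total zero-days) are unknown. All numbers here are made up.
guesses_S = (5, 50, 500)
guesses_T = (1_000, 10_000, 100_000)

for s in guesses_S:
    for t in guesses_T:
        print(f"S={s:>3}, T={t:>7} -> S/T = {s / t:.4%}")
```

Even across these narrow made-up ranges, the ratio runs from 0.005% (S=5, T=100,000) to 50% (S=500, T=1,000) -- four orders of magnitude, which is exactly the objection.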

SkepticalFebruary 27, 2016 7:28 PM


@name.withheld: You lost me at eigen, I'm afraid, but I very dimly sense that, were my intention to have formally described a complete set of strategies for a well-defined game, at least some of those points could be important.

But my intention was much more limited: to simply suggest that the point of NOBUS may lie in the connection between a capability and the full strategic context (military, economic, social, political, etc). Apologies if some of the language I used suggested anything beyond that.

As to the hypothetical I provided, the point was simply that not all means of exploiting a vulnerability need be direct - and that therefore the importance of not disclosing one's knowledge of a vulnerability can be difficult to assess unless one is acquainted with the full set of relevant facts.



Specialist with AdvantechFebruary 29, 2016 8:17 AM

Governments should be expected to disclose this information. Yes, it can give them an advantage when fighting hackers; however, it still leaves everyone else vulnerable. I think the issue lies in how they disclose the information and who has earned or deserves the right to be informed of applicable vulnerabilities.

Peter GalbavyFebruary 29, 2016 9:57 AM

Perhaps some simultaneous discovery of bugs in code is more benign, and down to new releases of testing and code-coverage tools and the like?

ChrisMarch 15, 2016 6:15 AM

In my experience, bugs don't get fixed because nobody who cares (if there is anyone!) can be reached by the person finding the bug. It doesn't help when places like CERT have a policy of not reporting vulnerabilities to vendors. (They told me this themselves, in writing, last week.)

There's always the other problem... when a vulnerability is "introduced," was that a mistake? Billions of dollars are given to the world's smartest people to ensure they can "reveal our secrets." You'd be a pretty annoyed taxpayer if they weren't spending your money on engineering many of those "mistakes."


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Resilient, an IBM Company.