Disclosing vs. Hoarding Vulnerabilities

There's a debate going on about whether the US government -- specifically, the NSA and United States Cyber Command -- should stockpile Internet vulnerabilities or disclose and fix them. It's a complicated problem, and one that starkly illustrates the difficulty of separating attack and defense in cyberspace.

A software vulnerability is a programming mistake that allows an adversary access to a system. Heartbleed is a recent example, but hundreds are discovered every year.
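Heartbleed itself was a bounds-checking mistake: the server echoed back as many bytes as the request claimed to contain, not as many as it actually contained. A toy Python sketch of the pattern (purely illustrative; this is not OpenSSL's code):

```python
# Toy model of a Heartbleed-style over-read. Illustrative only.
SECRET = b"-----PRIVATE KEY-----"

def heartbeat(payload: bytes, claimed_len: int) -> bytes:
    # Simulated process memory: the received payload happens to sit
    # next to secret data, as it might on a real server's heap.
    memory = payload + SECRET
    # BUG: echo back claimed_len bytes, trusting the attacker-supplied
    # length instead of checking len(payload).
    return memory[:claimed_len]

# Honest request: the length matches the payload.
print(heartbeat(b"bird", 4))   # b'bird'
# Malicious request: an inflated length leaks adjacent memory.
print(heartbeat(b"bird", 25))  # b'bird-----PRIVATE KEY-----'
```

The fix is a single length check; the point is how small the mistake that creates a vulnerability can be.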

Unpublished vulnerabilities are called "zero-day" vulnerabilities, and they're very valuable because no one is protected. Someone with one of those can attack systems worldwide with impunity.

When someone discovers one, he can either use it for defense or for offense. Defense means alerting the vendor and getting it patched. Lots of vulnerabilities are discovered by the vendors themselves and patched without any fanfare. Others are discovered by researchers and hackers. A patch doesn't make the vulnerability go away, but most users protect themselves by patching their systems regularly.

Offense means using the vulnerability to attack others. This is the quintessential zero-day, because the vendor doesn't even know the vulnerability exists until it starts being used by criminals or hackers. Eventually the affected software's vendor finds out -- the timing depends on how extensively the vulnerability is used -- and issues a patch to close the vulnerability.

If an offensive military cyber unit -- or a cyber-weapons arms manufacturer -- discovers the vulnerability, it keeps it secret for use in delivering a cyber-weapon. If used stealthily, it might remain secret for a long time. If unused, it'll remain secret until someone else discovers it.

Discoverers can sell vulnerabilities. There's a rich market in zero-days for attack purposes -- both military/commercial and black markets. Some vendors offer bounties for vulnerabilities to incent defense, but the amounts are much lower.

The NSA can play either defense or offense. It can either alert the vendor and get a still-secret vulnerability fixed, or it can hold on to it and use it to eavesdrop on foreign computer systems. Both are important US policy goals, but the NSA has to choose which one to pursue. By fixing the vulnerability, it strengthens the security of the Internet against all attackers: other countries, criminals, hackers. By leaving the vulnerability open, it is better able to attack others on the Internet. But each use runs the risk of the target government learning of, and using for itself, the vulnerability -- or of the vulnerability becoming public and criminals starting to use it.

There is no way to simultaneously defend US networks while leaving foreign networks open to attack. Everyone uses the same software, so fixing us means fixing them, and leaving them vulnerable means leaving us vulnerable. As Harvard Law Professor Jack Goldsmith wrote, "every offensive weapon is a (potential) chink in our defense -- and vice versa."

To make matters even more difficult, there is an arms race going on in cyberspace. The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to. But if we patch all the vulnerabilities we find, we won't have any cyber-weapons to use against other countries.

Many people have weighed in on this debate. The president's Review Group on Intelligence and Communications Technologies, convened post-Snowden, concluded (recommendation 30) that vulnerabilities should be hoarded only in rare instances and for short times. Cory Doctorow calls it a public health problem. I have said similar things. Dan Geer recommends that the US government corner the vulnerabilities market and fix them all. Both the FBI and the intelligence agencies claim that this amounts to unilateral disarmament.

It seems like an impossible puzzle, but the answer hinges on how vulnerabilities are distributed in software.

If vulnerabilities are sparse, then it's obvious that every vulnerability we find and fix improves security. We render a vulnerability unusable, even if the Chinese government already knows about it. We make it impossible for criminals to find and use it. We improve the general security of our software, because we can find and fix most of the vulnerabilities.

If vulnerabilities are plentiful -- and this seems to be true -- the ones the US finds and the ones the Chinese find will largely be different. This means that patching the vulnerabilities we find won't make it appreciably harder for criminals to find the next one. We don't really improve general software security by disclosing and patching unknown vulnerabilities, because the percentage we find and fix is small compared to the total number that are out there.

But while vulnerabilities are plentiful, they're not uniformly distributed. There are easier-to-find ones, and harder-to-find ones. Tools that automatically find and fix entire classes of vulnerabilities, and coding practices that eliminate many easy-to-find ones, greatly improve software security. And when a person finds a vulnerability, it is likely that another person soon will, or recently has, found the same vulnerability. Heartbleed, for example, remained undiscovered for two years, and then two independent researchers discovered it within two days of each other. This is why it is important for the government to err on the side of disclosing and fixing.
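The sparse-vs-plentiful argument can be made concrete with a small simulation. This Monte Carlo sketch uses made-up numbers (10,000 bugs, each of two independent teams finds 100); only the qualitative contrast between uniform and skewed findability matters:

```python
# Monte Carlo sketch of rediscovery: two independent teams each find
# 100 of 10,000 vulnerabilities. All parameters are illustrative.
import itertools
import random

def find_vulns(cum_weights, n, k, rng):
    """One team probes until it has found k distinct vulns; bugs with
    more weight (easier to find) turn up more often."""
    ids = range(n)
    found = set()
    while len(found) < k:
        found.add(rng.choices(ids, cum_weights=cum_weights, k=1)[0])
    return found

def mean_overlap(weights, k=100, trials=300, seed=0):
    """Average number of vulns found by BOTH of two independent teams."""
    rng = random.Random(seed)
    cw = list(itertools.accumulate(weights))
    n = len(weights)
    return sum(
        len(find_vulns(cw, n, k, rng) & find_vulns(cw, n, k, rng))
        for _ in range(trials)
    ) / trials

N = 10_000
uniform = [1.0] * N                       # every bug equally findable
skewed = [1 / (i + 1) for i in range(N)]  # a few bugs are far easier

print(mean_overlap(uniform))  # ≈ 1: two teams rarely find the same bug
print(mean_overlap(skewed))   # much higher: both teams find the easy bugs
```

With uniform findability, patching what we find barely dents what others can find; with skewed findability, the bugs we find are disproportionately the ones others will find too -- which is the argument for erring toward disclosure.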

The NSA, and by extension US Cyber Command, tries its best to play both ends of this game. Former NSA Director Michael Hayden talks about NOBUS, "nobody but us." The NSA has a classified process to determine what it should do about vulnerabilities, disclosing and closing most of the ones it finds, but holding back some -- we don't know how many -- vulnerabilities that "nobody but us" could find for attack purposes.

This approach seems to be the appropriate general framework, but the devil is in the details. Many of us in the security field don't know how to make NOBUS decisions, and the recent White House clarification posed more questions than it answered.

Who makes these decisions, and how? How often are they reviewed? Does this review process happen inside the Department of Defense, or is it broader? Surely there needs to be a technical review of each vulnerability, but there should also be policy reviews regarding the sorts of vulnerabilities we are hoarding. Do we hold these vulnerabilities until someone else finds them, or only for a short period of time? How many do we stockpile? The US/Israeli cyberweapon Stuxnet used four zero-day vulnerabilities. Burning four on a single military operation implies that we are not hoarding a small number, but more like 100 or more.

There's one more interesting wrinkle. Cyber-weapons are a combination of a payload -- the damage the weapon does -- and a delivery mechanism: the vulnerability used to get the payload into the enemy network. Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won't know about. But if it doesn't, it's deliberately leaving the US vulnerable to cyber-attack. Maybe someday we can get to the point where we can patch vulnerabilities faster than the enemy can use them in an attack, but we're nowhere near that point today.

The implications of US policy can be felt on a variety of levels. The NSA's actions have resulted in a widespread mistrust of the security of US Internet products and services, greatly affecting American business. If we show that we're putting security ahead of surveillance, we can begin to restore that trust. And by making the decision process much more public than it is today, we can demonstrate both our trustworthiness and the value of open government.

An unpatched vulnerability puts everyone at risk, but not to the same degree. The US and other Western countries are highly vulnerable, because of our critical electronic infrastructure, intellectual property, and personal wealth. Countries like China and Russia are less vulnerable -- North Korea much less -- so they have considerably less incentive to see vulnerabilities fixed. Fixing vulnerabilities isn't disarmament; it's making our own countries much safer. We also regain the moral authority to negotiate any broad international reductions in cyber-weapons; and we can decide not to use them even if others do.

Regardless of our policy towards hoarding vulnerabilities, the most important thing we can do is patch vulnerabilities quickly once they are disclosed. And that's what companies are doing, even without any government involvement, because so many vulnerabilities are discovered by criminals.

We also need more research in automatically finding and fixing vulnerabilities, and in building secure and resilient software in the first place. Research over the last decade or so has resulted in software vendors being able to find and close entire classes of vulnerabilities. Although there are many cases of these security analysis tools not being used, all of our security is improved when they are. That alone is a good reason to continue disclosing vulnerability details, and something the NSA can do to vastly improve the security of the Internet worldwide. Here again, though, the NSA would have to make its automatic vulnerability-finding tools available for defense and not attack.
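Tools that close entire classes of vulnerabilities work by matching a dangerous pattern everywhere it occurs, rather than hunting individual bugs. A minimal sketch of the idea in Python (real static analyzers are far more sophisticated; the scanned source below is made up):

```python
# Minimal class-based bug finder: flag every call that passes
# shell=True, a pattern that invites command injection when the
# command string contains runtime data. Illustrative only.
import ast

SOURCE = '''
import subprocess

def ping(host):
    subprocess.run("ping -c1 " + host, shell=True)   # vulnerable pattern

def safe_ping(host):
    subprocess.run(["ping", "-c1", host])            # fine
'''

def find_shell_true(source: str) -> list[int]:
    """Return the line numbers of calls passing shell=True."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            for kw in node.keywords:
                if (kw.arg == "shell"
                        and isinstance(kw.value, ast.Constant)
                        and kw.value.value is True):
                    hits.append(node.lineno)
    return hits

print(find_shell_true(SOURCE))  # flags exactly one call site
```

Once such a tool exists, disclosing one instance of the pattern effectively discloses the whole class -- which is why these tools do the most good on the defensive side.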

In today's cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time. The world's militaries are investing more money in finding vulnerabilities than the commercial world is investing in fixing them. The vulnerabilities they discover affect the security of us all. No matter what cybercriminals do, no matter what other countries do, we in the US need to err on the side of security and fix almost all the vulnerabilities we find. But not all, yet.

This essay previously appeared on TheAtlantic.com.

Posted on May 22, 2014 at 6:15 AM • 44 Comments

Comments

Skeptical • May 22, 2014 8:10 AM


Strong, balanced essay.

One half-baked thought tangentially arising from the imbalance of vulnerabilities point (that a North Korea is less vulnerable than a United States because the US is more reliant upon computer networks and information systems):

If other nations do start (or increase the rate at which they already are) adopting systems unique to themselves, e.g. an OS developed by the PRC for PRC government computers, then the calculus changes. Obviously if NSA finds a vulnerability in the special flavor of OS used by the PRC, or in a particular firmware special to Russia, the offense/defense dilemma is relaxed somewhat (assuming the vulnerability to be unique as well). The US would be able to devote more focused effort on securing its own systems while simultaneously seeking and developing vulnerabilities in those of other nations.

Ironically, then, if other nations, driven by domestic lobbying groups and the Snowden leaks, do adopt idiosyncratic systems in the name of security, it's possible that the end result would be greater security, from both an offensive and defensive perspective, for the US and allied nations.

KnottWhittingley • May 22, 2014 9:39 AM

Bruce:

Imagine that China knows about a vulnerability and is using it in a still-unfired cyber-weapon, and that the NSA learns about it through espionage. Should the NSA disclose and patch the vulnerability, or should it use it itself for attack? If it discloses, then China could find a replacement vulnerability that the NSA won't know about it.

More importantly, if we disclosed this sort of thing by default, China would quickly clue in to what we had been spying on, and close the channels of information.

That is basic in spying---you usually can't exploit most of the information you gather, because doing so would reveal what you've got information about, compromising your sources and methods.

In war, planners must often make sacrifices to keep the enemy from knowing what they know---e.g., letting some defenseless convoys get attacked by U-boats, some assaults on beaches fail, etc.

Presumably, if we have good espionage about what vulnerabilities the Chinese are ready to exploit, the last thing we'd do is immediately patch those vulnerabilities.

The Chinese know that vulnerabilities are numerous, and that most of the ones they find won't be found independently by others any time soon---and they know to be very suspicious if by some amazing coincidence they are.

Eric • May 22, 2014 9:52 AM

Isn't there a third option: Detect and defend?

If we are resigned to a situation where all or most internet traffic goes through the NSA, couldn't they in theory build algorithms that detect usage of known-to-them zero day exploits?

In this way, the NSA is much more likely to learn who the bad actors are and how they operate, which seems to be valuable intelligence. They could optionally shut down or inform the attacked of their vulnerability during or after the attack.
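A toy sketch of the signature-scanning idea Eric describes (Python; the exploit names and byte signatures are invented for illustration -- real IDSes such as Snort or Suricata use far richer rule languages):

```python
# Toy "detect and defend": flag traffic containing byte patterns of
# exploits the defender already knows about. Signatures are made up.

KNOWN_EXPLOIT_SIGNATURES = {
    "hypothetical-heap-overflow": b"\x90\x90\x90\x90\xcc",
    "hypothetical-heartbeat-leak": b"\x18\x03\x02\x00\x03",
}

def detect(packet: bytes) -> list[str]:
    """Return the names of known exploit signatures found in a packet."""
    return [name for name, sig in KNOWN_EXPLOIT_SIGNATURES.items()
            if sig in packet]

benign = b"GET /index.html HTTP/1.1"
attack = b"\x00" * 8 + b"\x90\x90\x90\x90\xcc" + b"\x00" * 8

print(detect(benign))  # []
print(detect(attack))  # ['hypothetical-heap-overflow']
```

Real systems match thousands of signatures efficiently with algorithms like Aho-Corasick rather than a linear scan per signature; the catch remains that the defender must already know the zero-day to write the signature.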

Codeedog • May 22, 2014 10:01 AM

A missed nuance to the disclosure debate:

Revealing vulnerabilities to software companies, and thereby fixing them, educates developers about good security practices. Say what you will about the need to ensure good security programming work before a product goes out, nothing teaches developers about secure programming like having to deal with a very public broken algorithm.

Having spent time working for a large s/w company, responsible for both post-release critical patching and pre-release security reviews, I found it quite valuable to induce humility in an otherwise egoistic know-it-all whose primary security knowledge came from logging into a computer. "No one will ever figure this out." "They can't decompile my code."

Otherwise quite competent people fail miserably on the security side because being good at computer security takes years of effort. Some of that requires either making mistakes or watching others make mistakes. Those mistakes are hidden until someone finds and reveals them. Of the population finding vulnerabilities (Government Agencies, White/Grey/Black Hats, Tool Vendors, Internal Teams), who reveals them to the s/w company?

It would be nice if our government, serving to protect us, helped out a bit and raised security expertise by reporting bugs to companies.

Codeedog • May 22, 2014 10:11 AM

One other thought:

Imagine a government agency tasked with protecting the citizenry from malfunctioning software akin to the NHTSA (Snort, ROFL, I know, right?). Perhaps its primary work would be to seek and notify about security vulnerabilities in COTS s/w, h/w and live web systems. It could establish standards for secure programming and fine companies who are out of compliance and put their customers at risk (privacy leaks, financial theft, identity theft).

Now, I've built enough s/w to know that the industry would scream murder and economic ruin were such an agency put in place. However, as Bruce outlined above, it's not in the Intelligence Agencies' best interest to have such an agency - the US government would prefer we are all vulnerable so they can better do their job protecting us.

I wonder, if there were no spy value in security vulnerabilities, whether the government would have long ago created NITSA (National Information Technology Security Agency).

Zygo • May 22, 2014 10:16 AM

Everyone uses the same software

...for now. It used to be the only way to be interoperable. Everybody, including the other guys, bought Intel and one of three[1] software stacks to run on it, and bought the new one a few years later because of requirements creep.

This could change. Heartbleed gave people a reason to consider using something that isn't the most popular software on the Internet. Snowden gave people a reason to consider using something that isn't American. Power constraints are starting to give people a reason to consider using something that isn't Intel.

The leading edge is still very important, but the gap between the leading and trailing edges is growing, and all of it is creating a lot of opportunities for diversity. That could mean that in the near future we will have more things that we can use for attack that are irrelevant to defense and vice versa.

Then again, it probably just means we'll be vulnerable to router firmware bugs from 2007 forever, because ruthless market pressure won't be able to kill them as collateral damage like they used to.

[1] or 4, or 2, depending on when you stop counting.

Andrew Yeomans • May 22, 2014 10:25 AM

If vulnerabilities are plentiful, then the NSA has a valid third choice - notify the vendor, while exploiting it before the patch comes out. With sufficient vulnerabilities, there will always be enough to be exploited.

Even for Open Source software, it will take a week or two before a patch comes out. Closed source software may take longer.

In many cases the vulnerability only needs to be exploited once, to drop non-vulnerable malware onto the device. Then no need to exploit it again. So the rapid turnover in vulnerabilities helps improve code quality, while letting the people with biggest budgets still keep on top of exploitation.

(Perhaps I ought to use an alias of Machiavelli!)

keiner • May 22, 2014 10:31 AM

"There is no way to simultaneously defend US networks while leaving foreign networks open to attack."

I doubt that; patches/updates stratified by IP region, language/time settings on the computer, or whatever, are more than likely.

Microsoft was in the boat with something like that for infecting Iranian nuclear plants (uranium centrifuges) and that was long, long ago...

NobodySpecial • May 22, 2014 10:46 AM

It's the same dilemma the NIH faces: should they release a new cure for a disease, or keep it secret so the disease can be weaponised in a future war?
What do they do?

Chris Abbott • May 22, 2014 11:14 AM

I'm of the belief that the vulnerabilities pose a bigger threat to our national security than terrorism. US companies and consumers have lost way more money to cyberattacks than terror attacks. That directly impacts our economy and threatens everyone.

Clive Robinson • May 22, 2014 12:14 PM

One of the failings of "armchair generals" is not understanding that the most important part of an attack is defending it. Thus an attack without suitable defence is a "suicide run".

The west is significantly more vunerable to attacks and suicide runs than the less "electronicaly developed" parts of the globe, and this should be the main point of consideration.

We should ensure we have strong defences to repell attackers not stock pile weapons to commit suicide with should we be attacked.

I know people will consider the MAD stratagie, but in all honesty the reasoning behind MAD was based on many assumptions that have since been found to be either false or seriously flawed, and thus appeared to be a form of FUD designed to increase military spending.

Thus are we falling into yet another cold war style stratagie in which a few enrich themselves needlessly from the majority.

I suspect we are whilst distant war used to bring economic advantages to nations, this is nolonges true. The greatest economic improvment can be seen in nations with the lowest percentage of GDP spent on offencive military forces. Which sugests that the best spending on resources would be to protect the wests infrastructure and assets with suitable defencive actions.

Contrary to many commonly held belifes much of the infrastructure and informaton assets held in the West has no need of connection to a world wide insecure network. The reason it is, is due to a short sighted race to the bottom by executives and the shareholders they supposadly represent.

Look at it this way, if you leave your house with all the doors and windows open and come home to find out you have been robbed, what is your insurance company going to say... Likewise what will all your friends think whilst saying how horrible it must be for you...

Now look at it this way you have a warehouse which you employ security guards to protect, how are you going to feel after it has been robbed when you find the security guards were not there protecting it because they were too busy stacking guns in the basment of their office or running around chasing suspicious looking people on the other side of town? Which is in effect what the NSA et al are doing...

Daniel • May 22, 2014 2:27 PM

I'm not sure balance is the best solution in this situation because it overlooks the /domestic/ problem. What scares me about zero days is the way that they can be used by one friendly segment of the population against another--what's to stop the NSA from spying on Congress with a zero day? Oh wait, it appears they already have. So a lot of trust is being placed in the NSA both in terms of determining what is revealed but also who it is used against.

I think it is a serious mistake to treat the NSA or the CIA as neutral government agencies struggling to do "what's right". History suggests that the spook agencies are themselves a political interest group who have no qualms about using the tools at their disposal to target those they perceive as their domestic enemies.

Mr. Pragma • May 22, 2014 5:40 PM

(@Bruce)

If we show that we're putting security ahead of surveillance, we can begin to restore that trust.

No.

You may regain the "trust" of usa-inclined people in colonies and "allies" (read: vassal states).
You may, however, not regain the trust of billions of people and of millions of companies.

Whatever positive steps the usa takes will be understood as temporary and/or official ("PR") steps.

The two "suicidal" sins were spying on "friends" and on usa's own citizens.

Mr. Pragma • May 22, 2014 6:26 PM

I'm wondering whether many of us are not singularly focussed. I'd like to propose some complementary perspectives.

The probably most important one first. *The* vulnerability in the system, particularly nsa, that led to massive, disastrous damage was not applying adequate security practices within itself, both concerning data and personnel.

This failure had consequences that are quite probably considerably more grave than many of the scenarios the usa tries to defend itself against in the "cyber" and espionage field.

While this sounds and is obvious I feel that point is important enough from (not only an Op) Sec perspective to be mentioned in that context.

Interestingly the weaknesses that led to that disaster for usa are quite similar to those in many companies and even households. *Obviously* the whole system security of nsa/fbi/similar is based on untenable and/or unsound grounds.

Cynically, while nsa and other agencies discuss defense, mitigation, and potential attacks on others, they still and again miss the mother of all their vulnerabilities, themselves and their structure, views, and approaches.

In effect nsa did not have any sensible level of self-protection in place and, even worse, they to a shockingly high degree relied on "the system working" and obscurity, i.e. both their defense and their attacks were based on "nobody knows" and "the system works" assumptions to a large degree.

But there is an even deeper-reaching vulnerability strongly indicated by their current proceedings, approaches, and statements. It is largely rooted in us-americans mixing up "usa" with "correct, good thing to do" and "correct, good way to act", i.e. a form of "patriotism" grossly confused with a near complete lack of proper reflection and self-reflection, along with the classical hubris of considering oneself to be the one party in charge and leading, along with "academic hubris" à la "But we still have the best cryptography and are light years ahead."

Looking closer one finds that nsa to a large degree wasn't guided by technical, operational, or mathematical wisdom but by superiority assumptions (a classical us weakness, and a profound one), by a bureaucracy and authority self-understanding, by vanity ("all others are weaklings compared to us so we are pretty safe"), by blunt stupidity, by legal standing and means, by the readiness to be ultimately rude, ignorant, and brutal, and some more.

But then, what's the effective security and capabilities of a secret agency that can be profoundly fu**ed by some youngster who, even after the fact, was considered and painted as an academic, military, and general loser and failure?

Connected to this mega-vulnerability which is next to impossible to cure in any reasonable amount of time are loads of other vulnerabilities which we can discuss another time and place.

Stuart • May 22, 2014 7:55 PM

While there might have been mention of this, what is the calculus (more correctly, the Bayesian priors) regarding the probability that a vulnerability is more likely to be used by common criminals vs. intelligence agencies?
While it may not be of much interest to spy-vs-spy to protect their own populace from criminal elements, it is of interest to the governments who use intelligence agencies that their populace and elite are protected from the common criminal.

bara bara • May 22, 2014 10:28 PM

Credit where credit is due, skeptical is making good sense today. As it happens, indigenous security is on the way. Russia's entry is far along - Astra linux, approved for Russian government use, making it an attractive choice for everyone, since Russia's threat model, the NSA, is identical to that of the general public worldwide. More importantly, tools exist to disseminate security to civil society as well as states: http://www.linuxfromscratch.org/hlfs/ , https://leap.se/en . NSA will of course shoehorn every security initiative into its nationalistic worldview and attack all reliable security and privacy as a threat. That's why we're going to blind it.

Independent • May 23, 2014 1:45 AM

Clive Robinson:

"Contrary to many commonly held belifes [sic] much of the infrastructure and informaton [sic] assets held in the West has no need of connection to a world wide insecure network. The reason it is, is due to a short sighted race to the bottom by executives and the shareholders they supposadly [sic] represent."

As long as private profit is placed above public good, isn't what you describe inevitable?

"[...]the security guards were not there protecting it because they were too busy stacking guns in the basment [sic] of their office or running around chasing suspicious looking people on the other side of town? Which is in effect what the NSA et al are doing..."

Certainly sounds like what the focus of the U.S. military has been...

(BTW, Ever consider a spell-checker?)

erich • May 23, 2014 2:28 AM

The Chinese, the Russians, and many other countries are finding vulnerabilities as well. If we leave a vulnerability unpatched, we run the risk of another country independently discovering it and using it in a cyber-weapon that we will be vulnerable to.

The perfect solution for that dilemma would be a vulnerability, that can only be exploited on Chinese systems. Like e.g. this one:

"Successful exploitation of this vulnerability allows execution of arbitrary code, but requires the Grammar Checker for Chinese (Simplified) feature to be enabled."

https://technet.microsoft.com/en-us/library/security/ms14-023.aspx


It's also a funny coincidence that Microsoft Office, in its encryption passwords, converts a whole class of Unicode characters to a fixed value. That throws away a lot of entropy without any technical reason.

Mr. Pragma • May 23, 2014 2:45 AM

erich (May 23, 2014 2:28 AM)

I don't know a lot about windows and I might be wrong (or got you wrong), but ...

While I think it's quite possible (even probable) that microsoft acts as a willing accomplice of nsa, that particular Unicode issue might have the innocent reason that Windows (from what I know) by default uses UCS-2, that is, the 16-bit Basic Multilingual Plane, which does include Chinese code points but does not support the other planes with ancient languages, some math symbols, musical notes, etc., which might possibly all be trimmed down to some "unsupported code point".

Which btw is not at all unreasonable to do in (the very many) situations where one can reasonably assume that pretty much all actively used global languages will be sufficient, and text can then be read/written using just 2 bytes rather than 4 bytes per code point. (Consider: for a single document that might seem irrelevant; for e.g. large databases with a large part being text, however, the 2 vs 4 bytes/code point issue might well become a major one.)
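The 2-vs-4-byte point is easy to check. A quick sketch (Python, purely illustrative of the encoding sizes; strict UCS-2 would simply have no representation at all for the supplementary-plane character):

```python
# BMP code points (including Chinese) take 2 bytes in UTF-16;
# supplementary-plane characters need 4 (a surrogate pair).
bmp = "\u4e2d"        # CJK character, Basic Multilingual Plane
supp = "\U0001D11E"   # musical symbol G clef, supplementary plane

print(len(bmp.encode("utf-16-le")))   # 2
print(len(supp.encode("utf-16-le")))  # 4
print(len(supp.encode("utf-32-le")))  # 4
```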

yesme • May 23, 2014 3:10 AM

@erich

"The perfect solution for that dilemma would be a vulnerability, that can only be exploited on Chinese systems."

I am quite sure that perfect solutions do not exist and in the worst case they backfire exponentionally.

// little rant

I think that accepting that something bad happens every once in a while is a much better approach. Let's face it, the Bostom bombings were (again) blown out of proportions. Both by the gov and the media. It looks like that once "they" have an excuse, all gloves are off and they are pitbulling (pitbullying) all over.

The week after the Boston bombings however a massive explosion in West, Texas[1] killed more people than in Boston and somehowe it didn't became a news item.

If we could just accept that terrorism attacks happen, and investigate these attacks properly, there is no reason for hysteria.

Because, let's face it, that's what todays policy of the NSA is built on. It's the hysteria, combined with an extreme aggressive and polirizing US gov policy[2].

And let's also face the fact that what George W. Bush did was in 8 years far worse than 9/11. Not only did he invaded 2 countries on false assumptions, he also ruined the US financially and created lots of enemies and I don't think he made one alley.

// end little rant

I think the answer should be found in making systems more secure and also in cooperation instead of competition.


[1] https://en.wikipedia.org/wiki/West_Fertilizer_Company_explosion
[2] http://georgewbush-whitehouse.archives.gov/news/releases/2001/09/20010920-8.html - "Either you are with us, or you are with the terrorists. (Applause.)"

Wael • May 23, 2014 4:51 AM

@yesme,

I should have used a spell checker...
It'll certainly help but will occasionally cause other sorts of errors. I once sent an email to a group, and the spell checker changed a miss-typed "inconvenience" to "incontinence". So it said: "Sorry for the incontinence, the binary is unavailable..."
The replies were funnier. Things like: "it depends", etc...

Speaking of tools, Microsoft office and outlook have a built-in translation engine that take your text and send it to a Microsoft hosted server for translation. The text is sent using https protocol, without the S, as in HTTP. It's sent in clear text, without transport protection...

Clive Robinson • May 23, 2014 5:56 AM

@Independant,

As long as private profit is placed above public good, isn't what you describe inevitable?

Not quite, consider a companies IP, it's what their private profit is made on. So why be daft enough to put it on systems with either direct or indirect access outside of the company?

My thought on this is that executives are putting very short term cost savings ahead of not just long term profitability but actual organisational existance.

In the past I've commented that non founding/owning execs view their organisational alegiance in months or a couple of quarters or a year and a half at most. That is their sole purpose being to up the share price and bug out to another job. As you are probably aware real growth in an organisation only happens with sustained inwards investment, which tends to cause a lower dividend or shareprice. Thus I can only assume many high share prices are either due to short term speculation or internal cost cutting, neither of which is conducive to longterm profit or existance.

Thus you get the feeling the execs will quite cheerfully slit the organisational throat to sell the life blood and then jump from the organisational corpse before the other blood sucking parasites realise and follow them, to repeat the cycle.

Coyne Tibbets May 23, 2014 6:35 AM

@Skeptical: "If other nations do start (or increase the rate at which they already are) adopting systems unique to themselves, [...] The US would be able to devote more focused effort on securing its own systems while simultaneously seeking and developing vulnerabilities in those of other nations."

Actually, the focus is not likely to change.

Much of the NSA mission, by its own definition, is watching foreign nationals (and everyone within three degrees of separation thereof) here within the United States. Since most or all of those people will be using domestic systems it will still be as "essential" as ever (by NSA lights) to stockpile exploits against those systems.

Alan Bostick May 23, 2014 9:00 AM

What Bruce is describing here is a literal conflict of interest. It's a conflict for governments as a whole, and it is especially a conflict for the NSA in particular.

This conflict can be relieved by separating the defensive and attacking functions into different agencies.

One would be the spying and hacking agency, the one that gathers SIGINT, bugs computers and networks, cracks encryption, and hoards the vulnerabilities it discovers or purchases for cyberattack.

The other is the data security agency, which develops and promotes strong, uncompromised encryption, and network and computer security. The vulnerabilities it discovers or purchases are published and patched.

The important part of this is that these two agencies act independently. The data security agency doesn't let the spy agency know about its zero-days before publishing them. The spy agency doesn't tell the security agency to stay away from particular vulnerabilities. If the data security agency ruins the spy agency's pet attack, too bad. But the spies don't have to tell the security people about their zero-days either.

tOM Trottier May 25, 2014 5:13 PM

@eric
One possible strategy for newly found vulnerabilities is to use them while keeping an eye out for others using them - and to develop corrected code. If others start using the vulnerabilities, then inform the code originators of the problem and give them the quick and effective correction for a very speedy fix, minimizing the vulnerability window.
This does lead us down a hall of mirrors - others could try to detect the detectors, or the detector-detectors, or detector-detector-detectors, and so on...
It also assumes very wide surveillance, not only of enemies, but of friends one wishes to protect.
It implies that vulnerabilities which are hard to correct (e.g. firmware/ROM) should be carefully assessed and, if no fast fix is possible, slow and quiet recalls for fake reasons need to be staged if friends would suffer more than enemies. "It might catch fire!" "Free performance trade-in upgrade!" Best to offer "faster alternatives" than trumpet "liable to be hacked".

@zygo
The more variations in software, the more likely
- vulnerabilities will be present (brain power is divided - more entropy)
- vulnerabilities will be more asymmetric - so more ability to keep quiet about opponents' vulnerabilities and to fix friends'

65535 May 26, 2014 7:26 AM

I am late to the discussion, so I will make my comments short.

“In today's cyberwar arms race, unpatched vulnerabilities and stockpiled cyber-weapons are inherently destabilizing, especially because they are only effective for a limited time.” –Bruce S.

I agree with that statement.

“I'm of the belief that the vulnerabilities pose a bigger threat to our national security than terrorism.”- Chris Abbot

In the big picture, or “net-net” harm inflicted upon American citizens and others, that is a reasonable statement given what we now know.

“I'm not sure balance is the best solution in this situation because it overlooks the /domestic/ problem. What scares me about zero days is the way that they can be used by one friendly segment of the population against another--what's to stop the NSA from spying on Congress with a zero day?” – Daniel

That is a real problem. Worse, the problem extends down the ladder from the richest politician to the small business person. These Spy agencies are stimulating the market for all Cyber weapons.

Further, these Spy agency Cyber weapons will find their way into the civilian market. They will hurt the most vulnerable parts of society first.

The hacking and theft of a small business's customer list can destroy that business (and this doesn't include the hacking of a point-of-sale terminal, theft of credit card information, and lawsuits, which can quickly ruin a business).

I constantly see financially over-stretched and under-protected small businesses (businesses with fewer than 25 employees and 30 marginal computers) getting attacked with cyber weapons.

I was looking at a customer's firewall log and I constantly see attacks from Russia, China, France, Brazil (and even Washington DC) logged. These attacks include: stealth FIN port scans, SYN floods, UDP echo/chargen, UDP floods, IP spoofs, and various attempts at remote access connections.

Although his firewall appliance does drop most attacks, some slip through. The email server (I will not name the brand) and the Apache, SQL, PHP, OpenSSL (and a well-known top-end website content engine) machine still get hit hard.
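As a rough illustration of that kind of log triage (the log format, field order, and attack labels below are entirely hypothetical -- real firewall appliances each use their own formats), a quick tally script might look like:

```python
from collections import Counter

# Hypothetical firewall log: "timestamp source_ip attack_type" per line.
SAMPLE_LOG = """\
2014-05-20T01:02:03 203.0.113.7 SYN_FLOOD
2014-05-20T01:05:44 198.51.100.9 FIN_SCAN
2014-05-20T02:11:09 203.0.113.7 SYN_FLOOD
2014-05-20T03:30:00 192.0.2.55 UDP_FLOOD
"""

def count_attacks(log_text):
    """Tally attack types and offending source IPs from the log text."""
    attacks = Counter()
    sources = Counter()
    for line in log_text.strip().splitlines():
        _timestamp, src_ip, attack_type = line.split()
        attacks[attack_type] += 1
        sources[src_ip] += 1
    return attacks, sources

attacks, sources = count_attacks(SAMPLE_LOG)
print(attacks.most_common())   # SYN_FLOOD tops this sample
print(sources.most_common(1))  # the most persistent source IP
```

Even a sketch this small makes the pattern in the comment visible: a handful of source addresses account for most of the hits, which is what makes the repeat attacks easy to spot in a real log.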

It is a travesty that large, grossly over-funded US government organizations buy and hoard these zero-day attacks -- while the little guy suffers, paying taxes to buy these cyber weapons out of his meager business margins.

[Forbes]

“…Bekrar won’t detail Vupen’s exact pricing, but analysts at Frost & Sullivan, which named Vupen the 2011 Entrepreneurial Company of the Year in vulnerability research, say that Vupen’s clients pay around $100,000 annually for a subscription plan, which gives them the privilege of shopping for Vupen’s techniques. Those intrusion methods include attacks on software such as Microsoft Word, Adobe Reader, Google’s Android, Apple’s iOS operating systems and many more—Vupen bragged at HP’s hacking competition that it had exploits ready for every major browser… For just over a year the Grugq has been supplementing his salary as a security researcher by acting as a broker for high-end exploits, connecting his hacker friends with buyers among his government contacts. He says he takes a 15% commission on sales and is on track to earn more than $1 million from the deals this year. “I refuse to deal with anything below mid-five-figures these days,” he says. In December of last year alone he earned $250,000 from his government buyers. “The end-of-year budget burnout was awesome.”

[The leakage problem - selling a Zero-day exploit to state sponsored thugs]

‘…the Open Society Foundations, calls Vupen a “modern-day merchant of death,” selling “the bullets for cyberwar.” After one of its exploits is sold, Soghoian says, “it disappears down a black hole, and they have no idea how it’s being used, with or without a warrant, or whether it’s violating human rights.” The problem was starkly illustrated last year when surveillance gear from Blue Coat Systems of Sunnyvale, Calif. was sold to a United Arab Emirates firm but eventually ended up tracking political dissidents in Syria. “Vupen doesn’t know how their exploits are used, and they probably don’t want to know. As long as the check clears.”’ -Forbes

http://www.forbes.com/sites/andygreenberg/2012/03/21/meet-the-hackers-who-sell-spies-the-tools-to-crack-your-pc-and-get-paid-six-figure-fees/

and

http://www.forbes.com/sites/andygreenberg/2012/03/23/shopping-for-zero-days-an-price-list-for-hackers-secret-software-exploits/

I wouldn’t hesitate to say that some of these government-purchased/funded zero-day attacks end up attacking American civilian businesses with a vengeance. Vupen and its middlemen aren’t immune to selling a zero-day exploit once to a governmental buyer -- and once more to unknown thugs. That is pure economics.

“…what's the effective security and capabilities of a secret agency that can be profoundly fu**ed by some youngster who, even after the fact, was considered and painted as an academic, military, and general loser and failure?” -Mr. Pragma

I concur. These Spy agencies have a record of insecurity. Allowing these Spy agencies to create a bubble market for Zero-day cyber weapons is a recipe for disaster!

“The important part of this is that these two agencies act independently. The data security agency doesn't let the spy agency know about its zero-days before publishing them. The spy agency doesn't tell the security agency to stay away from particular vulnerabilities. If the data security agency ruins the spy agency's pet attack, too bad.”- Alan Bostick

On paper this sounds good. But the reality is that the NSA and other agencies are entrenched in the system so deeply that it will take a jack-hammer to dislodge them. These spy agencies are not going to give up power or budget money.

I believe the near-term solution is to cut these spy agencies' budgets by 30% to 60% or more! They have gone too far! After they have felt the budget knife, then, and only then, should we throw more money at them for reform and reorganization.

Martin Shaunessy May 27, 2014 8:11 AM

Everything we learn about the NSA involves a loss of trust. Both in the way they operate and in our top elected officials who have let them run wild. I don't really see how we can ever get that trust back. Given the massive classification of everything the NSA does and thinks, how could we ever know enough to trust them again? I can't imagine a realistic way that could happen.

Andy May 28, 2014 12:45 PM

Very good blog post.

I think you missed one important tool/weapon that the NSA is using: planting backdoors -- authenticated backdoors. By introducing backdoors that, by design, only the NSA can use, you introduce a vulnerability that only "we" can use but which is denied to the enemy. Of course, this is very volatile and must be carefully used. In fact, I don't like it at all, but it has been done, and is being done, so I wanted to add it to the discussion.

Anon10 May 29, 2014 10:36 PM

The article seems to ignore the fact that the NSA lacks the legal mission to protect US commercial networks. NSA's mission by law is to collect foreign intelligence. Until Congress creates and the President signs a new law that changes NSA's mission, NSA disclosing all vulnerabilities would probably be a violation of federal law, as they would be spending money for a purpose not authorized by law.

Clive Robinson May 30, 2014 7:06 AM

@Anon10,

Err, the NSA is actually tasked with two objectives: firstly the one you give, of listening to others; the second is protecting the communications of the US. When it was given that task many assumed it was just for mil/dip traffic; it's not, it's open-ended.

Prior to the Bush administration the NSA had started quite a few initiatives in protecting computers and communications, since then they have more or less dropped them.

Some have argued --myself included-- that the dual role produced a slightly schizophrenic behaviour in the NSA when viewed from outside. The solution they appear to have gone for is a lobotomy of the "good side"...

AlanS May 30, 2014 8:35 AM

@Anon10, @Clive Robinson

On this issue Dave Aitel has a go at Jennifer Granick and "privacy and civil liberties advocates" here.

He quotes a little bit from Granick's opening so he can set up a straw man (woman?) argument. He doesn't address any of the substantive points she makes related to the law, trust, and accountability.

Aitel writes "If you want to get technical, it’s not in the NSA’s charter to protect all US communications - only classified national security information. Private communications fall under the responsibility of the FCC, FBI and DHS."

To which one has to respond: so why is NIST's Computer Security Division, which is "responsible for developing standards, guidelines, tests and metrics for the protection of non-national security federal information and communications infrastructure", required by law to consult with the NSA on security standards if this is not their responsibility? And is it in the NSA's charter to subvert standards and standards bodies that provide protections for information that isn't classified national security information, which is apparently what they have been doing?

Clive, I think it may be that their mission as laid out in laws, regulations, charters and what have you may be slightly schizophrenic but their self-understanding of their mission and their behaviour are not.

Anon10 May 30, 2014 11:03 AM

@clive

Which section of the US Code or which executive order gives the NSA the IA mission for private commercial companies that don't deal with national security info? Provide your source.

Nick P May 30, 2014 12:17 PM

@ anon10, AlanS, Clive

The executive orders and laws that created the NSA, its mission, and its regulations were pretty clear about the NSA's job. Their main mission, talked about incessantly, is to get intelligence (esp COMINT). Their only defensive requirement is COMSEC for U.S. government systems. There was no mention that they were required to protect all the nation's computers or communications. If anything, it would contradict their primary mission, as we've seen with BULLRUN.

Much as it gets repeated, anyone saying they're supposed to protect us is simply wrong. There might be a law somewhere (e.g. the NIST reference) implying it. However, there's a ton of clear law going against it, plus the national-security trump card they can play. They're spies, not guardians. That's our job.

Note: They did take up a protect role back when they created NCSC and Orange Book to ensure security of IT products sold to Federal Government. Even then the mandate was only for what was sold to Feds. And that went bye bye. And Common Criteria standard for US govt agencies sets highest mandated level at EAL4+, "certified insecure." So, apparently they aren't even forced to protect federal systems past their COMSEC.

Nick P May 30, 2014 12:18 PM

I forgot to add this declassified document that shows their history and activities. Makes the case clearer to neither expect nor trust their protection for civilian systems.

Anon10 May 30, 2014 6:59 PM

@Alan S

I think you're reading too much into the NIST issue. It's hard to say without knowing which section of the code you're referencing, but NIST could be the exception that proves the rule. If NSA had the general mission of protecting all US networks, Congress probably wouldn't need to explicitly tell NSA that they needed to work with NIST on the limited topic of encryption algorithms. Regarding the CSD statement, in the real world, there's probably some gray area, since most COTS software probably incorporates NIST guidelines and some USG unclassified systems use COTS software for purposes that could have national security implications.

AlanS June 1, 2014 8:32 PM

@Anon10 @NickP @Clive

NIST is charged with developing standards for civilian agencies and contractors and more recently voluntary guidelines for Improving Critical Infrastructure Cybersecurity. The latter is likely to become a de facto standard (i.e. implementing it demonstrates the basic level of corporate competence necessary to mount a credible legal defence against post-data breach litigation). The catch here is that the infrastructure and technology that is used to secure national security systems doesn't live in a separate bubble. So NIST is required to work with NSA to ensure that:

"...the development of standards and guidelines under section 20 of the National Institute of Standards and Technology Act (15 U.S.C. 278g–3) with agencies and offices operating or exercising control of national security systems (including the National Security Agency) to assure, to the maximum extent feasible, that such standards and guidelines are complementary with standards and guidelines developed for national security systems"
Title III of the E-Government Act 2002 (aka FISMA)

As Clive pointed out, however, there is potential conflict between the information assurance mission and the SIGINT mission (see NSA's description of its two missions). The NSA's information assurance mission is supposed to be restricted to national security systems but by regulation the NSA is allowed to have its grubby fingers in the civilian side of information assurance as well. In practice, I think many people would argue that this has given it a means to manipulate civilian standards to enhance its SIGINT mission.

For some history of the problematic relationship between NSA and NIST, prior to the recent fuss about Dual_EC_DRBG, see EPIC's discussion of the Computer Security Act of 1987 (which preceded the E-Government Act of 2002) and various associated directives and memos (NSDD 145, Poindexter memo, 1989 MOU, etc.).

Clive Robinson June 2, 2014 4:01 AM

@AlanS,

There is a bit more to the IA side than is generally stated directly; the term "National Security" is very broad and used to hide a multitude of sins.

For instance, it also covers "Economic Wellbeing" of "US Trade and Commerce", which covers both international and interstate commerce (people tend to forget about Alaska, the Hawaiian Islands, and one or two other dependencies where domestic signals are actually international).

Then there is its involvement with the likes of NATO; the NSA are fully aware of the "trickle-down" effect, where certain aspects of IA in those regions will make it out into other nations' civilian manufacturers and suppliers...

The fact it's not "writ bold" like "neon in the night" does not make it any less a responsibility for its less than obvious visibility. It's the same with GCHQ and other countries' signals security establishments.

This behaviour can be seen back in the early days of cell phone standards in Europe. There were a number of encryption systems considered for what became GSM, and at one point there were two systems identified: a weak one for the standard, pushed by the likes of the British and French --signals security establishments through their tame standards committee members-- and a more secure one that was supposed to be a secondary standard that was not to be openly published. It was a bit of a running joke at the time, because many could see it was not European companies that would be manufacturing the phones, nor would many of the envisioned form factors allow for a chip socket etc. for a second system to be added easily. Similar went on in the US and other nations that developed their own phone standards.

As it turns out, the naysayers were only partly right, which is why there is a small market for phones using much stronger encryption as an add-on...

Nick P June 2, 2014 11:05 AM

@ AlanS

"The catch here is that the infrastructure and technology that is used to secure national security systems doesn't live in a separate bubble."

I totally agree. I've said it here and to NSA types. So far, the ones I've met see themselves as elite and entitled to the extra power/trust. I'm not sure if that attitude is why they ignore statements like the one I just quoted. The alternative is that it's just a top-down policy thing that could be more easily changed, reinforced by elitist attitudes.

Ok, back to the laws. I noticed the page with NSA's mission first mentions IA, but then clarifies it in the third bullet point of the following list. Previous laws were for IA and COMSEC of classified information. The previous interpretations of them allowed weak security for most workstations and servers, too. The new requirement sounds similar to the old, will be treated that way, and so this page doesn't help us. It's just NSA public relations.

The FISMA Act you mentioned is definitely relevant. Btw, thanks for bringing it up as I mistakenly thought FISMA was a regulation/policy rather than a law. I posted a FISMA presentation for small businesses here one time as what was in it made a lot of sense and could be cost-effective. Standards like that across federal government are a nice start at security. Seeing things get put into law (and NIST publications that followed) is a positive thing.

So, does it apply to NSA? This is an apparent contradiction of law. One set of laws (with teeth I might add) tells NSA to maximize COMINT/SIGINT and do a limited IA mission. This one expands it greatly in a way that contradicts (they'd say) their SIGINT mission, their main reason for existence. Courts generally handle these situations by adopting an interpretation that is the least disruptive. A lower court might rule either way, but federal courts ruling against the NSA seems unlikely. So, I'm maintaining the status quo that NSA doesn't have to do this stuff and can pervasively make everything weaker for SIGINT. This NIST law has potential, but Congress needs to give new directive enough teeth to bite NSA. Good luck...

Side note: Rulings like this about what constitutes reasonable security might apply. The courts are too behind on these things.

Wesley Parish June 3, 2014 5:27 AM

The NSA sitting on vulnerabilities expecting that they will be the only ones to be able to exploit them?

Sounds like they qualify for a Darwin Award.

Everybody knows that technical ability isn't restricted to one given nation. And others are hungrier ...

(Moderator, where are you? I can count at least three examples of spam on this page.)

Ann Onymously June 18, 2014 8:50 PM

"Disclosing vs. hoarding" is a very, very, very ... well, let's say it's just a very old and very tired issue. Aside from that, I think you miss some important points about Cyber Command and NSA/CSS. I don't think the NSA needs a defender, actually, but your essay misses the target.

Incidentally, whatever else might be said about Mr. Snowden or his immense popularity in the media and among pundits in recent months, the fact remains that he decided to become a traitor to his nation and then did so with financial, logistic, and moral support from people and organizations who clearly had vested interests in the outcome of an espionage project from which they could stay safely at arm's length, exactly as they did with Bradley Manning or even the now nearly-forgotten Mr. Assange. We (including yourself) might consider tempering our view of Mr. Snowden's self-serving allegations with that in mind.

Moving on: Even if we completely ignore Cyber Command, the NSA has always had two roles. As you know, it's a foreign intelligence agency and it's the DoD's comms/cyber-defender. Hence, for NSA the disclose/hoard decision is an incredibly tough one and has been so since the earliest days of the Cold War. The basic problem is that adversaries might be using day-zero exploits to break into sensitive US industrial or government comms or systems. Once these are discovered, the NSA is obliged to field defenses and deploy them quickly, thereby possibly disclosing the fact of their discovery.

At the same time, it's possible that some (perhaps most) of the adversaries' systems aren't patched on day zero, because their governments or leaders don't publicize their cool day-zero exploits on day zero. So the NSA is obliged to keep secret day-zero exploits secret until someone else reveals them in public and everyone on the planet successfully installs a patch (unless, of course, you're one of the millions of newly-orphaned XP users, in which case you're just fundamentally screwed). If you follow Verizon's annual DBIR roll-ups, it's clear that patching by itself could take well over a year. Or a lot longer.

It's not a matter of whether the NSA *can* play offense or defense, in any case. They're obliged to play both and do so extraordinarily well. And I hasten to point out that they are a foreign, not a domestic, intelligence agency. My strictly personal opinion on recommendation 30 is that imposing arbitrary restrictions on 'hoarding' exploits by US foreign intelligence agencies is an absurd notion. We pay those agencies to do precisely that kind of thing, after all. But publishing bugfixes as soon as you can (as a citizen) is clearly not absurd, and generally we're doing fairly well at that as well.

Nick P June 18, 2014 9:26 PM

@ Ann Onymously

"and then did so with financial, logistic, and moral support from people and organizations who clearly had vested interests in the outcome of an espionage project from which they could stay safely at arm's length, exactly as they did with Bradley Manning or even the now nearly-forgotten Mr. Assange."

I about stopped reading your comment at that point. Saying it about Snowden I can see. Claiming the same thing about Wikileaks and Manning seems speculative at best. Wikileaks did their own thing with their own funding source and methods that predated their organization. They embarrassed a ton of powerful countries and organizations with their leaks. Manning leaked directly to them and got busted because he bragged to the wrong guy (Lamo). The idea that they had "financial, logistic, and moral support from people... vested interests in... outcome of espionage project" is unsubstantiated and existing explanations work well enough. Your comments on them are just a smear attempt. And have nothing to do with NSA hoarding or releasing vulnerabilities on top of that.

Back to that subject.

In short, I agree with your points on the NSA. (Glad I kept reading.) Their actions are consistent with their mission and with how the intelligence field has done things for a long time. The only point I dispute is your saying they're supposed to play it both ways. The defense side of their mission is just COMSEC and has narrow interpretations that don't necessarily require protecting endpoints, the main attack vector. So, they can get away with leaving us all vulnerable for their primary mission. They could also rightly say that people or markets wanting more secure IT assets could always pay for it to be developed. And their spending priorities have been on anything but high security.

