How Many Vulnerabilities Are There in Software?

Dan Geer proposes some techniques for answering this question.

Posted on April 16, 2015 at 6:27 AM • 39 Comments

Comments

Anura • April 16, 2015 7:29 AM

An important thing to realize, which this paper ignores, is that it depends on the language and the size of the codebase. I can make small programs in C and medium size programs in C# that have no vulnerabilities. Make a large C or C++ program, and I pretty much guarantee you will have vulnerabilities.

Bob S. • April 16, 2015 8:16 AM

Are vulnerabilities dense or sparse?

Vulnerabilities are of infinitely variable density and subject to transitioning to previously unknown vectors. The only way to know is to test everything.

I wish I had known Geer is with In-Q-Tel/CIA before I clicked the link.

Scott • April 16, 2015 8:36 AM

An interesting twist on this is that the filtering software used at my company generated a false positive because the URL included the letters "fgm". That made it an "adult" site. So a paper on vulnerabilities exposed a vulnerability and a possible attack -- get people in trouble by using certain letters in your URL!

JeffP • April 16, 2015 9:03 AM

"...motivated by the question of whether patching matters..." If you ask a PCI auditor, the answer is, "Yes. It is on my checklist." From a security perspective, wouldn't a moving target of changing vulnerabilities be better than an unchanging target?

paul • April 16, 2015 10:00 AM

The question of whether patching matters seems orthogonal to this examination, at least as far as it goes here. To decide whether patching matters, you also need a model of the black-hat process. For example, if vulnerabilities are sparse but black-hat and white-hat examination methods are anti-correlated, then patching won't help: even if you're reducing vulnerabilities by a large amount, you're not reducing the ones that are being used by bad actors. Or, the other way around, if black-hat and white-hat examinations were perfectly correlated, patching would be good even if vulnerabilities were dense, because the bad actors would be finding the same ones as the patchers.
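This correlation point can be made concrete with a toy simulation. Purely illustrative (Python); the pool size, find counts, and the "shared easy subset" model are my assumptions, not anything from the paper:

```python
import random

def overlap_fraction(n_vulns, k, correlated, trials=2000, seed=1):
    """Average fraction of black-hat finds that white hats also find (and patch).

    Each side finds k of n_vulns. If correlated, both sample from the same
    small subset of 'easy' bugs; if not, they sample independently.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        if correlated:
            pool = range(2 * k)            # shared pool of easy-to-find bugs
            white = set(rng.sample(pool, k))
            black = set(rng.sample(pool, k))
        else:
            white = set(rng.sample(range(n_vulns), k))
            black = set(rng.sample(range(n_vulns), k))
        total += len(white & black) / k
    return total / trials

print(overlap_fraction(1000, 50, correlated=True))   # ~0.5: patching removes half the attack bugs
print(overlap_fraction(1000, 50, correlated=False))  # ~0.05: patching barely touches them
```

With anti-correlated methods the overlap would be lower still; the point is only that the value of patching depends on this overlap, not just on density.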

Obviously the reality is somewhere in between, and further complicated by what you're trying to do. Protecting ordinary mortals against widely-deployed threats is a different goal from protecting high-value targets against cautious, sophisticated adversaries.

I wonder, though, whether the terms "sparse" and "dense" are even necessarily useful in this context. Attack resources and vectors keep increasing, so that qualities that previously didn't qualify as vulnerabilities (say, local power consumption or dissipation) may come to do so.

Martin Walsh • April 16, 2015 10:26 AM

We constantly apply tests and static analysis to our application code, but it still leaves a lot to your judgement. NDepend, for example, is fantastic. But the report itself requires a significant investment of time to understand and to decide what, if anything, should be done. Security audits? Apart from the fact that these people are being churned out like cigarettes now, does anyone know anything beyond middle-school pen-testing? I don't know, except that if the people conducting an audit don't understand your application, then they're just taking your money.

Xiong Chiamiov • April 16, 2015 10:35 AM

When the author finally gets to his suggestion in the last paragraph, it seems to have a fundamental flaw to me: ignoring the rate at which vulnerabilities are introduced.

What I've seen in my own experience is a constant rate of security patches in a project, but with almost all of them in newly-released code. That doesn't imply a high density of bugs at all, but rather just that any new code (at least from that set of developers, in that system, with that workflow) is extremely likely to have vulns in it.

David Leppik • April 16, 2015 10:59 AM

Geer compares counting frogs in a pond via capture-and-release and counting flaws in code. This is an interesting analogy, considering he's a CIA contractor.

The equivalent of capture-and-release in software is finding bugs and not reporting them. If you don't release your frogs, you've lowered the number of frogs in the pond. If you report a security flaw, not only do you (potentially) get that flaw fixed, you draw attention to that piece of code-- possibly leading to the discovery of even more flaws.

His version of capture-and-release works best if you have no interest in fixing the security flaws.
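For reference, the frog arithmetic behind the analogy is the classic Lincoln-Petersen capture-recapture estimator; a minimal sketch (Python; the counts are made up for illustration):

```python
def lincoln_petersen(n1, n2, overlap):
    """Classic capture-recapture population estimate.

    n1: flaws found by the first assessment (the 'marked' frogs)
    n2: flaws found by a second, independent assessment
    overlap: flaws found by both ('recaptured')
    Returns the estimated total number of flaws in the codebase.
    """
    if overlap == 0:
        raise ValueError("no overlap: the estimate is unbounded")
    return n1 * n2 / overlap

# Two independent pen tests find 30 and 40 flaws, 12 in common:
print(lincoln_petersen(30, 40, 12))  # 100.0 estimated flaws in total
```

Note the estimate only holds if the two assessments are independent and the code is unchanged between them, which is exactly the assumption that fixing (rather than "releasing") the flaws breaks.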

MikeA • April 16, 2015 11:29 AM

@Xiong Chiamiov

"What I've seen in my own experience is a constant rate of security patches in a project, but with almost all of them in newly-released code."

This coincides with my impression as well. Yet, the opposite contention is made in the paper associated with the presentation:

http://data.proidea.org.pl/confidence/9edycja/materialy/prezentacje/SandyClark.pdf

"(This excludes, of course, defects introduced by patches, which are a minority in practice)."

I suspect there are two causes for this disconnect:

1) As the paper's authors point out, _vulnerabilities_ are not the same as _bugs_.
They arise from various sources, some outside the control of the programmer, while bugs may have bad (for the user) effects that do not lead to exploitable vulnerabilities. Further, attackers have a learning curve in discovering newly introduced vulnerabilities.

2) The paper seems to consider vulnerabilities _within_ a "Product", whereas to the average user, updates are "take it or leave it" propositions, and usually include a negative component for the user (while being positive, in the profit-maximizing or ideological-purity sense, for the author), so the user has to weigh pros and cons. When an exploit is published, the scales shift pretty rapidly. You can buy a new computer to run the new OS, which is the only one for which the vulnerability was fixed, or you can be prey to every bored script-kiddy. Your choice.

The other aspect is that (purely anecdotally) software seems to have become much more tightly coupled and heavily interdependent as time goes by. There is no such thing as fixing _a_ bug in _a_ module when everything has to be re-compiled (or at least deal with incompatible libraries) whenever the maintainer of some library or API has a "cool new idea". Heartbleed and ShellShock come quickly to mind.

I don't know the "solution" to this. It will always be to the author's benefit to force churn. While Open Source theoretically means "you can fix it yourself", I can also theoretically walk from coast to coast of the U.S. in sandals. But I have a life. Not to mention that, e.g., the Linux kernel seems totally dependent on GCC-isms, so "Reflections on Trusting Trust" comes into play.

Beverly • April 16, 2015 12:01 PM

@Anura: "I can make small programs in C and medium size programs in C# that have no vulnerabilities."

Correction: you can make small programs in C and medium size programs in C# that have no vulnerabilities that you can find.

Nick P • April 16, 2015 12:01 PM

Work like this is barely worth anything. The effort needs to go into tools to identify bugs in software. That both identifies bugs and helps get them out. Tools proving their absence, such as SPARK, are also useful, as one can just code until the tool certifies the result. Additionally, as new attacks are found, we should build both manual and automated processes for identifying them at the specification or code level.

That our software is such a confusing, bug-ridden mess that we're considering statistical techniques to analyze it is the real problem. So that's what should be fixed. Fortunately, many companies are doing things differently, and a subset of them are adopting static tools.

Boo • April 16, 2015 1:29 PM

WD-40 and a Craftsman wrench solutions.

Fry: So, Chrissy, we seem to be hitting it off. If you're not doing anything later might I escort you to a kegger?

Chrissy: Not even if you were the last man on Mars.

[She slams the book shut, gets up and leaves. Fry watches.]
[Cut to: Outside Cafe. Fry watches Chrissy through the window as she writes something on a piece of paper and hands it to Guenter. She giggles, chews her pencil bashfully and leaves. Fry watches her, dumbstruck. Guenter raps on the window and gets Fry's attention.]

Guenter: [shouting] Hey! You like bananas?

[Cut to: Cafe. Guenter slaps the piece of paper onto the window.]

Guenter: [shouting; from outside] I got her number. How do you like them bananas?

[He walks off and Fry growls.]
[Scene: Mars University: Mathematics of Quantum Neutrino Fields Lecture Hall. Farnsworth has drawn a diagram and some algebra on the blackboard under the heading "Today's Lesson: WD or 'Witten's Dog'".]

Farnsworth: And therefore, by process of elimination, the electron must taste like grapeade.
http://theinfosphere.org/Transcript:Mars_University

I just fixed the coffee machine with WD-40. Any bigger problems and the wrench will be needed.

Peter Gerdes • April 16, 2015 2:43 PM

I suppose this kind of analysis *might* be able to show that bugs are dense if you kept secret the bugs found in one bug contest and found sufficiently low overlap with those found in a second. However, it's almost useless for getting any real estimate of the prevalence of bugs or for showing they are scarce.

The link covers the issue of not all bugs being equally easy to find. However, that in itself isn't such a huge deal. Even results demonstrating only that sufficiently easy-to-find bugs were rare would be quite useful and interesting; after all, at some point it would no longer be profitable to discover sufficiently hard-to-identify bugs.

The bigger problem is that bugs are only hard or easy to find relative to a certain background. Once new tools, new methods of thinking, etc. come out, previously hard-to-find bugs become easy to find, and I don't see any plausible way for this kind of analysis to deal with the constant march of progress in our ability to locate bugs.

John Pepper • April 16, 2015 4:54 PM

To keep a focus on the primary question:

"the question of whether patching matters, a question that just will not go away"

Patching does matter. If you are aware of a security issue in a piece of software and someone malicious uses that issue to hack it, you bear some responsibility: you could have disclosed the issue and had it fixed.

There is a 'third way': absolutely, positively detect any exploit against that vulnerability. But this means you are using corporations and other organizations as, effectively, honeypots.

Security issues are rated in terms of their criticality, that is, the potential damage they can cause. And they are cross rated with other factors. For this discussion, a key factor is 'how difficult is it to find the vulnerability'.

For example, the DREAD model: http://en.wikipedia.org/wiki/DREAD:_Risk_assessment_model

For government hacking purposes you want bugs of a critical rating which are exceedingly difficult to find.

Security issues of a critical rating which are easy to find: you can guarantee others will find them, and they probably found them before you did.

How can this be proven? Use popular vendor DAST and SAST tools and perform a cross-analysis. The conclusion will be that the results are very similar.

In terms of statistics, bug density and fix rates do matter. They are important to be aware of if you are the application owner, or if you are engaging in an offensive action.

In terms of "hack value", the DREAD sort of model can be expanded on, and at least, ad hoc is and has been for years:

How critical is the system the application runs on?
How critical is the application itself in terms of functionality?
How critical is the application itself in terms of privilege level?
How many users does the application have?
How dense are the security vulnerabilities in the application?

How dense the security vulnerabilities are in the system (again, a question which cross-comparison of modern SAST and DAST systems can answer to a decent degree) is another useful question in terms of 'the value of the security vulnerability'.

If all other factors did not apply... a security vulnerability found in an application with a very low security bug density is more valuable than a security vulnerability in a system with a very high security bug density.

As for statistics, a question which is also covered in this paper, several firms are gathering these statistics via 'anonymized data'. And they have, for years, been working hard at 'making sense of that data', as well as expanding on the collection of it. This is provided to users of their products and services.

VeraCode is one of those companies, and I would be surprised if Mr Geer has not sat down with them on this very issue.

I am unaware of any major research project aimed at performing a similar analysis on 'full disclosure' vulnerabilities and applications relying on that public source data. But that will be done eventually. There are many interesting questions which can be found there. Considering the value of the potential data and the ease of obtaining the data, like with rating a security vulnerability, one can well predict that someone is probably already doing this, or has done it, or most certainly will do it in the future.

Lastly, a more interesting question arises about utilizing zero day for both offensive purposes and defensive purposes at the same time.

A few notes on this.

Zero-day detection systems are weak; it can be very difficult to predict a 'very hard to find' critical vulnerability. If one is found, it is very valuable for governments in terms of utilizing a possible signature for that attack in a covert, wide-reaching IPS-type system. A central problem here is that the government also would not want to tell anyone about such a system, and this has a high cost in terms of confidence from their people. People having confidence in your capabilities matters; funding depends on this, as well as authority and power.

A secondary problem is that signatures for known vulnerabilities are far more accurate than signatures for unknown vulnerabilities; even so, there will still be some failure rate, which can span from 'low' to 'very high' depending on the sort of vulnerability.

Disclosure and patching, therefore, is the cheapest route. Yes, it could mean a sort of disarmament policy. But the fact is code is constantly being written, and with it new security bugs. No one is anywhere near the place where it can be said 'no one can hack it', for almost any application. Besides this, there is always the social engineering angle.

Disclaimer: I am neither condoning nor condemning, but merely commenting on some 'ins' and 'outs' of these matters from a technical perspective.

John Pepper • April 16, 2015 5:12 PM

@Martin Walsh

Security audits? Apart from the fact these people are being churned out like cigarettes now, does anyone know anything except for middle school pen-testing? I don't know, except that if the people conducting an audit don't understand your application, then they're just taking your money.

While it is true that people in the industry are often undertrained, many of the solutions out there follow a SaaS or consultancy model, and practitioners get trained as they go. Further, while no one is as knowledgeable about an application as its developers, the developers are going to be intimately involved in, at the very least, adjusting the ratings of security issues found.

Where there are near or purely 'business logic' security vulnerabilities, they usually fall under the 'difficult to find' rating. That is, if your analysts cannot find the security vulnerabilities for lack of knowledge of the application, a remote attacker is even less likely to.

John Pepper • April 16, 2015 5:32 PM

@David Leppik

The equivalent of capture-and-release in software is finding bugs and not reporting them. If you don't release your frogs, you've lowered the number of frogs in the pond. If you report a security flaw, not only do you (potentially) get that flaw fixed, you draw attention to that piece of code-- possibly leading to the discovery of even more flaws.
His version of capture-and-release works best if you have no interest in fixing the security flaws.

That actually raises a good point: reporting a security vulnerability to the vendor, who ultimately discloses it, can raise the likelihood of more bugs being found in that area of code, and also of that type of bug being found elsewhere in the code. From that, one might argue there is a detrimental effect to reporting security vulnerabilities.

There is some truth to that. Anytime a flaw is discovered, that knowledge is shared with everyone and anyone. While the method used to find the flaw is not usually disclosed, the source is.

That both effects are real can be well confirmed by looking at the history of publicly disclosed bugs. The source can be major or minor. For instance, if one starts at the first discoveries of types of vulnerabilities, and of major variants of security vulnerabilities, it is easy to see how people work off of these disclosures.

Had no one ever discovered that buffer overflows can be exploited, would someone have discovered this in the future? Or if someone discovered that buffer overflows can be exploited and did not disclose this, would not someone else inevitably have found it and reported it? And probably there can be some analysis there with some degree of meaning. How difficult was it to find the first buffer overflow and exploit it? Or the first SQL injection? The first blind SQL injection? The first error-based SQL injection exploitation method? And so on.

But, still, the DREAD sort of model works here as well. Rating by 'difficulty to find', while not entirely scientific, is not much of an open question. Time allotted, sophistication of the exploitation, and many other factors give a strong degree of confidence that a security issue can be rated for 'difficulty to find' with accuracy.

IMNSHO it all comes back to: 'what if a major hack happens where you knew the hack was possible'. What is the possible cost if the worst-case scenario happens? There are related factors for estimating the risk, such as 'how often is the system being targeted now'. But, ultimately, criticality of the security vulnerability is the major factor. eBay or Google are attacked all the time, while a critical SCADA system may rarely be attacked. But if just one attack happens successfully against that SCADA system, what is the potential damage? That is where cost analysis should be, just as people do in physical security.


65535 • April 16, 2015 7:20 PM

@ Bob S.
“I wished I had known Geer is with In-Q-Tel/CIA before I clicked the link.”
Ha! Good point.

“Are vulnerabilities dense or sparse? Vulnerabilities are of infinity variable density and subject to transitioning to previously unknown vectors. The only way to know is to test everything.” – Bob S

That is true. But Geer just wants to find an estimation… which I guess he will then use to recommend to his friends at the CIA the most fruitful method of applying scarce resources to exploit them.

@ Nick P
“Work like this is barely worth anything. The effort needs to go into tools to identify bugs in software… our software is such a confusing, bug-ridden mess that we're considering statistical techniques to analyze it is the real problem.”
That is the unvarnished truth.

@ John Pepper
The DREAD Risk Assessment model is interesting.

Damage - how bad would an attack be?
Reproducibility - how easy is it to reproduce the attack?
Exploitability - how much work is it to launch the attack?
Affected users - how many people will be impacted?
Discoverability - how easy is it to discover the threat?

http://en.wikipedia.org/wiki/DREAD:_Risk_assessment_model
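As conventionally applied, the score is just the mean of the five factors on a 1-10 scale. A throwaway sketch (Python); the averaging convention here is the commonly cited one, not a Microsoft-mandated formula:

```python
def dread_score(damage, reproducibility, exploitability, affected, discoverability):
    """Plain DREAD: rate each factor 1-10, overall risk is the mean."""
    factors = (damage, reproducibility, exploitability, affected, discoverability)
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each factor must be rated 1-10")
    return sum(factors) / len(factors)

# A wormable remote hole: catastrophic, trivially reproduced, easy to discover.
print(dread_score(10, 10, 8, 10, 9))  # 9.4
```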

I don’t think Microsoft really had their corporate “heart” in this risk model.

I would guess they used their customers as beta 1 testers and beta 2 to 1000000 testers to identify bugs. In other words their R&D and “quality control” was left to their customers.

Their attack surface is very high so MS might use the 'dense-customer' method to discover their bugs [this saves money. The 'dense-customer' does the work and bears the financial result].

I once read where Windows 2000 Pro was shown to work with about 480,000 different applications [please don’t hold me to those numbers because it was a long time ago]. Also, I don’t know exactly out of the 0.5 million applications usable on Windows 2000 Pro which ones were viruses or root kits.

[Next to Geer]

As I understand his paper, here are his points [which could be off the mark]:

1] “Bruce Schneier asked a cogent, first-principles question: “Are vulnerabilities in software dense or sparse?” If they are sparse, then every vulnerability you find and fix meaningfully lowers the number of vulnerabilities that are extant. If they are dense, then finding and fixing one more is essentially irrelevant to security and a waste of the resources spent finding it.”

2] “It seems to me that the most straightforward way to make a first quantitative effort here is to employ three or more independent penetration tests against the same target. Or have your software looked over by three or more firms offering static analysis… Perhaps we can take a large body of code and look at the patches that have been issued against it over time. If you take a patch as a marker for a previously undiscovered flaw… the rate at which patches issue is a removal-capture process. Were that process to maintain a relatively constant hum, then it might imply that software flaws are indeed dense—too dense to diminish with removals. Of course, patches for a commercial software system are not necessarily unitary—one apparent patch may actually fix several flaws.”

3] “…Rescorla concluded that fixing without disclosure is better than fixing with disclosure (and thus was “an advantage for closed source over open source”), but such a policy certainly doesn’t help us do quantitative research with real data” [The argument for open software and disclosure of bugs… Which the CIA may have some interest in not disclosing]

[A plea for help]

4] “There is something here to work with for those who test or who can closely observe those who do. Be in touch; I’d like to work with you.”

http://geer.tinho.net/fgm/fgm.geer.1504.pdf
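Geer's removal-capture point in 2] is easy to sketch: if patching visibly depletes the pool, the patch rate decays; if the rate "maintains a relatively constant hum", the pool must be large relative to the find rate. A toy model (Python; all numbers made up for illustration):

```python
def patch_rate_series(total_flaws, find_prob, periods):
    """Expected patches per period when each remaining flaw is found
    (and removed) with probability find_prob per period."""
    remaining = total_flaws
    series = []
    for _ in range(periods):
        found = remaining * find_prob
        series.append(round(found, 1))
        remaining -= found
    return series

# Sparse pool: removal visibly depletes it, so the patch rate decays.
print(patch_rate_series(100, 0.2, 5))      # [20.0, 16.0, 12.8, 10.2, 8.2]
# Dense pool: the same initial rate barely dents it; the rate hums along flat.
print(patch_rate_series(100000, 0.0002, 5))
```

Both pools start at about 20 patches per period; only the sparse one decays over the observation window, which is the signal Geer proposes looking for.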

I may know little about coding software but I am available for subcontracting at $175.00 per hour. /


Boo • April 16, 2015 7:51 PM

I'm consulting at $350.00/hour. Business is sparse and the clients are dense. Nothing is secure. Free advice: Don't patch anything for 100 years.

Listen to this, and I'll tell you 'bout the heartache
I'll tell you 'bout the heartache and the loss of God
I'll tell you 'bout the hopeless night
The meager food for souls forgot
I'll tell you 'bout the maiden with raw iron soul

I'll tell you this
No eternal reward will forgive us now for wasting the dawn

Hang On http://www.ndsu.edu/pubweb/~cinichol/ChagallPromenade.jpg

i-Mammon • April 17, 2015 4:49 AM

I'm consulting at $700.00/hour, and I barely know how to switch my computer on.

John Pepper • April 17, 2015 12:16 PM

@65535

The wiki article states that MS originated the idea and principle, but it is the basic skeleton of a model which evolved from a combination of the 'full disclosure' movement and various security vendor products in the late 90s and early 2000s. Microsoft was one of the most active companies in that, typically as the 'adversary'. DREAD explicitly, or some variant thereof, remains in place for many application security practitioners.

I would guess they used their customers as beta 1 testers and beta 2 to 1000000 testers to identify bugs. In other words their R&D and “quality control” was left to their customers.
Their attack surface is very high so MS might use the 'dense-customer' method to discover their bugs [this saves money. The 'dense-customer' does the work and bears the financial result].

To be fair, the entire field of 'application security' was just being formed back then, and Microsoft was the primary target for security researchers. There was no Google, no Facebook, and Linux (even Macs) had a very low percentage of market share. Market share does matter for security researchers who want their research to make the front pages.

Initially, Microsoft, like many corporations and organizations, was very resistant to this process of other people finding security vulnerabilities in their product. Organizations like the l0pht and many others went on a media blitz offensive to change their minds. The initial attitude was one of avoidance, denial, and counter-accusations.

But, after a few years, Microsoft got serious about security and turned around. They invested heavily into security, including in hiring security researchers. They re-engineered how they did things and learned a lot, as well as contributed a lot.

Their hard-hearted attitude melted, and both phases were useful. As an adversary, they helped strengthen the field: their denials became a challenge, and because their stance was weak, strong news coverage was guaranteed, not just because everyone ran their software but also because the argument was clear that they were in the wrong. Every new security vulnerability proved this. Their turnaround was even more useful. They developed processes and produced results which not only helped secure their software - and with it all the users who ran it, as well as their respective organizations and nations - but also bled into the rest of the industry.

I do not consider them the best at security by any means, and not even among the big firms. Right now, Google has more imprimatur there among the big firms. But without Microsoft first treading that path, Google security would not be where it is today. I am sure Google has directly benefited from employees with work experience from there, and indirectly benefited from their efforts in many ways.

This is very pertinent to the discussion at hand, because we already know well that fixing bugs matters. Pretending that this is not a foregone conclusion, because of the value to nations and their intelligence services, is foolish and dishonest.

Since I am giving out kudos, I will emphasize that the real applause belongs to those very security researchers who were finding those security vulnerabilities and pushing companies to adapt to their need to do so. It was very evident that this was the right course to take, and diversion from this course today would be extremely detrimental to the security of systems everywhere.

John Pepper • April 17, 2015 12:45 PM

@65535

Other points.

I once read where Windows 2000 Pro was shown to work with about 480,000 different applications [please don’t hold me to those numbers because it was a long time ago]. Also, I don’t know exactly out of the 0.5 million applications usable on Windows 2000 Pro which ones were viruses or root kits.

One problem is that any security vulnerability of critical impact could potentially be intentional, and thus a plausibly deniable backdoor. And it could come from anywhere in the world, not just the US Government or a rogue developer. Companies hire people from around the world, and beyond that, working for a software company is vastly different than working for the FBI or CIA. Software developers and management did not sign up for that. They could easily be paid by a foreign country to put in backdoors, maybe even indirectly. They could be told that their modification was for their favorite political cause, or for some innocuous group or individual claiming to be just interested in profit. Or some other innocuous pitch that would appeal to the target's preferences.

This is yet another reason why governments should do all they can to report found security vulnerabilities. This is especially pertinent with security vulnerabilities that are specifically 'hard to find'. That they appear to be innocent mistakes means nothing.

You may think this is something "they" are very aware of, but that is not going to be the case. There is severe compartmentalization, which is one component of it. But a major component, one which applies to all fields, is that it is not their area to think in this way. They are not trained there and they do not have experience there. It may seem like common sense, but as in any field, these things are not common sense. Part of the reason is that while you can give someone correct information about another field, even one related to their own, they do not know how to properly weigh that information. So it gets lost in intentional and unintentional disinformation. (The vast majority of disinformation is, of course, unintentional. Consciously, at least.)

For instance, someone who is a brain surgery specialist is not thereby a heart surgery specialist. A network security person is not also an application security person. Outsiders may view them both as 'knowing everything' about computer security. They may themselves overestimate their own knowledge of related fields they have no experience in. But experience and training matter.

While NSA code auditors and defense-oriented auditors may have some grasp of these problems, they are not even knowledgeable about exact attribution. They may be generally aware of the possibility, at best. Much more so, technical organizations focused on offensive projects should not be expected to be experts at defensive requirements. And while it is sage advice to have many counselors of many viewpoints, it is sage advice for the very reason that it is so rarely followed!

Again, not my area, and not much interest for me specifically. Generally, I deal with these sorts of problems in my own area, that is, on matters I give a f** about. Technically, in general, I suppose my conclusion is close to 'they should focus on defense, not offense'. One problem is that they cannot process wide-sweep dragnet information, yet they continue to affirm that they can by continuing these programs. That affirmation builds ever-deeper confidence in a dangerous delusion.

But building deeper confidence into a dangerous delusion is just what nations and peoples do.

And there is a bigger meaning to that, eventually.


Nick P • April 17, 2015 1:00 PM

@ John Pepper
12:16PM

Excellent analysis of Microsoft's transformation. Only a minor gripe: DREAD is so dead I haven't heard the word in years. Microsoft ditched it many years ago. It just led to arguments over specifics of the categorization. The current approach is to collect lists of bugs, prioritize severe ones, and fix as many as possible given a certain budget. This was a good change given one could often fix the code as fast as doing a full DREAD analysis of the bug.

Boo • April 17, 2015 1:20 PM

The story of the strip was of two shops. The very first story told the story of how all of the shops in a terrace of shops closed up, one by one, leaving Bloggs & Son General Store, a popular small corner shop that seemingly sold anything and everything, owned by Mr. Bloggs, a kindly old man wearing the traditional white coat, and his son Ted. Mr. Superstore, a bowler-hatted long-nosed man one day walked into Bloggs' shop and promptly decided to build a new superstore on the site of the demolished shops. https://en.wikipedia.org/wiki/Store_Wars

The Bloggs Cloud on Amazon has all the software and the superstores are doomed. The people can get hardware from the clouds too. The superstore is like the Flintstones and we're in the Jetsons Age. Navy has electric boats and airmail operators. Build electric ultralights to cut noise down.

Boo • April 17, 2015 1:37 PM

Software is balloons!

"The high point of this period was during 1910 and 1911 when the Wright and Curtiss companies fielded factory exhibition teams with aircraft flown by some of the most famous pilots in early American aviation history. The profits expected to have been derived from exhibition flights attracted a large number of individuals. The Wright, Curtiss and Moisant companies did a very large exhibition business. In 1911, The Curtiss Company covered 210 places with 541 days of flying. In 1911 the major teams appeared in 282 towns and flew 814 times. There was definitely prize money to be earned at these events. In the Boston-Harvard meet of September, Grahame White collected $29,600 in prizes; Johnstone and Brookins of the Wright Team won $39,250; Curtiss fliers, $16,500. The meet had receipts of $126,000 from 67,241 paid admissions. A pilot could earn as much as $10,000 for two or three flights of 10 or 15 minutes duration--a great deal of money in 1910 dollars" http://celticowboy.com/The%20Dietz%20Paraplane.htm

Have more air expos. Spend less time concerned if the code is secure. NSA can break all the codes. Empty superstores can be balloonports.

John PepperApril 17, 2015 2:10 PM

@Nick P

"Excellent analysis of Microsoft's transformation. Only a minor gripe: DREAD is so dead I haven't heard the word in years. Microsoft ditched it many years ago. It just led to arguments over specifics of the categorization. The current approach is to collect lists of bugs, prioritize severe ones, and fix as many as possible given a certain budget. This was a good change given one could often fix the code as fast as doing a full DREAD analysis of the bug."

Thanks!

The article does state that MS ditched it, but that sort of model was already well in use by vendors. Microsoft simply clarified it and branded it. A similar model remains in use in both patch management systems and application security systems (be they internal organizations, consultants, or vendor products). Many of these groups refer to the model by name and use it specifically and explicitly, even though MS itself abandoned it for a more accurate successor.

eg, https://www.owasp.org/index.php/Threat_Risk_Modeling

When they make those lists, they use some form of modelling not too dissimilar to the DREAD model. It does not take much work and typically can be performed initially "at a glance" by analysts or developers and product managers. From there, when it gets to bug fixing, developers and product managers may argue that a security vulnerability is rated too high or too low.

Usually, they just try to fix the bug, but sometimes it is worth their while to argue down a criticality rating, as higher criticality ratings demand more resources and a shorter window for fixing (which, ultimately, also means more resources).
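The kind of lightweight, at-a-glance rating described above can be sketched in a few lines. This is an illustrative sketch only: the five DREAD factors are real, but the 1-10 scale, the equal weighting, and the triage thresholds below are invented assumptions, not Microsoft's or OWASP's actual numbers.

```python
# Hypothetical DREAD-style scorer. Factor names follow the DREAD acronym;
# the scale, weighting, and thresholds are invented for illustration.

def dread_score(damage, reproducibility, exploitability,
                affected_users, discoverability):
    """Rate each factor 1 (low) to 10 (high); the score is the mean."""
    factors = [damage, reproducibility, exploitability,
               affected_users, discoverability]
    if not all(1 <= f <= 10 for f in factors):
        raise ValueError("each factor must be rated 1-10")
    return sum(factors) / len(factors)

def triage(score):
    """Map a score onto a coarse priority bucket (thresholds assumed)."""
    if score >= 8:
        return "critical"
    if score >= 5:
        return "high"
    return "low"

# An easy-to-find, easy-to-reproduce bug with moderate damage:
score = dread_score(damage=6, reproducibility=9, exploitability=8,
                    affected_users=7, discoverability=10)
print(score, triage(score))  # 8.0 critical
```

In practice the argument moves to the individual ratings, exactly as the comment notes: talk exploitability down from 8 to 4 and the score drops to 7.2, moving the bug out of the "critical" bucket.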

There are purely automated security-vulnerability-finding systems, but none of them are strong enough to take their results at face value. They have high false positive rates in general. This is much more true of code-review scanning systems than of dynamic "black box" scanners, but it remains true of the latter category as well.

Whether someone is a consultant, an internal analyst, or a solutions vendor, they all have a vested interest in accuracy, so they are well served by a systematic approach. The more seasoned professionals are scrupulous about rating vulnerabilities accurately; failing to do so hurts their credibility with the people they report findings to.


JohnPApril 17, 2015 4:11 PM

For 5 yrs, I worked in a fully independently certified CMMI-5 environment. Our bug rates were nothing compared to any other non-trivial software written in the world. I don't remember the SLOC - but it was more than 500K. Oh - and this was multi-threaded code.

When I started with the team, their rate was 4 bugs/yr per programmer. During code reviews, spelling mistakes were "minor" issues that had to be corrected. There was an expectation of perfection.

When I left the team in mid-1994, we'd improved the process and our skills to having 1 bug discovered every other year. There was still an expectation of perfection and the later levels of reviewers and testers hated us, since they wouldn't find anything for all their effort in some years.

In my 5 years coding there, I caused 1 major bug that was discovered by the client. It was not Sev-1. All this success is because we all worked as a team, together, and didn't allow bugs to leave our team. It was a matter of pride for everyone on the team.

I recall when the higher-ups declared no Sev-1 bugs remained in the software, and we believed it. Sev-1 means "loss of vehicle and/or crew." This was due to statistical analysis, which is a cornerstone of CMMI-5 software development teams. We used statistics for everything, even with small samples, where you can't really use statistics. Oddly, ours did show patterns that helped us improve.

So - jump forward to 2010 and a report is released about the life of that software project with all the bugs categorized. The most interesting aspect of the report to me was that in 1994, when the software was declared Sev-1 free, over 100 Sev-1 bugs still existed. They knew this because every bug discovered in the software got a root-cause analysis, created by the team which introduced the bug, so that steps to prevent similar bugs would be added to the process. Usually, someone would also be tasked to audit existing code for similar issues after a discovery. Most of the time, those audits confirmed a single point of failure, but sometimes they uncovered a systematic issue.
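One standard small-sample technique behind this kind of statistical claim is capture-recapture (the Lincoln-Petersen estimator): if two independent reviews of the same code overlap heavily, most defects have probably been found; if they barely overlap, many remain. A sketch, with invented counts (nothing here comes from the project described above):

```python
# Lincoln-Petersen capture-recapture estimate of total defects, based on
# two independent reviews of the same code. All counts are invented.

def estimate_total_defects(found_by_a, found_by_b, found_by_both):
    """Estimate total defects N ~= (A * B) / overlap."""
    if found_by_both == 0:
        raise ValueError("no overlap: samples too small to estimate")
    return (found_by_a * found_by_b) / found_by_both

# Review A found 20 defects, review B found 15, and 10 were found by both.
total = estimate_total_defects(20, 15, 10)   # ~30 defects in total
found = 20 + 15 - 10                         # 25 distinct defects found
print(total - found)  # 5.0 estimated defects still latent
```

The estimator assumes the two reviews are independent and every defect is equally catchable, which is rarely exactly true, so it gives a rough lower-bound flavor rather than a guarantee.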

Since that time, I've worked at a number of normal SW companies. There is ZERO comparison in bug rates. At one place, some developers introduced 10 bugs per coding day - that QA caught. Code would be sent back; they'd fix one thing and break 3 more. It was really sad. It really comes down to personal responsibility, IMHO. If the software had meaning for the programmer, they cared more and were much more conscientious in their coding. People creating back-office DB apps for office drones wrote the worst code I've ever seen - full of bugs.

So - if a "bug" == "vulnerability" (it often does), then I believe there are 1K-10K vulnerabilities in most OSes.
Sorry for the post length, but the background seemed important.
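The 1K-10K order of magnitude is consistent with a simple defect-density extrapolation. The numbers below are illustrative assumptions (not measurements from this thread): suppose 0.1 to 1 exploitable bug per thousand lines of code in a 10-million-line OS.

```python
# Back-of-the-envelope vulnerability estimate for an OS-sized codebase.
# Both the codebase size and the density range are assumed for illustration.

ksloc = 10_000                        # 10 million lines = 10,000 KSLOC
low_density, high_density = 0.1, 1.0  # exploitable bugs per KSLOC (assumed)

low_estimate = int(ksloc * low_density)
high_estimate = int(ksloc * high_density)
print(f"{low_estimate:,} to {high_estimate:,} vulnerabilities")
# -> 1,000 to 10,000 vulnerabilities
```

The point is not the specific density figures but how fast the range widens with codebase size: the same assumed densities applied to a 500 KSLOC project give 50 to 500.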

John PepperApril 17, 2015 7:04 PM

@JohnP

"So - if a 'bug' == 'vulnerability' (it often does), then I believe there are 1K-10K vulnerabilities in most OSes."

Vulnerabilities are very much software bugs, and are classified and treated as such in bug tracking systems. They tend to be of much higher criticality than ordinary bugs, but this is not always the case.

For instance, in software that powers mission-critical hardware, where a bug may mean loss of human life, that bug is obviously very critical. If the error condition the bug raises can be triggered by an outside human, that already very critical bug becomes even more critical, depending on such factors as context: would someone plausibly ever want to do that? If so, it is even more critical; if not, less so. And so on.

The real issue underlying this article is that software vulnerabilities have become a form of fungible asset, of considerable value to parties that have considerable resources and considerable demand for them.

They use them to obtain high-value information, and this high-value information, in turn, gives them additional confidence with higher-ups in power and authority. That additional confidence translates into more power and authority for the responsible parties themselves.

[Sidenote: "Power" as often used in these contexts means *abuse of power*. But abuse of power is just a different aspect, and an error condition of its own. Do people find substance in having power and authority, and in maintaining or even growing it? Absolutely they do. The problem with abuse of power, like taking meth for happiness, is that it reduces the longevity of that power and authority; e.g., sooner or later you are going to get caught, go to jail, etc. In the meantime, abuse of power and authority is like printing your own money. It throws the entire system to hell.]

The problem with the value of vulnerabilities is that they have two major possible uses. One is defensive; the other is offensive. Largely, these purposes must be "either or", not "and". Which is an ironic and interesting situation.


On 'code written yesterday' versus 'code written today', I am not entirely sure of the patterns. A high density of normal bugs does tend to track a corresponding density of security bugs, however. But the two systems of detection have matured at different paces.

One thing is probably sure: you have a much larger code landscape, due at least to memory availability skyrocketing. Which also means you have a much more complex system, grown to fit that increased landscape. So there you have one major factor for why there will be more bugs in today's code: the complexity has significantly increased.

Nick PApril 17, 2015 7:05 PM

@ John Pepper

I see what you mean. I'm all for developing systematic approaches to things like that. There's at least academic research, including alpha testing in companies, in that area. Something might come of it at some point.

@ JohnP

Nice anecdote. So, what was the app and is the report available for download? I often push Correct by Construction development but with few references available. Even Praxis's are mostly gone these days for some reason. The more examples I can show the new generation, the merrier.

65535April 18, 2015 12:35 AM

@ John Pepper

Fair points.

Some parts of my comment were tongue-in-cheek. I do agree that Microsoft did push toward more secure products. These products are used day in and day out by tens of millions of people – which says something positive about the company.

"To be fair, the entire field of 'application security' was just being formed 'back then', and Microsoft was the primary target for security researchers… after a few years, Microsoft got serious about security and turned around. They invested heavily into security, including in hiring security researchers. They re-engineered how they did things and learned a lot, as well as contributed a lot."

I agree.

I think it was around the introduction of AD, their version of the LDAP directory, that they became security aware. Their products were so successful that they became the de facto target for hackers of all kinds.


"…any security vulnerability of a critical impact could potentially be intentional and so a plausibly deniable backdoor. And it could come from anywhere in the world, not just the US Government or a rogue developer. Companies hire people from around the world, and beyond that, working for a software company is vastly different then working for the FBI or CIA. Software developers and management did not sign up for that..."

That is a good point.

The gigantic power of the US government cannot be overcome by one single company. Further, I suspect many people including developers really did not know the enormous powers of the NSA/DEA/CIA until the recent disclosures. The developers were not prepared to properly harden their code.

On the other hand, it doesn’t help to get in bed with the Monster. For example, Skype should still be a private platform with little or no government back-doors. By happenstance, cough... it is a pipeline to the NSA.

My comment on the large number of applications that Microsoft OSes could accommodate is a testament to their willingness to help developers, and ultimately their customers, work more efficiently. Which is good. I don’t believe any other large OS manufacturer achieved that level of adaptability.

Unfortunately, this is a double-edged sword. Any of those hundreds of thousands of applications could contain a number of bugs or outright back-doors.

In short, Microsoft has indeed worked toward better security with some success – but they still have a long way to go.

JohnPApril 18, 2015 7:22 AM

If it isn't clear, I was a rocket scientist. It is always fun to say that. :) I wrote GN&C software for the space shuttles. At a later job, I also wrote software used in every NASA mission control center for shuttle and space station operations around the world, and used onboard the shuttle and ISS.

BTW - just re-skimmed the paper. My memory may have been off slightly - not much.

Loral always had internal design/test/code reviews before sending the "work product" to IBM for their review process. Not certain that Jim would have known that. I was working at the subcontractor ('89-'94) - it was actually 3 different companies in that period: Ford Aerospace, Loral, Lockheed/Martin. Nothing changed; we got a sticker with the new company name for our badges and lost the lease-car benefit when Ford sold us.

The main thing we learned was if a code review found 1 bug, then it was reasonably likely that the code had another bug missed by the reviewers. Finding a bug at review time was unusual. Each SW dev worked extremely hard to have perfect reviews and checked everything BEFORE putting any review package out for peer review. It is hard to explain this - each reviewer actually, carefully, reviewed the code and signed off on it BEFORE the review took place with notes and actions provided to the independent moderator BEFORE the review occurred. Personal responsibility for everyone involved was high. I've never seen that level of commitment in any job since. I was more careful with that code (both reviewing and coding) than I am doing any other activity since. My signature on a document was important to me.

When I moved to a commercial job, my pay rose 25% and I was still "cheap."

John PepperApril 18, 2015 12:30 PM

@JohnP


"If it isn't clear, I was a rocket scientist. It is always fun to say that. :)"

I figured it was some kind of 'mission critical', government-sponsored app used for important vehicles. I was thinking planes and humvees when noting, in my response to you, how a bug could cause injury or loss of life.

I did not bother to look up the terms you used, because you made the context clear. (And as a principle, I generally do not pry.)

An astronaut-themed song has been on my music list lately: Weezer, 'Back to the Shack' - https://www.youtube.com/watch?v=3H89GXU9OeU

"I've been paranoid for only about 8 yrs now, though that first job did teach me to write defensive code that wouldn't crash. It was mandatory there."

Job experience does that, especially when the role you play is very serious. Hard to ever really break out of it. Easy to slip back into it.

"Personal responsibility for everyone involved was high. I've never seen that level of commitment in any job since. I was more careful with that code (both reviewing and coding) than I am doing any other activity since. My signature on a document was important to me."


I do not know much about ordinary qa tools and processes, so can not comment much in those regards.

I would be surprised if they are anywhere near as sophisticated as security tools and processes have become. The market demand is simply that strong in this area, and has been for years. The breadth of demand is a component of that strength of pull.

Global talent is pooled, and there are extremely strong incentives for talent in this field. Pay is easily six figures for near-entry-level jobs.

Every piece of software, including that which powers hardware, has strong security concerns.

IBM does not strike me as very good, at least in the area of vulnerability-testing products. They have a strong market share but also a very strong resistance to the adaptation required to remain competitive. So, like HP, very high false positive rates. And, like HP, they are not very desperate or motivated to adapt quickly and strongly to customer demands. This does not mean I am not rooting for both companies; I am. Especially because their both remaining competitive means that better products keep being made.


John PepperApril 18, 2015 1:11 PM

@65535

"Some parts of my comment were Tongue-in-cheek."

I take my comedy very seriously.

Bill Murray and John Cleese are my ideals.

Though I can take a joke way, way too far.

"I think around the introduction of AD or their version of the LDAP directory they became security aware. There products were so successful that they became the de facto target for hackers of all kinds."

There is not much value in finding vulnerabilities in applications which do not matter. They were also early in the market and retained legacy code. Google products, by contrast, were mostly written from the ground up with security in mind. And they were able to hire people who actually had years of experience by then, because the field had existed for a few years.

(Yes, NSA has been doing security code audits for many years, and you can find Air Force papers back to the early seventies on code security. But, that was a very small, specialized pool.)

"The gigantic power of the US government cannot be overcome by one single company. Further, I suspect many people including developers really did not know the enormous powers of the NSA/DEA/CIA until the recent disclosures. The developers were not prepared to properly harden their code. On the other hand is doesn’t help to get in bed with the Monster. For example Skype should still be a private platform with little or no government back-doors. By happenstance, cough... it is a pipeline to the NSA."

Well, the US is not the only player in the game. It is all ultimately about the economics of information. If there is demand, and if there are capabilities, there will be a game. It can also be said to be about proper threat analysis.

Skype was always a MITM affair. It was never stated to be end-to-end encrypted. For most people, that does not matter much. For others it did, and they were cognizant of it.

Unless the technical details made the information sufficiently unknown to them by obscurity.

Probably developers are much more concerned about their personal risk. If a severe security vulnerability is found in code and makes the front pages, that is a bad situation for a person to find themselves in. If government (note no "us" preface there) finds and uses a security vulnerability in their code, likely no one will ever hear of it.

"Unfortunately, this is a double edge sword. In any of those hundreds of thousands of applications could be a number of bugs or outright back-doors."

'Tis true.

I was working on a paper the other night, however, on the modern landscape, and ended up realizing in a lot of ways there are more reasons today for people not to 'give a f**' than there were five or ten years ago.

Adware is a bitch. But the big hacks are happening at the server level. Mass transfer of privileged information in one pull at a time. Personally targeted attacks are typically impersonal, in that the attacker is not interested in 'you' but in your organization.

Many of those who want to attack someone just do it against their static online presence.

Smartphones are the major personal-system vector, and they are deeply secured. There are attacks possible, and many applications are poorly tested; I am not saying otherwise. But getting to root is much more difficult than on a Windows or Linux or Mac system. The telco is in strong control of which apps can be put on there and obtain high privileges. Government strongly regulates telco security because telcos are the infrastructure of all communications, including its own.

This is not to say that strong attack vectors, even maliciously spreading viral attacks, are impossible. They definitely are possible. And wifi, like the way cell-network connections are made, is weak - with wifi being significantly weaker. Not everyone can build their own stingray.

But it is not that expensive.

Ole JuulApril 18, 2015 9:23 PM

I think this research is useful, but let's not forget that a vulnerability is only harmful if you allow others to exploit it. Like many who administer their own computers, I find that using a system of permissions is very useful in keeping security high. Those that tout the constant upgrades and patches are more likely to be those that insist on doing things which are inherently unsafe. I realize that their reason for doing so might range from greedy software vendors to lazy or ignorant consumers, (all legitimate reasons) but I point this out only to show that vulnerability has a lot to do with personal beliefs and that the actual code is a smaller problem.

65535April 19, 2015 9:02 PM

@ John Pepper and/or JohnP

“Not much value in finding vulnerabilities in applications which do not matter. They were also early in the market, and retained legacy code. Google products were mostly written from the ground up with security in mind, as a contrast. And they were able to hire people who actually had years of experience by then.” –John Pepper

Point well taken. It depends on what you call things that "do not matter". The widespread use of buggy code which leaks personal information can matter in totality.

“NSA has been doing security code audits for many years, and you can find Air Force papers back to the early seventies on code security. But, that was a very small, specialized pool”

Yes, there is a huge difference between coding mission-critical rocket projects - tons of fuel, lives at stake - and commercial software. In Microsoft’s case they usually look at profits, shareholders, and the number of customer complaints.

For example, MS Security Essentials [now named Defender in Win 8.1] has removed or castrated its heuristic AV module due to customer complaints of false positives. In my humble opinion this is probably a corporate move to reduce consumer complaints and increase sales – at the expense of security. That is an example of the huge difference between mission-critical coding and civilian coding.

“Adware is a bitch. But the big hacks are happening at the server level. Mass transfer of privileged information in one pull at a time.”

Nail hit squarely on the head. Server side stuff is much more prevalent than 15 years ago and much more dangerous.

“Skype was always a MITM affair. It was not stated as being end to end encryption. For most people, that does not matter much. For others it did.”

This is a case in point where a hybrid P2P/server communications system turned into a pure server platform – in which MS surely does transmit data to the government in large batches.

“Smartphones are the major personal system vector, and they are deeply secured. There are attacks possible, and many applications are poorly tested… getting to root is much more difficult then with a Windows system or Linux or Mac system. The telco is in strong control of what apps can be put on there and obtain high privileges. Government strongly regulates telco security… not to say strong attack vectors, even for malicious spreading viral attacks is impossible. It definitely is. And wifi, like the way the cell network connections are made, are weak. With wifi being significantly weaker. Not everyone can build their own stingray.”

Again, this is an example of significant server-side systems. Although it can be argued that these are more secure, they also represent more of a concentration of computing resources – which can be hacked or backdoored by the government. Because of the concentration of IT resources, governments and other malicious actors could bug, manipulate, or insert malware much more easily.

As for Stingrays only being in the hands of government officials – that will change.

I would estimate that in a few years private investigators and the like will have those devices, due to the “trickle-down” effect of sophisticated electronic devices.

This “trickle-down” effect will have negative privacy consequences at best and ‘parallel construction’ consequences at worst [well, extortion and theft of trade secrets could be worse - as could assassination].

The elimination of bugs and attack vectors at the consumer electronics level would be a benefit to our economy and our country.

[Please excuse the grammar and other errors]

RARApril 20, 2015 1:30 AM

@Anura "I can make small programs in C and medium size programs in C# that have no vulnerabilities." & following on from @Beverly.

Even if the code you wrote is theoretically secure (is there such a thing?), I suspect you have limited or no control over the compiler or libraries. On a real computer, you also have limited or no control over the other programs, operating system, firmware, and hardware (and even people). All of these things interact, and because the complexity is so high you cannot predict with certainty that your code will not result in a vulnerability.

You could of course argue that "my code is secure" and it is a problem with the other parts of the system. Your argument would not however remove the vulnerability which had come into being as the result of the introduction of your code.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of IBM Resilient.