Comments

Clive RobinsonJanuary 2, 2009 11:14 AM

I just loved their comment,

"Our objective is to impose some science on the often messy and subjective field of software security. We figure we'll get about as close to science as Anthropology ever does."

And then they launch off into the thorny issue of metrics (and the lack of real ones, which also bugs me 8(

However the first two points (9 & 8) tie in with their comments about the size of the SSGs relative to the development teams. Realistically, how can you expect that small a fraction to be visible as anything other than at the extremes? They are so outnumbered they have to shout and jump up and down to be a blip on the radar, let alone be integral to the process...

The use they propose (point 7) for "web application firewalls" would be reasonable if you had the confidence to believe they are going to work. However, the logic behind their use is such that a lot of developers are going to want to put it in the code themselves...

Their point (6) about involving the QA department sounds like a good idea, but it's not. QA departments run on frameworks and checklists, which are a "get out" system. Security, although it has obvious parallels, is a more complex problem and needs a more in-depth methodology, which in turn needs suitable metrics (that don't currently exist).

Which brings you on to their audit point (5). Security is not an audit process, in the same way real QA is not an audit process; both need to be designed in from the start if either is to achieve anything more than a tick on a checklist. The SSG-as-"a resource" view is not going to achieve this (a velvet glove, however lacy and delicate, needs an iron fist inside it to provide the strength required for the job).

Their point (4) about architecture analysis starts off on very shaky ground by ignoring the fundamental issue with security, which is what lies under the code (the C standard and the OS model). This is possibly because the majority of attacks these days are against apps, but that is a fad of the times, apps simply being the next lowest layer of fruit on the tree.

Their point (3) about "attacker perspective" is an oh-no moment. Simply put, an attacker's perspective changes with what they are trying to achieve, and the advice "think like a bad guy" is about as useful as "think like a rich man". Being able to think "hinky" is harder than spotting "hinky" behaviour, and as Bruce knows this is very hard and appears to be something you develop best when you are very young. It is a state of mind few possess, and I do not envisage it being something that can be easily taught or learned (and what software bod has the time anyway?). Their "learn how to build code properly" is going to be a non-starter in most code shops. Experience has shown that the functional part of the code is usually all that gets the time and budget.

Their point (2) about training really gets my goat. Most practitioners do not have the time to take out for in-depth training. And universities etc. in the main teach what the industry wants, which is people adept at handling the tools. Long-term readers of this blog will know I bang on about this over and over and... so I won't do it this time.

Their point (1) about pen testing rather misses the why. Pen testing usually tests for "known knowns" but not for "known unknowns" and "unknown unknowns". So it is really part of regression testing.

Their point (0) about fuzz testing is an "oh dear, they don't get it" moment, confirmed by their comment,

"Wow. Who would have guessed that reliability trumps security?"

Simply put, traditional regression testing is deterministic: you are looking for a "known fault" based on past experience, and due to the complexity of software this covers only a very, very tiny fraction of the fault set.

Fuzz testing is probabilistic testing; it's like shaking the tree to see what falls out. It does not find all faults, and will not find some of the known faults, or sometimes any at all.

All it really tests is how the code responds to "pink noise" input, that is, input that is not totally random but shaped (usually to get further into the code faster). The problem with "pink noise" testing is that although it is (apparently) more efficient than "white noise" testing, it leaves a large number of test cases out. Does this matter? Well, it depends on your software.

What fuzz testing finds is a symptom of poor input/exception handling in the code. Currently it is used primarily for testing what happens at the input to the code. It is very, very unusual to see it used to test what happens when resources, such as other parts of the code, operating system subsystems, I/O etc., say "wait" or "not available" after the app is up and running.

As I noted earlier, the "attacker's perspective" is based on what gets the job done, usually with the minimum of required effort (lowest-hanging fruit). At some point in the (near?) future, input-validation attacks will become more trouble than they are worth and the attackers will look elsewhere. Finding a DoS attack against a back-end server that causes a high input level to exercise bad exception handling on a middleware server is definitely well within existing attack capabilities, and is going to happen (if it has not already).

As a bit of research, their work will make interesting "coffee table" reading, but it does not go anywhere near deep enough to do any really interesting digging. So it lives up to their initial expectations,

"We figure we'll get about as close to science as Anthropology ever does."

As I was repeatedly told when wearing the green,

"Plan to fail and you will succeed in your objective".

I guess they have lived up to their expectations.

Due to being out and about I have banged this in from the phone, so there may be some rough bits and typo-type errors. My apologies if they spoil the read, but I hope the general message rises above them.

Happy new year to all those who are no longer waiting for theirs 8)

Knowler LongcloakJanuary 2, 2009 11:16 AM

Excellent article. I look forward to reading the full analysis of the captured data.

Surprise #5 (successful programs evangelize instead of audit) seems very natural to me. If you are one of the software security people, you evangelize software security all the time.

You can't make other programmers/architects do what you want, but you can share your passion for security, and help them "catch the software security bug".

It may be that software security people naturally want to help other people, and have the type of personality that would rather teach than "be an overlord".

Knowler LongcloakJanuary 2, 2009 11:50 AM

@Clive Robinson

Your criticisms seem to me to be a bit off. It seems you are criticizing as though this were a "this is how you do Software Security" article.

If I am not mistaken, this article is just a compilation of what companies with a "successful" Software Security initiative are actually doing. And the authors (who do write books on "how to do software security") were surprised at some of the findings.

This is an article with preliminary data, so the full analysis of "why this is the case" is not in it.

Also, I found your criticism about surprise #4 a bit off the mark. Software Security (according to the article authors' definition) is about building security into the applications you build. It is not about the entire computer/information security problem. So why would an article about Software Security address the OS model or the C standard at all, unless your applications are written in a C-based environment?

RandallJanuary 2, 2009 12:26 PM

Fascinating that architectural analysis was rated useful yet ridiculously painful. Hard for the security group to be in a position of saying "no" a lot, I guess.

xd0sJanuary 2, 2009 1:46 PM

"The thing is, architectural risk analysis often uncovers staggeringly important problems. We discovered that even though important real problems were found using architectural analysis, software groups still found the process painful enough that it didn't become a regular part of their security efforts."

In my (somewhat limited) experience, this is true, but more a function of the economics of it than anything else.

If your team is built for the purpose of architectural analysis, you can do your job well, find flaws, and cause delays or rework (granted, we hope, early enough not to be highly costly), or you can find no flaws, and the project proceeds as planned.

The ability to sell this as a "value added" function can be daunting. The devs themselves, however, are already highly leveraged to get speed to market, and even if they have the exposure and skill to do a real architecture analysis of both the infrastructure and the code, they often don't have the time or direction from their management to do it.

That leads me back to the comments on education. From a cost perspective, education promises better results because it allows architectural flaws to be avoided rather than found and corrected during design or after the fact. I don't know for sure that this actually holds true, but it would seem to hold more overall promise from a cost-effectiveness perspective, enough to explain why one is found to be more prevalent than the other.

Clive RobinsonJanuary 2, 2009 4:05 PM

@ Knowler Longcloak,

"So why would an article about Software Security address the OS model at all or the C standard unless your applications are written in a C based environment?"

Well,

I take it you would agree that security is about the whole, and how the individual parts interact?

Or, as Bruce and others put it, it's the strength of the system, not of the individual parts (weakest link in the chain, etc.).

But importantly, individually secure parts can interact in an insecure manner (they are not always atomic in operation, or secure in all call orders).

As you said, "article about Software Security": the OS the application sits on is also software, as are the drivers etc.

The application usually gets its access to the OS via the standard APIs of the C and other libraries (not the kernel directly), which in turn access the kernel via other APIs. Likewise the kernel accesses the drivers, and thus the hardware, via yet other APIs. Thus the "software stack", like the "network stack", has the application as the uppermost level and the hardware as the lowest, with each level defined by (semi-)standard APIs (otherwise software re-use would not be possible).

Most OSs have been designed with the C/Unix model in mind, and are thus built around the C and POSIX standards, and thus inherit not only the strengths but the weaknesses as well.

Some of these weaknesses are direct (i.e. specific calls are known to be insecure but need to be included for legacy reasons, gets() being the classic example), and those parts of the API are (or should be) unused by newer programming languages.

However, some weaknesses are due to interaction between calls, or the order they are called in, so an application programmer needs to be acutely aware of this to avoid inadvertently opening up security holes.

Further, any existing problem below the API the application is designed to work with will present an opportunity for an attacker to attack the application from below (the "soft underbelly" approach). A programmer who is aware of this can actually take precautions in the application code to detect it and raise an exception (pushing nonces onto stacks as guards, etc.).

As I noted, the current attacks are against input validation and exception handling on the input or top side of the application. As programmers resolve the issues in this area, attackers will move to different attack vectors, and at some point underbelly or down-side attacks will be tried. As I said, some will be DoS on resources, and some will come upwards through the OS, as some current DLL attacks have the potential to do.

So anything in the OS from the past that is still there is going to provide a potential attack vector.



Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc..