I just loved their comment,
"Our objective is to impose some science on the often messy and subjective field of software security. We figure we'll get about as close to science as Anthropology ever does."
And then they launch off into the thorny issue of metrics (and the lack of real ones, which also bugs me 8(
However, the first two points (9&8) tie up with their comments about the size of the SSGs with respect to the development teams. Realistically, how can you expect that small a fraction to be visible as anything other than at the extremes? They are so outnumbered they have to shout and jump up and down just to be a blip on the radar, let alone be integral to the process...
The use they propose (point 7) for "web application firewalls" would be reasonable if you had the confidence to believe they are going to work. However, the logic behind their use is such that a lot of developers are going to want to put it in the code themselves...
Their point (6) about involving the QA Dept sounds like a good idea, but it's not. QA Depts run on frameworks and checklists, which are a "get out" system. Security, although it has obvious parallels, is a more complex problem and needs a more in-depth methodology, which in turn needs suitable metrics (which don't currently exist).
Which brings you on to their audit point (5). Security is not an audit process, in the same way real QA is not an audit process; both need to be designed in from the start if either has a hope of achieving anything more than a tick on a checklist. The SSG as "a resource" view is not going to achieve this (a velvet glove, however lacy and delicate, needs an iron fist inside it to provide the strength required for the job).
Their point (4) about architecture analysis starts off on very shaky ground by ignoring the fundamental issue with security, which is what lies under the code (the C standard and the OS model). This is possibly due to the fact that the majority of attacks these days are against apps, but that is a fad of the times, apps being the next lowest fruit on the tree.
Their point (3) about the "attacker perspective" is an oh-no moment. Simply put, an attacker's perspective changes with what they are trying to achieve, and the advice "think like a bad guy" is about as useful as "think like a rich man". Being able to think "hinky" is harder than spotting "hinky" behaviour, and as Bruce knows this is very hard and appears to be something you develop best when you are very young. It is a state of mind few possess, and I do not envisage it being something that can be easily taught or learned (and what software bod has the time anyway?). Their "learn how to build code properly" is an issue that is going to be a non-starter in most code shops. Experience has shown that the functional part of the code is usually...
Their point (2) about training really gets my goat. Most practitioners do not have the time to take out for in-depth training. And universities etc. in the main teach what the industry wants, which is people adept at handling the tools. Long-term readers of this blog will know I bang on about this over and over and... So I won't do it this time.
Their point (1) about pen testing kind of misses the why. Pen testing usually tests for "known knowns" but not for "known unknowns" and "unknown unknowns". So it is really part of regression testing.
Their point (0) about fuzz testing is an "oh dear, they don't get it" moment, confirmed by their comment,
"Wow. Who would have guessed that reliability trumps security?"
Simply put, traditional regression testing is deterministic: you are looking for a "known fault" based on past experience, and due to the complexity of software this covers only a very, very tiny fraction of the fault set.
Fuzz testing is probabilistic testing; it's like shaking the tree to see what falls out. It does not find all faults, and will not find some of the known faults, or sometimes any at all.
All it really tests is how the code responds to "pink noise" input, that is, input that is not totally random but shaped (usually to get further into the code faster). The problem with "pink noise" testing is that although it is (apparently) more efficient than "white noise" testing, it leaves a large number of test cases out. Does this matter? Well, it depends on your software.
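To make the "white noise" versus "pink noise" distinction concrete, here is a minimal sketch in Python (all names are hypothetical, and `parse_header` is a toy stand-in for real code under test, not anything from the report being discussed):

```python
import random

def parse_header(data: bytes) -> str:
    """Toy parser standing in for the code under test (hypothetical)."""
    if not data.startswith(b"MAGIC"):
        return "rejected-early"  # unshaped input almost always dies here
    if len(data) < 7:
        raise ValueError("truncated header")  # the kind of fault fuzzing shakes out
    return "parsed"

def white_noise(n: int) -> bytes:
    # totally random bytes: unbiased, but rarely gets past the magic check
    return bytes(random.randrange(256) for _ in range(n))

def pink_noise(n: int) -> bytes:
    # shaped input: keep a valid prefix, randomise the rest,
    # so the fuzzer gets further into the code faster
    return b"MAGIC" + bytes(random.randrange(256) for _ in range(max(n - 5, 0)))

def fuzz(gen, runs: int = 1000) -> int:
    # count how many inputs trip an unhandled exception
    faults = 0
    for _ in range(runs):
        try:
            parse_header(gen(random.randrange(1, 12)))
        except ValueError:
            faults += 1
    return faults
```

Run both generators and the shaped one shakes out the truncated-header fault almost immediately, while the purely random one practically never reaches it; but note the shaped generator, by construction, can never exercise the early-reject path with anything other than a bad magic, which is the "leaves test cases out" problem.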
Fuzz testing finding faults is a symptom of poor input/exception handling in the code. Currently it is used primarily to test what happens at the input of the code. It is very, very unusual to see it used to test what happens when resources such as other parts of the code, operating system subsystems, or IO etc. say "wait" or "not available" after the app is up and running.
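A sketch of what that neglected kind of fuzzing might look like, again with hypothetical names: instead of shaping the input, you probabilistically make a resource say "not available" at runtime and watch whether the exception handling holds up.

```python
import random

def flaky_backend() -> str:
    """Stand-in for a resource (DB, OS subsystem, IO) that may refuse service."""
    if random.random() < 0.3:
        raise TimeoutError("resource not available")
    return "data"

def fetch_with_retry(attempts: int = 3) -> str:
    # the exception handling under test: retry a few times, then degrade
    # gracefully rather than letting the failure propagate to the caller
    for _ in range(attempts):
        try:
            return flaky_backend()
        except TimeoutError:
            continue
    return "fallback"

def fuzz_resource_layer(runs: int = 500) -> bool:
    # probabilistically exercise the failure path; any exception escaping
    # fetch_with_retry here is exactly the class of fault described above
    for _ in range(runs):
        fetch_with_retry()
    return True
```

If `fetch_with_retry` ever lets the `TimeoutError` out, the harness crashes, which is the middleware-under-load failure mode the next paragraph worries about.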
As I noted earlier, the "attacker's perspective" is based on what gets the job done, usually with the minimum of required effort (lowest-hanging fruit). At some point in the (near?) future input-validation attacks will become more trouble than they are worth and the attackers will look elsewhere. Finding a DoS attack against a back-end server that causes high input levels to exercise bad exception handling on a middleware server is definitely well within existing attack capabilities and is going to happen (if it has not already).
As a bit of research their work will make interesting "coffee table" reading, but it does not go anywhere near deep enough to do any really interesting digging. So it lives up to their initial expectations,
"We figure we'll get about as close to science as Anthropology ever does."
As I was repeatedly told when wearing the green,
"Plan to fail and you will succeed in your objective".
I guess they have lived up to their expectations.
Due to being out and about I have banged this in from the phone, so there may be some rough bits and typo-type errors. My apologies if they spoil the read, but I hope the general message rises above them.
Happy new year to all those who are no longer waiting for theirs 8)