Comments

old guy January 16, 2009 12:47 PM

Good points. The last Top 25 was useful as contractual language for systems procurement; the state of New York, I recall reading, used it that way. There is more value when you look at it from other perspectives.

Anon January 16, 2009 1:23 PM

Eh. I spent the past week defending my (small) employer’s site against cross-site request forgery, session fixation, and password-guessing dictionary attacks, because I read about the first two on Wikipedia and Twitter made news on the third.
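(For anyone who hasn’t met the first of those: a minimal sketch of the synchronizer-token defence against CSRF looks something like the Python below. The names are made up for illustration; this isn’t our actual code.)

    import hmac
    import secrets

    def issue_csrf_token(session):
        # Generate a random token, store it server-side in the session,
        # and embed it in every state-changing form.
        token = secrets.token_hex(32)
        session["csrf_token"] = token
        return token

    def verify_csrf_token(session, submitted_token):
        # Reject the request unless the submitted token matches the stored
        # one; compare in constant time to avoid timing leaks.
        expected = session.get("csrf_token", "")
        return bool(expected) and hmac.compare_digest(expected, submitted_token or "")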

This is the kind of thing that shouldn’t happen by the logic of that article. It’s not how security should work, but people will discover and fix bugs post-hoc in real organizations and that shouldn’t be pooh-poohed. Our first programmer had a great previous life auditing banks’ security and discovering awful things, and the stuff he wrote isn’t perfect.

Bring on the lists; they may not be sufficient but they don’t hurt. Also, insane publicity like Twitter’s is a great way to get lots of people to look at their security. If only companies had to reveal both the fact that breaches occurred and how they occurred.

Anonymous January 16, 2009 2:05 PM

@Benjamin

I am running Firefox 3.1 on Vista and it appears valid to me. Are your root certs up to date?

ck January 16, 2009 2:57 PM

i’ll agree with mcgraw that the #1 reason top ten lists don’t work is because executives don’t care about technical things.

the difference is that i think that’s a problem.

tree-cutting outfits are run by folks whose idea of a “risk management plan” is making sure everyone on the site knows what mistakes to avoid with a chainsaw.

our industry is run by execs who avoid “geeky technical details” – like the actual development of their company’s product. nonetheless, they attempt to manage risk without understanding where that risk comes from.

predictably, crappy software is the result.

oh, i also got a chuckle over the bit where he said that instead of being taught what to avoid, developers should learn “defensive programming”…

Nostromo January 16, 2009 3:44 PM

No, it wasn’t worth reading. McGraw doesn’t give much rational support for his opinions, and without that, why would they interest anyone?

Dick Cheney January 16, 2009 5:07 PM

Exactly. The hard part isn’t “making the list”, it’s “NOT making the errors”.

Lots of people can make (and have made) a list; few can avoid (and have avoided) making the errors.

Anon January 16, 2009 6:16 PM

Same Anon as earlier — my earlier annoyance has blossomed into a full-on case of Someone Is Wrong On The Internet ( http://xkcd.com/386/ ).

What does he mean by “top ten lists don’t work”? What does he think we expect a top-ten list to do — solve our security problems by writing good code and making correct management decisions? Okay, then his list of eleven reasons “doesn’t work”; I’ve read it and the folks I work for still have imperfect security. OWASP’s list exists to call attention to ten frequently-occurring types of problem, and it calls attention to them. Therefore, it works.

But on a less juvenile note, and critiquing his content rather than just his punchy framing: give us something constructive. Tell us some things organizations should know above the level of specific bugs. What should developers know? Where do I start learning?

I work on this software of which you speak; I’m interested in security; I’m on a small team with managers that “get it.” If you (the author of the top-11 list, not Bruce) weren’t wasting your soapbox nattering at me for educating myself through imperfect methods, maybe something could get accomplished here.

Apologies for the tirade.

Anon January 16, 2009 6:31 PM

I do understand, BTW, that the most important security wins aren’t necessarily at the level of specific bugs to fix, or even software-focused at all. I just don’t see the point of trashing something with a useful role to play simply because it’s not a panacea — he might as well write a column trashing firewalls.

Kevin Spease January 16, 2009 6:37 PM

I’d be more receptive to the article if it went something more like this:

“Top Eleven Reasons Why Top 10 (or Top 25) Lists Don’t Work ALONE

Error avoidance isn’t the only important piece of an application security program, but you aren’t going to get there without it!”

Both articles have their own merit. CWE/SANS did a great job calling attention to the greatest source of the issue. Meanwhile, in his own special way, Mr. McGraw illustrates that it isn’t the only important piece.

All that said, I believe CWE/SANS (and the numerous other contributors) did a very good job bringing these Top 25 to light.

Anon January 16, 2009 7:05 PM

A last comment: another blogger I read has periodically posted a “nitpicker’s corner” with disclaimers because of all of the negative comments he gets. I get that Bruce posts stuff offhand, and probably is not expressing a deep and abiding agreement with this guy. Regardless, I look forward to reading many opinions and links I find off-putting in 2009, though I doubt I’ll always have time for a rant about it. 🙂

Not anonymous January 16, 2009 11:12 PM

To the anon above who said:

give us something constructive. Tell us some things organizations should know above the level of specific bugs. What should developers know? Where do I start learning?

Google Gary McGraw, the author. He’s written a number of books that answer your exact question, the most notable being “Software Security” and “Building Secure Software.”

Clive Robinson January 17, 2009 2:14 AM

@ Anon,

“… give us something constructive. Tell us some things organizations should know above the level of specific bugs. What should developers know? Where do I start learning?”

Where do I start…

1) You work in a business where the man who cuts your cheque at the end of the month does not speak technical :- learn to speak his “business speak” and put your technical issues in that language; then he will listen and not nod off when you walk through his door.

Also, it will stand you in good stead when you get promoted or go out on your own, so it’s a good investment of your time.

2) I disagree with the point where he puts bugs to design flaws at 50:50; he has left out poor tools used badly (GIGO) and a few other causes.

However, sticking to his two categories, I would say that 90% are due to poor design in several areas, primarily “input”, “exceptions”, “APIs”, and “documentation”.

Most programmers by habit push all input validation as far to the “left” as they can, primarily because they do not know how to deal with exceptions properly, or at all…
And most code cutters really don’t know how to break their code up and design clean, well-structured interfaces.

I should not need to see any of your source code to write other parts of the program, either as part of the “initial code cut”, or for “bug maintenance” or “upgrading”.

If your APIs are designed properly and documented properly, and your code handles exceptions properly, then I should be able to write my code, test it against the API, drop it in, and have it work.
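To make that concrete, here is a minimal sketch, in Python, of an API boundary that validates its input and signals failure through a documented exception, so a caller can code and test against the interface without reading its internals. Every name here is invented purely for illustration:

    class InvalidTransferError(ValueError):
        # Raised when a transfer request fails validation at the API boundary.
        pass

    def transfer_funds(amount_cents, from_account, to_account):
        # Documented contract: raises InvalidTransferError on bad input,
        # never a bare ValueError or KeyError from somewhere deep inside.
        if not isinstance(amount_cents, int) or amount_cents <= 0:
            raise InvalidTransferError("amount_cents must be a positive integer")
        if from_account == to_account:
            raise InvalidTransferError("source and destination accounts must differ")
        for acct in (from_account, to_account):
            if not acct.isalnum():
                raise InvalidTransferError("malformed account id: %r" % acct)
        # ... the actual transfer logic would go here ...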

3) Testing :- Most code shops do not test properly, or at a level where they have a real degree of certainty about “functionality”, let alone “security”. This is due in part to my two points above…
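A contract test against the hypothetical transfer_funds interface sketched above (assuming those definitions are importable) can be as simple as:

    import unittest

    class TransferFundsContractTest(unittest.TestCase):
        # Tests written against the documented contract, not the internals.
        def test_rejects_non_positive_amount(self):
            with self.assertRaises(InvalidTransferError):
                transfer_funds(0, "alice1", "bob2")

        def test_rejects_same_account(self):
            with self.assertRaises(InvalidTransferError):
                transfer_funds(500, "alice1", "alice1")

        def test_rejects_malformed_account_id(self):
            with self.assertRaises(InvalidTransferError):
                transfer_funds(500, "alice-1", "bob2")

    if __name__ == "__main__":
        unittest.main()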

4) Metrics :- here I very much agree with the article’s author: if your only metric is the number of bugs squashed, then you really are in trouble.

The problem is that most metrics you see are for audit, and are about as useful as a chocolate teapot to architects, analysts, and programmers.

This is a real problem, as the industry as a whole has no consensus on what should be measured or how, and most support tools’ metrics are for accounting and audit (at best).

5) Tools :- get them, learn how to use them properly, use them, and pester the company that supplied them for the features you need (unless you tell them, their marketing department is just going to guess).

You would not believe the number of times I have seen very expensive tools become “shelfware” simply because nobody has the time…

6) Programming Languages :- the author is correct, they suck, but not just from the security aspect (though the ANSI C spec has a lot to be blamed for on that score).

Most studies show that the strongest indicator of bugs in a program is the number of lines of code written by a programmer, irrespective of the language used… So pick a language that is appropriately high-level for the job you are doing; that way you will have fewer lines of code written by programmers, and they will get more done in the same period of time with fewer bugs.

Also, being able to work in more than one programming language will help turn you from a code cutter into a programmer.

C++ et al. suck because they try to be all things to all men and, like the jack of all trades, fail to be master of any. Worse, they all have C in their past, either directly or by methodology, so they have inherited some really bad traits.

7) Be an Engineer :- “software engineering” is a term I loathe and despise.

Few software analysts, architects, designers, programmers, or code cutters use anything even remotely close to engineering principles or methodology.

Code re-use and patterns have been the mantra for some time, which puts us at about the same level as old-style artisans making cart wheels and early steam-engine builders. They made cart wheels the way they did through evolution, not science: they got to the design through things breaking and bolting a bit on, until forced by the laws of nature to come up with a new, slightly different design. The designs that worked went forward; those that didn’t are lost in the mists of time.

However, with software there are no laws of nature to force a new design. So every time something breaks, a patch is bolted on; when that breaks, it gets a patch, and that in turn gets a patch…

Know the difference between a bolt and a bridge. A big problem with code re-use is that code cutters design one bridge and then cut it down to size for the river they are crossing, or make it bigger. The next river, the same; the bridge they reuse only ever gets bigger and the cutting more obvious. They do this instead of making bolts and building the right bridge for each river they cross.

8) Learn from others :- this is a real problem. Everybody learns by experience and knowledge. A classic example was the Windows MFC: it was overly complex, badly designed and documented, and using it was a pig of a job until you learnt the “secrets”.
However, programmers treated it as a “rite of passage”: they figured they had had to perform the labours of Hercules to get where they were, and the hard-won knowledge gave them a “competitive edge”, so they were not going to pass it on to others…

“Competitive edge”, “secret knowledge”, call it what you want: it is self-defeating, condemning you and everybody else to the same labours of Hercules but without his abilities. Think how many times you have felt “If only I knew…”; well, the chances are somebody else has been in exactly the same position, or one very close to it.

Know the difference between “war stories”, “parables”, and “history files”. War stories have heroes and villains but only one purpose: to make the person telling them look the hero. Parables make general points and are supposed to make you learn, but are generally too vague to be of much practical use. History files, however, if written correctly, contain the hows, the whys, and the facts, good and bad, of the problem, and are of very practical use, both as an aid to memory and as a body of knowledge for others.
Therefore, if you have found a way of doing something, don’t keep it to yourself: document it, warts and all, and let others learn from your hard-won knowledge, and ask them to let you know if they found it useful or have improved on it. Most won’t, but some will, so both you and they benefit.

9) Education is a lifelong process :- take the time to read books, journals, and blogs, and, as I was once told as a programmer, “learn a new language every year”.

But importantly, really learn the fundamentals; they are transferable skills. Knowing the secret tricks of a tool or language is only useful for a short time, and it confuses others (think of the Obfuscated C Code Contest 😉

Also, there is no “one way” to do a particular job; the broader your range of skills, the more insight you will have into how best to do it.

I was once told,

If all you know is how to blow glass skillfully, you are not going to be a lot of use in the carpenter’s shop. Likewise, knowing how to use a hammer and chisel is not going to help you much in the glass shop; however, it will get you started in the mason’s shop. Knowing how to carve both wood and stone gives you more insight and more opportunities.

And if somebody needs a bridge built, the knowledge and ability to use both wood and stone will set you well above jobbing carpenters and masons, and with a little knowledge of engineering fundamentals you can be a bridge architect and designer.

There are a whole load of other points I could make but hopefully the above will help.

Anon January 17, 2009 3:20 AM

Google Gary McGraw, the author. He’s written a number of books that answer your exact question, the most notable being “Software Security” and “Building Secure Software”

Which is great, and thanks. I still think this column ain’t his best moment; he went out of his way to set up the perfect as the enemy of the loads-better-than-nothing.

I’ll also look at what Clive posted, and I’ll quit posting to this thread lest I become a total scourge to discourse on the Internet.

SIWOTI cat bids adieu: http://tinyurl.com/6wx6hb

Erik N January 18, 2009 4:22 AM

Right, if top-N is to be understood as the most “severe” bugs with respect to some arbitrary criteria, and particularly if it is assumed that solving those problems will solve most security problems.

However, if top-N is to be understood as “99% of all software bugs can be avoided by avoiding these problems”, then it can save programmer resources. After all, maybe you can do automatic bug analysis, but you still have to correct the code. Getting it right the first time will save you time.

Evidently, once these 99% of common bugs disappear, the next iteration of the list will be different.

It is always best to learn to do things right, and to do them right the first time, but we all learn by erring, and knowing the common pitfalls can’t be a bad thing.

agent smith January 18, 2009 7:28 AM

I’m wondering if anybody has thought about the top ten high-level security threats for the future.

Personally, I would rank them as follows:

1) existing controlled and contaminated clients and servers connected to the Web
2) unaware management
3) unaware users
4) digital natives
5) complex Babylon of programming languages
6) digital natives
7) insecure web servers
8) insecure web clients
9) insecure setup scenarios without native hardening
10) outdated software on clients and servers

ed January 18, 2009 1:24 PM

You’re all wrong. The top TWO reasons for crappy software are:

  1. Time to ship date.
  2. Cost to develop.

To a business person, those are the only things that matter, because without them you won’t have a product to sell.

agent smith January 18, 2009 6:39 PM

@ed: good point, but proceeding with short-term solutions instead of focusing on long-term quality will lead IT into a disaster like the one the banks are in right now. Suppose data and programs were no longer mixed up in memory and there were a real focus on long-term quality; wouldn’t that probably solve most of today’s issues?

Clive Robinson January 18, 2009 11:09 PM

@ ed,

“You’re all wrong.”

Err, no: point 1 of my post above (Jan 17 @ 2:14 AM) covers your two points and several others that are arguably more important.

Superficially, your two points appear to be the main process drivers for software development, and in many cases, yes, they are the main drivers.

However, as a number of organisations are finding out, the downstream costs of poor design hurt badly.

In physical product manufacturing, poor design gives high “return rates”, which, due to asymmetric costs (transportation/rework), have a disproportionate effect on profit.
The solution for the manufacturing industry was to correctly implement Quality Assurance (QA) systems.

Because software companies have effectively externalised those asymmetric costs onto their customers, the main cost they see is the “redesign”.

However, the cost of redesign in physical products is usually quite low compared to that of software, due to complexity issues.

And it is the complexity issues that are causing the downstream costs to hurt in some organisations.

In new or small software products, complexity issues are often manageable, so the redesign cost is often marginal compared to the overall development costs.

However, in older or larger software products, complexity issues rise dramatically (n^x where x > 3), as do the attendant redesign costs; in some cases these can be such that it would be less costly to start from scratch.

Security-type issues generally fall well and truly into the high-order complexity cost bracket, partly due to “new class” issues, but also due to their relative position in the software stack (the app level is less complex than the OS level, etc.; the worst are standards and fundamental building blocks like MD5).

However, due to other technical (legacy, etc.) and business (customer base, market credibility, etc.) issues, starting from scratch is often not possible.

Therefore the efficiency advantages of QA seen by physical product manufacturers can be applied to the software build process.

It can easily be shown that the only way to manage quality issues is to build in quality across the whole process and that security is directly equivalent to this.

However, outside of product design, quality has been a bit of a hard sell to business managers, simply because there are no easy metrics that can be used for ROI and the like; the same is true for security.

However, some non-design/manufacturing business organisations have had quality forced on them by their customers or by regulatory processes.

What was found was that those organisations that took quality to heart saw improvements in efficiency, and those that paid only lip service saw decreases in efficiency.

Most business managers in medium or large organisations now regard quality as part of the business process, effectively as a given.

From the business perspective, security issues are almost identical to quality issues. Knowing this, and knowing how to make the business case, will make any security practitioner’s job considerably easier.

An important lesson from QA is the recognition that failure is part and parcel of the process and is dealt with by the mitigation aspects of the process.

Taking this view helps a security practitioner realise which metrics they should be using and why. But first they have to be business-savvy, which means walking the walk and talking the talk.
