From what you have said I think we actually agree about the effective cause of the problem, which is lack of a reasoned or engineering approach to IT security.
For the sake of clarity (for other readers etc) I'm going to drill down through the layers to show where I'm looking.
Unfortunately this will give rise to a very large post that I hope the Moderator and others will forgive (I'm sure Bruce will find it moderately interesting ;).
As you note the development of security tools and policies is a bit of a "chicken & egg" problem.
As you correctly say, vendors make what people buy; unfortunately that is often not what they actually need.
Likewise policies are (or should be) based on what are "known" to be the correct choices within the constraints of the business requirements (where known).
The question then arises what do the IT practitioners "techies" need?
And as you note their first need is to keep their jobs, by "keeping the man who cuts their cheques happy".
Which as you outline often boils down to the "crocs in the swamp" problem of assigning the incorrect priorities due to the situation (fighting the crocs rather than draining the swamp).
Unfortunately this puts the techies into the position of "fire fighting in a shanty town". Which is not what they should be doing, as it is very dangerous and full of unknown issues that will as likely as not keep them trapped there as long as the shanty town exists.
What they actually need is not the tools to fight individual fire types, but the political will and expenditure to have the shanty town cleared out and proper "fire code" houses built instead. So things move from a dangerous unknown situation to a less dangerous known situation, where effective tools can be designed and implemented.
Thus we see an underlying series of problems. Top down these are,
1, Political will to change.
2, Finance to pay for the change.
3, A plan of what to do.
4, The knowledge of what is required.
Each step is reliant on the one below. However the political will and finance to carry out an action is a business process (which techies invariably get wrong, because they "don't speak business").
Step three is both a business and technical process. That is, the business supplies requirements information so that it can work effectively. The techies look at these and supply scenarios which meet or partially meet the business requirements within other constraints such as the tools available (a major "Oops" problem).
These scenarios are supplied with costs and "known benefits" plus (if the techies have any sense) "bonus benefits" that offer further opportunities to the business.
The business side examines the scenarios to see if they fit the requirements or not, and weighs the benefits, which may change the business requirements.
This process should loop around a few times until a scenario that is a best fit within the realisable requirements is found.
Then the resulting top three options get sent up to the business execs to make a choice and supply the finance (2) or return / reject the plans.
The problem with step 3 is that it is reliant on step 4, which is "knowledge" that is reliable and preferably scientifically sound.
Unfortunately, as we know, that is not available, because the tools on offer are supplied by vendors trying to make a sale and thus meet perceived, not actual, needs.
So the process above (steps 1-3) is broken because of the GIGO principle that clearly exists at step 4.
I think that you would probably be in major if not full agreement with the above reasoning.
The GIGO problem is further exacerbated by "pleasing the man", and thus the "knowledge" obtained is not that which is actually required (that is, it is driven by incorrect needs).
The problem is we don't yet have "fire codes" by which the houses can be built, that have a scientific basis.
What we have is just "best practice", which although it is a step in the right direction is by no means the best way, as it is just based on the untested observation that,
"The top ten companies that have the minimum number of reported incidents do this in common"
That is not science and sadly it is also an admission of failure, that "we just don't know what works".
So what does step 4 need to get "reliable knowledge"? Well, it needs reliable tools.
Reliable tools come about in two ways,
A, Improvement by trial and error, the "Artisan Approach".
B, Improvement by scientific investigation to find the base principles and thus build the tools on firm foundations, the "Scientific Method".
The problem with A is it takes a very long time, and often ends up in over-built solutions (the iron-rimmed cartwheel) which limit further development or move it in the wrong direction (incorrect placing of damping measures, thus requiring significantly higher energy costs).
Whereas B looks at the problems and provides information to engineer a solution (the pneumatic tyre and wire-spoked bicycle wheel). This provides an efficient solution to the individual tasks and allows significant progress (modern wheels for cars and aircraft).
Thus step 4 requires further supporting steps below it,
5, Tools to reliably assess what needs changing.
6, Tools to identify and measure problems correctly.
Which in turn requires,
7, Usable and reliable measurements (metrics).
This is the step where things are clearly not right to most people who care to examine the issue.
We simply do not have metrics that are of any use.
The problems behind this are manifold.
The first is that IT developers and technicians are not only not scientists, they are not engineers either.
They are by and large "Artisans" (or worse, temperamental artists ;)
(Also by and large most IT techies are not business savvy either, which is another real issue, in that they really need to talk to "the man" in his language, not put him to sleep with what he sees as "techno babble". But that is a related high-level issue, not a fundamental one).
Artisans develop their tools and goods by ad hoc "patterns", that is by trial and error (if it breaks, bolt a bit on or make a random change). Which we both clearly recognise is a significant problem.
The problem with being an Artisan these days is by and large the world has moved on into science backed engineering and business.
Thus the way of the "Artisan" does not fit, in anything other than very limited places such as "craft markets", where even there they have been usurped by "Artisanal" methods.
Thus if you step outside of the "Artisan" mindset and examine other industry sectors such as mechanical engineering, you will see something that will make you pause for thought.
Each level of the business, from the lowliest "tool hand" through to the senior executives, has metrics which are importantly "specific to the job in hand". Also, more importantly, each metric at the bottom has a recognised way of being converted into the metrics at the layer above.
That is, "tool wear" converts via recognised steps to a "cost benefit ratio" as part of any business project, which in turn allows a "return on investment" calculation or a correct "risk assessment" to be made at higher levels.
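That conversion chain is easy to sketch in code. The following is a minimal, hypothetical illustration (every tool price, lifetime, volume and re-tooling cost below is invented for the example) of how a shop-floor metric like tool wear rolls up into a cost per part, then into an annual saving, then into an ROI figure the executives can act on:

```python
# Hypothetical sketch of a metric roll-up chain; all figures invented.

def cost_per_part(tool_price, parts_per_tool_life):
    """Bottom layer: convert raw 'tool wear' (parts made before the
    tool is spent) into a cost the business layer can use."""
    return tool_price / parts_per_tool_life

def roi(annual_benefit, annual_cost, investment):
    """Top layer: simple return on investment from the rolled-up costs."""
    return (annual_benefit - annual_cost) / investment

# Bottom layer: a harder tool costs more but wears far more slowly.
old = cost_per_part(tool_price=50.0, parts_per_tool_life=1_000)    # 0.05/part
new = cost_per_part(tool_price=200.0, parts_per_tool_life=10_000)  # 0.02/part

# Middle layer: annual saving at a volume of 500,000 parts per year.
annual_saving = (old - new) * 500_000

# Top layer: ROI on a 10,000 cost of re-tooling.
print(f"saving per year: {annual_saving:.0f}")
print(f"ROI: {roi(annual_saving, 0.0, 10_000):.2f}")
```

Each layer only needs the number handed up from the layer below, which is exactly the property ITSec metrics lack.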
Which begs the question,
"Why does ITSec not have usable metrics?"
However it is not just in IT security, but in IT in general, that this "No Metrics" issue arises (I have deliberately left communications out, for the obvious reason that it has well established and reliable metrics that work all the way to the top, which I will explain further down).
Metrics are measurements, not comparisons. To be usable they have not only to be reliable, but they have to be vectors, quantifiable independently of what is being measured.
As an analogy, saying one bit of wood is longer than another is not very helpful; it is not a quantifiable measure. However saying it is 1.27m long is useful, as you now have a metric that you can use independently of the piece of wood.
Information however is not really tangible; it has no currently measurable dimensions or forces that can be used as the basis for metrics.
(Another reason I have left communications out is that it is most definitely based in our physical world. It is based on the movement of energy or physical items, there is a direct convertible relationship between energy and matter, and it requires forces to operate that are all constrained by physical properties we know how to measure).
Thus metrics are based on the science of measurement (metrology), which mainly deals with the abstraction into information of the properties of physical objects and forces.
One of the ways any branch of science moves forwards is to borrow models from another branch of science.
Provided the model has similar underlying assumptions then it is likely to produce usable results, thus further knowledge in the borrowing branch of science.
The real problem with ITSec and IT in general is that it is based on "information".
That is, it is based on "intangible" or non-physical entities and forces, whereas most of our models are based on "tangible" physical entities and forces.
Thus by and large all our engineering models are about finding "physical limits" and staying within them (which is why we have metrics for communications but not for the information it carries).
Obviously if something is not physical it has no physical limits, therefore the underlying assumptions (axioms) of the physical models may be incorrect.
One aspect of this revolves around duplication.
In the physical world you need to work on physical objects, which incurs significant costs whilst producing only inexact copies.
Apart from the very small energy involved with the copying and communication processes, the cost of duplicating information is as close to zero as makes no real odds (the real cost is in storing the information in a physical entity).
More interestingly and importantly, as information is not tangible it is not "localised" like physical objects are.
This has significant issues for trying to move physical security models into information security models.
A thief, as a unique physical entity, can only be in one place at a time, which is a fundamental physical constraint.
Secondly what a thief can do is limited by physical forces which is a second constraint.
The first constraint is "localisation" that prevents "action at a distance", the second constraint is that a "force multiplier" such as a glass cutter or drill is physical and this has obvious limitations in what it can do.
Neither of these constraints applies to the "intangible" information world and its security.
That is an information thief can be anywhere that has communications access to the information (therefore not localised).
Secondly, their "force multiplier" tools are built not of physical items but of information, and thus are infinitely copyable. The only real cost is the energy involved in copying the information and using the tools.
And as we know, the bulk of that energy cost is borne not by the information thief, but by the victim.
If you examine information you will find that the only real measure we have for it (entropy) is probabilistic.
All the other measures of information are due to the costs of storage and movement in our physical world.
Thus we have the problem, that there are currently no usable metrics for information, just metrics for the physical manifestations of storing and moving it.
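That one probabilistic measure, Shannon entropy, is simple enough to illustrate. A minimal sketch using the standard formula (nothing here is specific to any security product):

```python
# Shannon entropy: H = -sum(p * log2(p)) over symbol probabilities.
from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per symbol of a byte string."""
    n = len(data)
    counts = Counter(data)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy(b"aaaa"))  # 0.0  (fully predictable)
print(shannon_entropy(b"abab"))  # 1.0  (one bit per symbol)
print(shannon_entropy(b"abcd"))  # 2.0  (maximum for four distinct symbols)
```

Note that the result is the same whether those bytes sit on paper, on disk or on a wire; the metric measures the information itself, not its physical carrier.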
The nearest "science" we have to borrow models from for developing a science of information is quantum physics (it is entirely probabilistic and based on states, which is directly equivalent to information).
This brings up an interesting side point,
The position of "father of the scientific method" is often accredited to Sir Isaac Newton, who through the study of light and gravity gave rise to an understanding of how forces work on physical objects.
Importantly (for my argument ;) he removed an incorrect assumption from our view of the physical world (that objects fall at different rates depending on their mass), which gave rise to the advancement of the ideas of friction etc.
Thus a bottleneck to human understanding was removed. However Newton's laws of matter in motion are known to be wrong at certain extremes. That is, the laws model the perceived world, not the actual world.
Matter in motion is subject to the laws of "relativity" (macro) and "uncertainty" (micro).
The problem with information is that, being probabilistic in nature and not constrained by physical issues, there is no more cost in working at the extremes than there is in the norm.
This is completely different to our physical world, where getting out of the norm requires significant cost. That is, as you approach the extreme the cost goes up geometrically. Thus by and large the physical world stays in the norm, where Newton's model works.
Our statistical models of how things work in the physical world pretty much all have an underlying assumption in one form or another of "the norm is more probable due to cost".
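To make that "norm is more probable due to cost" assumption concrete, here is a small sketch contrasting a cost-constrained world (modelled, as an assumption for illustration, by a Gaussian) with a cost-free one (a uniform distribution), using only standard normal tail probabilities:

```python
# In a Gaussian world, the further from the norm, the rarer the event;
# in a uniform world every value is equally likely. Illustration only.
from math import erf, sqrt

def gaussian_tail(k: float) -> float:
    """P(|X| > k standard deviations) for a standard normal variable."""
    return 1.0 - erf(k / sqrt(2.0))

print(f"beyond 1 sigma: {gaussian_tail(1.0):.3g}")  # ~0.317
print(f"beyond 5 sigma: {gaussian_tail(5.0):.3g}")  # ~5.7e-07
# Uniform over [-5, 5]: the "extreme" band [4, 5] is exactly as
# probable as the "normal" band [0, 1] -- 10% each, no cost gradient.
```

An attacker working with information lives in the uniform case: probing the "five sigma" corner of a system costs no more than probing the middle, which is exactly why physically-derived statistical models mislead us.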
Thus to get usable metrics for information we need to find new models, not based on our tangible "Newtonian" world but based on probabilistic measures without cost constraint.
We then have the awkward process of finding ways to convert the fundamental metrics back up to usable metrics at each upward layer.
As a final aside ;)
"incentivise their security practitioners to keep proper inventories in order to identify and remove risks that have no business benefit"
Is the "less is more" paradigm, which I learnt whilst young:
Some of my relatives used to own a farm, and there was an old boy who worked there who tended the apple trees. Every year I would see him "hard pruning" the trees, and I asked him why. He said if you just let the tree grow it will waste its energy making lots of little apples that you cannot sell. Prune it well and it makes fewer fruit, which sell the best.