"However to me the most telling thing is that none of those organisations seems to have complained about being on the list erroneously"
"Damed if you do and damed if you don't", is something I was taught about leadership, along with "praise in public, punish in private".
Thus I would not expect a public response via Brian's blog, but a quiet word via email or phone, politely requesting further proof etc.
The problem Brian has with his post is "shoot the messenger syndrome", and there are only two ways to avoid that: "Never Say Anything" or "burn the source" with full disclosure. Either way, in this particular case, is actually being "part of the problem not the solution".
Look at it this way: most infosec people who have spent any time in the trenches will know that security is an illusion more than an actuality. And the more secure a system is, generally the less use it is and the more expensive it is. Those around them, and those who write their pay cheques, want working solutions to everyday problems without cost or hassle.
To a certain degree being an infosec bod is a no-win situation, and this is not going to change because 99% of the tools out there are either "reactive not predictive" or "restrictive not permissive".
As a result there are two truisms in the industry, the first being the idea that a secure computer is,
"One you never own, never use, never turn on, in the middle of a large block of solid concrete you have abolutly provably droped down the Mariana Trench, and that's only secure for maybe a year or two".
The second being related to the myth of infallibility, and is brutally stated as,
"Sack the security person who has not found malware on your systems"
The reality of life is that there is always going to be a "zero day" and "workers need to be productive".
The question then is what do you do about it, to which the answer is that dread non-answer statement of "manage the risk".
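To see why "manage the risk" is a non-answer rather than a solution, it helps to look at what the phrase usually cashes out to in practice: the textbook annualised loss expectancy (ALE) arithmetic. This is a minimal sketch with entirely made-up figures (the dollar amounts and rates are illustrative assumptions, not anything from the RSA case):

```python
# Back-of-envelope "manage the risk" arithmetic: the classic
# annualised loss expectancy (ALE) model. All figures are made up.

def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annualised loss expectancy = SLE * ARO."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical figures for a seed-database style asset:
sle = 5_000_000        # assumed cost of one breach (cleanup, lost customers)
aro = 0.02             # assumed: one breach expected every 50 years
control_cost = 75_000  # assumed annual cost of an extra security control
residual_aro = 0.004   # assumed breach rate with the control in place

# The control only "pays" if it cuts expected loss by more than it costs.
saving = ale(sle, aro) - ale(sle, residual_aro)
print(f"Expected annual loss without control: ${ale(sle, aro):,.0f}")
print(f"Expected saving from control:         ${saving:,.0f}")
print("Worth it" if saving > control_cost else "Not worth it")
```

The trouble, of course, is that every input to this sum is a guess, which is exactly why the phrase dodges the question rather than answering it.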
If you look back on this blog to the time the actual RSA breach became known, I put up a hypothetical scenario of what might have happened with regard to the loss of the authentication seeds.
In essence the business driver of the help desk function meant the seeds had to be readily available to support customers. The result was underestimating the risk, or failing to manage it, for one of many reasons that might also be the same business drivers...
The result was the loss of the seed database, loss of company reputation, loss of a number of customers, some of whom were "high value low cost", and considerable cost in clearing up the mess.
So on the face of it, it can be observed that somebody did not put enough resources into protecting the seed database...
But did they?
You have to ask that question, and to answer it you have to accept that these tokens are not exactly low-cost, high-profit items either to produce or to support. In a competitive market the costs of security are critical to whether it is actually worth producing such items.
You then need to ask the question about the resources an organisation or individual is prepared to expend to overcome the security system that is in place.
In the case of state actors it is "whatever it costs to achieve the objective", and in the case of some obsessive individuals it is "whatever time they can devote to achieving the objective". In either case they are going to reach a point where the attacker's effective costs in time or resources equal or exceed the annual cost of the security measures that would be in place...
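The lopsidedness of that point is worth putting in rough numbers. A sketch with made-up figures (both amounts are illustrative assumptions): the attacker spends once, while the defender pays every year.

```python
# Rough sketch of the attacker/defender cost asymmetry described above.
# Both figures are illustrative assumptions, not real budgets.
annual_defence_cost = 1_500_000   # assumed yearly cost of the security measures
attacker_spend      = 6_000_000   # assumed one-off state-actor budget

years_of_defence_matched = attacker_spend / annual_defence_cost
print(f"Attacker budget equals {years_of_defence_matched:.0f} years of defence spend")
```

A state actor happily spending several years' worth of the defender's budget in one campaign is precisely why "whatever it costs" wins against any fixed annual spend.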
Thus as the defender you have to look not just at the cost of the security but its qualitative effects.
And this is where it usually head butts "business drivers" and where it all goes horribly wrong.
For high security you go for segregation with strict point-of-access control, not just on the individuals but also on the data. And it is on this second issue that it usually goes wrong.
Segregation is usually relatively easy to do if you can "air gap", but to be usable in any environment data still has to be used, and in this modern world that means transferring data to other systems and people. This data transfer can be controlled in various ways, such as no use of storage media for data transfer, data diodes, and data rate limiting. All of these are technically difficult to implement and difficult to use, and thus have serious cost impacts that eat either directly or indirectly into the income from producing such items.
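Of the controls listed, data rate limiting is the easiest to show in miniature. A token-bucket limiter is one common way to cap how fast data can leave a segregated system; this is an illustrative sketch, not a description of any particular product:

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: one way to cap the rate at
    which data can leave a segregated system. Illustrative sketch only."""

    def __init__(self, rate_bytes_per_sec, burst_bytes):
        self.rate = rate_bytes_per_sec   # tokens (bytes) refilled per second
        self.capacity = burst_bytes      # maximum burst allowed
        self.tokens = burst_bytes        # bucket starts full
        self.last = time.monotonic()

    def allow(self, nbytes):
        """Permit a transfer of nbytes if enough tokens have accumulated."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

bucket = TokenBucket(rate_bytes_per_sec=10, burst_bytes=4096)
print(bucket.allow(4096))  # within the initial burst, so permitted
print(bucket.allow(1))     # bucket drained, refused until tokens refill
```

The security value is that a slow exfiltration channel raises the attacker's time cost and gives monitoring a chance to notice, but it also delays every legitimate transfer, which is exactly the usability cost described above.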
But even if you do get the security sufficient to prevent data exfiltration over the wire, as we know from Stuxnet and the "code signing key", a sufficiently resourced attacker will find some other way to get at the data. And as history tells us, it might be by placing an agent in as an employee, or a "black bag job", or worse, some kind of direct action against individuals working for the company, ranging from bribery through to kidnap, torture and murder.
The underlying problem is the asymmetry between the value of the token to the producer and the value of the information a third party uses the token to protect. For a defence contractor this could be well over 10,000,000:1.
For instance the US DoD has just recently set test dates for a "flying humvee" that is expected to have a unit price below $55 million, and the DoD has announced that there are two organisations that have tendered. What is the value of the IP required to make that happen, a billion, 10 billion? And what would another company be prepared to pay to get it, let alone a hostile state?
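To put the 10,000,000:1 figure in concrete terms, here is the asymmetry with purely illustrative numbers (the per-token margin and IP value are assumptions for the sake of the arithmetic, not real figures):

```python
# The producer/protected-value asymmetry, with illustrative numbers only.
token_margin = 10            # assumed producer's profit on one token, $10
protected_ip_value = 1e8     # assumed value of the IP behind one customer's tokens

ratio = protected_ip_value / token_margin
print(f"{ratio:,.0f}:1")
```

At a $10 margin per token, even a modest $100 million of protected IP already gives the 10,000,000:1 ratio, and a defence programme worth billions pushes it far higher. No plausible per-token security spend can close that gap.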