BF Skinner September 14, 2010 7:28 AM

“…Adobe Acrobat, Sun’s Java and some Microsoft applications.”

Sounds like the kind of thing outside of vendor automatic updating.

Full disclosure: I’ve never worked on US-CERT, and I do believe testing and monitoring are the only way to improve security at the device level.

202 high risks across 174 machines. I’d say the MOE CM is weak. Desktops and backoffice servers are generally considered the management responsibility of the OCIO.

No POA&M for vulnerabilities…implies they knew of their existence.

“Establish an information security training process that includes…”
Okay, sore point…SATE = security awareness AND training…most agencies stop at the annual brief. Once, while attempting to implement a training program, I was checked by both contractor program managers and government service types complaining about the “hit to productivity”. ‘You’ve got a hundred people in there for an hour a month! I’m losing 100 hours a month!!’ Not entirely true, since a lot of the time their people do just sit around playing StarCraft.

Then there was the cost. Contractors are expected to provide ‘trained’ personnel. If there’s additional training required, then they have to pay for it out of overhead; it ain’t billable. (If the gov’t changes the requirement it becomes billable.) So no one wants to pay for it…what? Should the individual employee be expected to shell out 3 grand for a SANS course? That’s 8-10 percent of the average SA salary. Unlikely. And note that US-CERT is contractor-staffed.

But while the IG cites SANS, it doesn’t cite the Verizon 2008 or 2010 Data Breach reports, which show the percentage of breaches using known vulnerabilities dropping rapidly: 24% in 2008, and they couldn’t find any in 2009. Shocking, but the attacks were at the application layer and against OS configurations, which it sounds like CERT covered. So CERT covered, probably not by design, their maximum risk areas (Einstein and principal OS), yet the IG wants EVERY vulnerability corrected?

The same Verizon report found 0 cases where a physical security lapse was part of the compromise. This checks with what I’ve seen on site. We all do physical security fairly well now.

Chillers? Bah. I’m surprised they didn’t talk about the risk to infrastructure from EMPs or CMEs.

mdb September 14, 2010 7:33 AM

I never found US-CERT to be better than Carnegie Mellon’s CERT. I still go to Carnegie’s web site as a resource; I rarely go to US-CERT. The information on Carnegie’s website is more timely, easier to access and BETTER. This does not surprise me.

Clive Robinson September 14, 2010 8:10 AM

It’s interesting to notice the split between OK and not OK systems.

This strongly suggests two entirely different admin sets: well-practiced security staff on the “mission machines” and run-of-the-mill staff doing the business infrastructure machines.

Which raises two questions:

1. What’s the pay differential?
2. Why are staff not rotated through?

MarCon September 14, 2010 8:34 AM

Just to wrap a little context around this: ALL Inspector General reports find something wrong with whomever they are reviewing. That’s the nature of their job. If everything was perfectly perfect, they’d still have to produce some findings. Otherwise, why would OIG be necessary? And why would people in the OIG have to be employed? That the findings in this report are so trivial is a good indication that US-CERT is doing a very good job.

BF Skinner September 14, 2010 8:54 AM

@Clive “two entirely different admin sets”

I’d suggest it’s the mindset between “production application” and “back office”.

Applications get funding and Congressional attention (there’s this really cool thing called an Exhibit 300) and dedicated technical resource. Back office tends to get ‘remotely’ managed.

What’s really important, as opposed to what’s workaday and where we can skimp. (What’s really important here equals what is really important to my career.)

I’d like to know why we are still fielding anything but thin clients for desktops.

Clive Robinson September 14, 2010 9:40 AM

@ BF Skinner,

“I’d like to know why we are still fielding anything but thin clients for desktops”

Simple answer “web based middleware / servers”.

The main web browsers are basically insecure when it comes to memory space and copying info from one “window” to the next.

It saddens me to think we finally get OSes with a reasonable amount of security all the way up the stack, and then we stick a really insecure multitasking app on top with shared memory space etc…

Once upon a time with ASR/KSR paper bashers, all we had to worry about was serial snooping on the wire and confidential waste control at the client terminal…

Bogwitch September 14, 2010 11:28 AM

DHS spokeswoman Amy Kudwa said in a statement Wednesday that DHS has implemented “a software management tool that will automatically deploy operating-system and application-security patches and updates to mitigate current and future vulnerabilities.”

I have seen this sort of knee-jerk reaction before. I wonder if they have also implemented a suitable testing policy?
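The testing-policy worry can be made concrete. A minimal sketch of the idea (all names, fields and thresholds here are hypothetical illustrations, not anything DHS’s actual tool does): an automated patch-deployment tool should gate each patch through a test ring and a soak period before it ever touches production.

```python
from dataclasses import dataclass

@dataclass
class Patch:
    name: str
    deployed_to_test: bool = False  # has it gone to the test ring?
    soak_days: int = 0              # days running in the test ring
    failures: int = 0               # issues observed during soak

# Assumed policy: minimum clean soak time before production rollout.
MIN_SOAK_DAYS = 3

def ready_for_production(patch: Patch) -> bool:
    """A patch is promoted only after surviving the test ring cleanly."""
    return (patch.deployed_to_test
            and patch.soak_days >= MIN_SOAK_DAYS
            and patch.failures == 0)

# A patch pushed straight to production fails the gate:
rushed = Patch("KB12345")
print(ready_for_production(rushed))   # False

# The same patch after a clean soak in the test ring passes:
soaked = Patch("KB12345", deployed_to_test=True, soak_days=5)
print(ready_for_production(soaked))   # True
```

Without some gate like this, “automatically deploy patches” just means “automatically deploy regressions everywhere at once.”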

James September 14, 2010 12:08 PM


Although you’re right about WWII in some ways, it was finished because there was a real army that would surrender and show its colors when it fought (in most cases). Aside from that, WWII had the draft, and many were drafted. The comparison is very difficult to make.

Shane September 14, 2010 2:36 PM

@yup, @James Re: WWII

Yea, surely the only two nuclear bombs dropped in the history of human warfare had absolutely nothing to do with it. *(changes the channel back to US-CERT)

jml September 14, 2010 2:44 PM

Anyone else find it odd, reading the narrative leading to Recommendation 10, that in a real-time operational/monitoring facility experiencing out-of-spec temp/humidity, the auditors comment, “When the temperature and humidity readings of equipment in the server room rise above the normal range required by DHS, industry best practices recommend that the equipment be shut down until climate issues are resolved. Shutting down the equipment would prevent the danger of the equipment overheating or malfunctioning.”

Seems to me it would also make it darned hard to get any operational monitoring done, since according to that diagram all Einstein data flows to that element.

Now, ding ’em for not documenting the exception and decision to risk the systems (somewhat – those out-of-spec environment numbers don’t seem terribly severe) rather than the operational mission.

Maybe I’ve missed something, though. Public versions of audit reports usually omit some pretty relevant detail or work papers. I also generally hate when auditors wave hands in the “industry best practice” direction. Perhaps that just means that DHS didn’t have an exception process for the environmentals.
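The documented-exception approach could be sketched as a graded response rather than the blanket “shut it down” the auditors invoke. A toy illustration (the thresholds below are assumed for the example; the public report doesn’t give DHS’s actual ranges):

```python
# Assumed normal operating ranges -- not DHS's actual figures.
TEMP_RANGE = (18.0, 27.0)      # degrees C
HUMIDITY_RANGE = (40.0, 60.0)  # percent relative humidity

def climate_action(temp_c: float, humidity_pct: float) -> str:
    """Pick an escalation step instead of an immediate hard shutdown."""
    temp_ok = TEMP_RANGE[0] <= temp_c <= TEMP_RANGE[1]
    hum_ok = HUMIDITY_RANGE[0] <= humidity_pct <= HUMIDITY_RANGE[1]
    if temp_ok and hum_ok:
        return "normal"
    # Mildly out of spec: alert and log a documented risk acceptance,
    # keeping the monitoring mission online.
    if temp_c <= TEMP_RANGE[1] + 5 and humidity_pct <= HUMIDITY_RANGE[1] + 15:
        return "alert-and-document"
    # Severely out of spec: protect the hardware.
    return "controlled-shutdown"

print(climate_action(24.0, 50.0))   # normal
print(climate_action(30.0, 65.0))   # alert-and-document
print(climate_action(40.0, 90.0))   # controlled-shutdown
```

The point is that “out of spec” is a spectrum, and an exception process lets you trade a small, documented hardware risk against keeping the operational mission running.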


PC.Tech September 14, 2010 3:48 PM

“You’d think US-CERT would do somewhat better.”
No I don’t. What makes anyone think they would? They’re under-manned, under-funded, etc., as are many gov’t entities. ‘Looks better for the committee when the numbers are lower, for the newspapers… You know.

BF Skinner September 14, 2010 5:59 PM

“The officer led me into a waiting room with about thirty chairs. Six other people were waiting.”

Ahhhh. So good to see the Group W bench still around doing its thing.

Sasha van den Heetkamp September 15, 2010 5:51 AM

It’s nitpicking. We all know that they haven’t got their security up to par, and I can’t blame them. Security takes time and money. It’s easy to point out flaws in other people’s systems that you do not understand. It’s one of the more childish aspects of the security community these days: jumping on the disclosure bandwagon a priori and acting like a backseat driver. I myself discovered numerous flaws in the least expected spots of governmental infrastructure and reported them, sometimes anonymously and sometimes with my name attached. In governmental cases, I almost never disclosed them. I understand the frustration of researchers, but in the end: it is not your system, and they are also not your bugs, so why bother in the first place? Some folks need to learn it the hard way. Letting a Nessus scanner loose on US-CERT is, in my opinion, lame. Not only because it is a “security scanner”, but because it doesn’t say anything at all on average.

BF Skinner September 15, 2010 6:13 AM

@Sasha van den Heetkamp “In governmental cases…in the end: it is not your system, they are also not your bugs”

So consider this, Sasha. If, after billions(?) spent on security in the last 10 years, the Gov’t (which is required by statute to fund adequate security) is still producing lame performance, how good CAN the security be at any private enterprise, even a large one? Billable hours rule in the private sector.

If you’re not directly profitable then you’re cost overhead, and in downturns overhead gets cut.

It may not be our bugs, but it’s our data. We, here, pay for and expect the service of the stores, communication services, medical tech, credit processing, no-teller banking, gas stations, electric companies and every other ‘convenience’ in modern life.

Sasha van den Heetkamp September 15, 2010 6:30 AM

Posted by: BF Skinner at September 15, 2010 6:13 AM

One will never find all possible security issues; systems are simply too complex for that. You might discover surface warnings, but fixing those doesn’t guarantee that a gaping unknown hole the size of a supernova isn’t staring you in the face. And even if they fix it, that costs time and resources that we (or at least I) would rather see spent on cutting citizen queues.

On the one hand we expect a government body to perform without too much hassle and bureaucracy, but on the other hand we demand that data is treated, processed and stored 100% safely. A logical incompatibility.

Security is somewhat like nearing infinity. It’s like demanding everlasting life in good health. One can stretch it, but that’s about it.

Nick P September 21, 2010 1:24 AM

@ Sasha van den Heetkamp

Your points in the last post are a combination of moot points and red herrings. I specialize in high assurance systems that people would never expect to fail. Even we don’t say “fix all security issues,” “100% safe,” and the other nonsense in your post. Only special purpose systems with limited functionality, longevity, trusted update, and physical security can come anywhere close to those criteria. Even these aren’t perfect: just extremely low risk and highly unlikely to be compromised. In high assurance, this is the goal. In general day-to-day use, the goal for the consumer is few noticeable defects, no horrendous lapses in security, ease of use and reasonable cost.

For professionals, computer security is about risk management. With risk, we may counter, reduce, insure against, ignore or lie about it. Each risk management decision has a cost-benefit analysis and the final decision should be backed up by a good justification. The point people like BF Skinner are alluding to is this one: political and managerial scruples make the cost-benefit analysis look bad and result in poor security. However, there exist a number of low defect development methods that produce very robust software on schedule without a tremendous increase in cost (15%-50%), sometimes a cost savings due to less testing or rewrites.

If a fair cost-benefit analysis is done, these methods are easily justified in tons of systems. The cost-benefit analysis is stacked against the users and consultants by managers looking at bare numbers and not quantifying losses due to security breaches, lawsuits, etc. When managers get real or software liability laws force them to, then they will apply these processes and attackers will have to jump 8-15 foot hurdles instead of 1-4 ft. Is that worth an extra 15-50% for critical applications and systems? I think so. I’ll pay a little more to not have my life ruined by bored, 13 year old Russian script kiddies.
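The cost-benefit reasoning above can be sketched with the classic annualized-loss-expectancy formula (ALE = single loss expectancy × annual rate of occurrence). All the dollar figures and rates below are hypothetical, chosen only to illustrate how low-defect development can pencil out:

```python
def annualized_loss_expectancy(single_loss: float, annual_rate: float) -> float:
    """Classic risk quantification: ALE = SLE x ARO."""
    return single_loss * annual_rate

def countermeasure_worthwhile(ale_before: float, ale_after: float,
                              annual_cost: float) -> bool:
    """A control pays for itself if the drop in expected loss exceeds its cost."""
    return (ale_before - ale_after) > annual_cost

# Hypothetical numbers: a breach costing $500k, expected once every 2 years.
ale_before = annualized_loss_expectancy(500_000, 0.5)   # $250k/yr expected loss

# Suppose low-defect development cuts the incident rate to once a decade,
# at a 30% premium on a $300k annual development budget.
ale_after = annualized_loss_expectancy(500_000, 0.1)    # $50k/yr expected loss
extra_cost = 0.30 * 300_000                             # $90k/yr premium

# Saves $200k/yr of expected loss for $90k/yr of extra cost.
print(countermeasure_worthwhile(ale_before, ale_after, extra_cost))  # True
```

The managerial failure described above is simply leaving `ale_before` out of the ledger: if breach losses are never quantified, the 15-50% premium always looks like pure cost.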
