Hacking the Brazil Power Grid

We’ve seen lots of rumors, both in the U.S. and elsewhere, about people hacking the power grid. President Obama mentioned it in his May cybersecurity speech: “In other countries cyberattacks have plunged entire cities into darkness.” It seems the source of these rumors is Brazil:

Several prominent intelligence sources confirmed that there were a series of cyber attacks in Brazil: one north of Rio de Janeiro in January 2005 that affected three cities and tens of thousands of people, and another, much larger event beginning on Sept. 26, 2007.

That one in the state of Espirito Santo affected more than three million people in dozens of cities over a two-day period, causing major disruptions. In Vitoria, the world’s largest iron ore producer had seven plants knocked offline, costing the company $7 million. It is not clear who did it or what the motive was.

60 Minutes called me during their research for this story. They had a lot more unsubstantiated information than they presented here: names of groups that were involved, allegations of extortion, government coverups, and so on. It would be nice to know what really happened.

EDITED TO ADD (11/11): Wired says that the attacks were caused by sooty insulators. The counterargument, of course, is that sooty insulators are just the cover story because the whole hacker thing is secret.

Wired also mentions that, in an interview last month, Richard Clarke named Brazil as a victim of these attacks.

Posted on November 11, 2009 at 12:19 PM • 38 Comments

Comments

Matt Simmons November 11, 2009 1:02 PM

@Aguirre

That’s the first thing I thought of when I heard about the outage, as well. It seems overly coincidental that 60 Minutes covered it, and the next day it got shut down.

Aguirre November 11, 2009 1:16 PM

@Matt Simmons

Coincidence is more common than you think. We are hard-wired to notice coincidences; just ask a magician.

Dale B November 11, 2009 1:24 PM

I saw the 60 Minutes story last Sunday and have read similar stories over the years. There was so much left out of the story that I couldn’t figure out what actually happened.

I do know that modern plants (chemical, oil refineries, power generation) use PCs for operator consoles. Up to about 15 years ago they used proprietary hardware for the consoles, but today just about everyone uses PCs because they are a lot cheaper. However, these plants still use proprietary hardware, operating systems, and communications protocols in the actual control computers and controllers.

I suppose that if you could get into the control console you might be able to do some damage, IF you knew exactly which control system was being used and how that particular instance was configured. This would require a lot of inside knowledge, but it is possible, I guess.

Connecting the operator consoles to the Internet would be a really bad idea. Air gaps are not perfect, but they are a good start toward decent security.

Arclight November 11, 2009 1:46 PM

Ubiquitous network access is not always a good thing. A lot of these systems probably use Ethernet and IEEE 488 (GPIB) buses for critical things, which isn’t itself bad. The real threat, in my opinion, is having the endpoint/management console be available to the outside world.

There are plenty of places I have been where I really can’t imagine why the guy behind the desk needs to be able to access Yahoo web mail from the same computer that the camera system or whatever is managed from.

At the very least, require all of the outside network stuff to be run in a VM or from a terminal window. In fact, why have a general-purpose PC at the operator’s station at all, when a thin-client device connected to a secure server environment would suffice?

Arclight

Mulder November 11, 2009 5:14 PM

The fact that the 60 Minutes story is based almost entirely on anonymous, unsubstantiated claims by so-called “experts” and government employees is more than enough reason to discount the entire thing.

We’ve all seen how our government has repeatedly lied to us over the years, so there is no reason to believe any of their claims without proof that’s independent, on the record, and backed by incontrovertible data.

Filias Cupio November 11, 2009 6:30 PM

I only skimmed the CBS article, but it seemed utterly lacking in any skeptical examination. People whose salaries rely on there being a problem say there is a problem. Or possibly people who say there is a problem get interviewed by CBS, while others don’t. I’ve only heard one side of the issue, and I can’t draw more than very tentative conclusions from it.

(Of course, the media can manipulate ‘balance’ too: get some raving nutter to represent the other side, leaving the impression that they are typical.)

Rob Mayfield November 11, 2009 10:01 PM

“Hackers” (incorrect term, but let’s run with it) have nothing to do with it.

Possums are to blame, or so the local power authorities here would have you believe …

Clive Robinson November 11, 2009 10:29 PM

There have been incidents where viruses and other malware have gotten into various industrial plant control systems.

Like most business organisations, the operators don’t publish the information, for fear of what it will do to the share price etc.

Likewise there is also evidence of “insider attacks”.

But I’ve yet to see any evidence of “targeted outsider attacks”.

Yes, PCs, both “off the shelf” and custom, are used for several pragmatic reasons.

The first is equipment availability (VDUs/terminals etc. just don’t exist any more, unless they are PCs with USB-to-serial dongles…).

Because of this availability issue, the second is cost across the project life. In long-life projects, maintenance often costs considerably more than the initial implementation, and custom hardware manufacturers go out of business quite quickly these days.

Then there are issues with the availability and maintenance of software tools and programmers. As with hardware, the builders and integrators of custom software solutions disappear almost as soon as the project has been signed off. So, as Fred F. noted, the use of tools like LabVIEW is becoming common; think of it as the MS Office of industrial and lab control (oh, and toys: Lego Mindstorms uses it 😉

Also, a lot of expensive test equipment uses either Win NT 4 or Win CE with semi-custom hardware (just look through the HP range to see several examples).

What I have seen is that, due to such incidents as operators playing games on PCs (and causing malware infections), the more modern systems are locked down.

However, there is a dirty little issue in industrial control that not many people want to talk about: reliability effects.

When you have 24×365.25 operation running at tens of millions of USD a day, you don’t want AV/malware software and security patching going on.

Especially if there are “safety critical” aspects to consider (a “red shutdown” can take considerable time to recover from).

So even though the systems are locked down, they are invariably vulnerable (especially now that “web browser front ends” are appearing).

In the UK, power plants spend a lot of money getting control-room equipment to the required level of security.

However, the further you move from the center, the less secure things are. A lot of local substations use low-power radio systems and hand-held control units that enable “drive-by maintenance.” Most of these control systems lack any real security.

Which brings you around to “Home Power Controller Initiatives”: there is a political idea in the US and other places that, to solve energy-supply issues and be more “green,” the utility companies should be able to control your power consumption in things like air conditioners.

Quite frankly, these scare the life out of me, as you know they will have a quarter-century or more of life expectancy and have to be produced at very low cost with absolutely minimal maintenance.

Have a think about what has changed in security since 1984…

Clive Robinson November 11, 2009 10:40 PM

Oh and as an example of,

“Have a think about what has changed in security since 1984…”

The very recent TLS/SSL renegotiation hole in authentication.

As many industry commentators have remarked, “most security relies on firewalls and SSL for its solutions.” Oops…

BooBoo November 11, 2009 10:51 PM

Maybe the “Sooty Insulators” are a gang of the world’s best hackers with laptops in Las Vegas. As they card count, they use their mobile phones to take down a whole country’s power grid…just because they can. Oh yeah, they also win millions on a single bet they rig with an electro-thingy-jigger that stops the roulette wheel with super-nerdy-code stuff.

Or maybe it’s just crappy South American wiring.

Aguirre November 11, 2009 11:02 PM

With the huge length of the high voltage transmission lines in Brazil, hackers are the least of the problems faced in ensuring continuity of the Brazilian electricity supply.

It would still be interesting if anybody at all could provide any evidence that hackers were involved in Brazil’s 2005 and 2007 power blackouts.

Without any evidence, I suppose we may as well assume that the hacker story is just an unsubstantiated and ultimately deniable rumour being spread for another purpose. Not that anybody is currently trying to take control of cyber defence, or is requesting vast funding for increased cyber surveillance?

Marcos November 12, 2009 6:07 AM

Some people here have already noticed the periodicity of that. There are some meteorological phenomena that repeat themselves in Brazil’s south and south-east regions (where the biggest power plants and power consumers are located) with that periodicity, and they get the blame every time the power goes off.

Our government would really love to have some hackers to blame; they’d blame someone if there were the tiniest evidence available to support the claims. The fact that they didn’t blame anyone is a strong indicator that there isn’t anyone to blame.

Clive Robinson November 12, 2009 6:26 AM

@ Marcos,

“There are some meteorological phenomena that repeat themselves in Brazil’s south and south-east regions”

If I remember correctly, doesn’t Brazil lay claim to being the place with the most lightning in the world?

I know it has some of the longest and highest electrical distribution systems in the world, so putting the two together does suggest outages would be more common than in more temperate regions.

BF Skinner November 12, 2009 6:49 AM

@Clive “even though the systems are locked down, they are invariably vulnerable”

So as new holes are found they aren’t patched. The installed code base ages.

Risk metric? This should be a function that can be easily graphed.
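A function like that is easy to sketch. Purely as an illustration (the discovery rate and weighting below are invented numbers, not real vulnerability statistics), here is a toy graphable exposure function for an installed code base that ages without being patched:

```python
# Toy model: exposure of an unpatched installed base over time.
# All parameters are invented for illustration only.

def exposure(months_unpatched, discovery_rate=1.5, weight=1.0):
    """Expected number of open, applicable holes after t months,
    assuming vulnerabilities are disclosed at a constant monthly
    rate and none are ever patched."""
    return weight * discovery_rate * months_unpatched

if __name__ == "__main__":
    # A simple table; feed the same values to any plotting tool to graph it.
    for t in (0, 6, 12, 24, 60):
        print(f"{t:3d} months unpatched -> exposure score {exposure(t):6.1f}")
```

Even this naive linear model makes BF Skinner’s point: with a “no changes” policy the graph only ever goes up.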

Squirrly November 12, 2009 11:23 AM

In many ways, poor transmission line maintenance is more condemning than so-called “cyber attacks.”

(Use of the “cyber” non-word is really grating on me!)

Clive Robinson November 12, 2009 2:09 PM

@ BF Skinner,

“So as new holes are found they aren’t patched. The installed code base ages.”

Yes, it ages over a period of time that depends on the end user’s test-before-update policy.

Some end users make a “no changes” choice and effectively isolate the areas concerned as best they can.

Others test and update at down times (i.e., when the controlled part is down for other maintenance).

And other end users have a more normal software patch cycle.

With regards to,

“Risk metric? This should be a function that can be easily graphed.”

It obviously depends on a number of things, but the more isolated the control system, the lower the perceived risk.

Some control systems now run on high-availability, telecoms-grade open-source systems (thanks Sun & Nokia 😉 and achieve very high availability figures.

And yes I’ve seen some control systems get given the VMware etc treatment.

At the end of the day safety, dependability and availability are the usual key metrics for industrial control systems, and security is looked at in respect of those metrics.
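Of those metrics, availability at least has unforgiving arithmetic behind it. As a quick sketch (the percentage targets below are conventional examples, not figures from any particular plant), converting an availability figure into permitted downtime per year looks like this:

```python
# Convert an availability percentage into allowed downtime per year.
MINUTES_PER_YEAR = 365.25 * 24 * 60  # matches the 24x365.25 operation above

def downtime_minutes_per_year(availability_pct):
    """Minutes of outage per year permitted by a given availability %."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (99.0, 99.9, 99.99, 99.999):
    print(f"{pct:7.3f}% availability -> "
          f"{downtime_minutes_per_year(pct):8.1f} min/year of downtime")
```

Telecoms-grade “five nines” (99.999%) allows only about five minutes of downtime a year, which is why patch-and-reboot cycles are such a hard sell in these environments.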

Bruno November 13, 2009 2:58 PM

Just one thing: Vitoria is not the world’s largest iron ore producer. It may be the world’s largest iron ore exporter, because that is where one of Vale’s largest ports is located. Minas Gerais is Brazil’s largest iron ore producing state, contributing over 70% of the total produced by the country.

Marcos November 14, 2009 4:48 PM

@Clive

Yes, it claims to be the country with the most lightning, and most of it is of the worst kind (from positively charged clouds, if I remember it right). But the affected region is more prone to winds than lightning. That is the first time I heard about the power going off because of lightning, but I guess it must happen sometimes, since there is always some lightning with the winds.

Also, Brazil has a very large and very interconnected set of transmission lines. It seems that every country that depends on hydro power has that characteristic. Every time a big plant goes off, the entire grid goes off.

Clive Robinson November 15, 2009 4:15 PM

@ Marcos,

“That is the first time I heard about the power going off because of lightning”

The stats on lightning are quite frightening in terms of peak current, voltage, and power. Even moderate strikes can fuse soil into glass.

Interestingly, it appears that cities make the lightning above them more severe than in surrounding areas. Two theories about this are “heat islands” and “micro-particulate pollution.”

Heat islands is a fairly interesting idea, but… it does not fully account for the observed effects.

Micro-particulate pollution effects are being investigated at UMIST in the UK (by somebody also called Clive 😉

It looks like adding even very minor amounts of micro-particulates has a significant effect on charge separation in clouds. So ordinary pollution such as smoke and fumes increases the effects of lightning quite measurably.

All well and good, but… what about global warming and some of the proposed “technical solutions,” one of which is seeding the atmosphere with sulphur compounds, which are effectively micro-particulates?

As has been noted on a number of occasions, “Physics’ strongest law is the law of unintended consequences”…

It could be that a “man made” attempt to solve another “man made” problem might just bring blackouts/brownouts to a large number of places.

fabio November 16, 2009 6:35 AM

That’s what happened: they accessed a website, and there was a button, “turn off the whole country’s electricity.” They pressed it, and boom? If you believe such a computer existed and would be connected to the Internet… then you should believe in dwarfs as well.

DaveK November 17, 2009 5:58 AM

lol, hackers shut down your electricity grid, suuuure, and witches are responsible for the crops failing and the milk souring in your cows’ udders too, right?

there’s a very similar psychosocial dynamic going on here. expect Salem hacker trials some time in the next decade.

KF November 17, 2009 12:33 PM

Most recently, I noticed that CEMIG got hacked on 8-29-2009. References to the hack on Zone-H are missing, though it’s still in the Google cache:
http://74.125.93.132/search?q=cache:4RlN33UGTnoJ:www.zone-h.com/mirror/id/9509483+cemig.com.br+hacked&cd=3&hl=en&ct=clnk&gl=us
I find it odd that this hack from August has been pulled from Zone-H, and there are few other links or indications that the hack occurred.

http://arqtec.furnas.gov.br
http://br.zone-h.org/component/option,com_mirrorwrp/Itemid,0/id,14534/

http://ridat.furnas.gov.br
http://www.attrition.org/mirror/attrition/2000/05/20/ridat.furnas.gov.br/

http://www.mme.gov.br
http://www.zone-h.org/mirror/id/229927

http://www.energiabrasil.gov.br
http://www.zone-h.org/mirror/id/229926

http://hidroweb.aneel.gov.br
http://www.zone-h.org/mirror/id/763

http://www.aneel.gov.br
http://www.attrition.org/mirror/attrition/2000/01/28/www.aneel.gov.br/

http://www.cemig.com.br/
http://attrition.org/mirror//attrition/2000/06/11/www.cemig.com.br/

http://www.eletronuclear.gov.br
http://www.zone-h.org/mirror/id/19089

http://www.itaipu.gov.br
http://attrition.org/mirror//attrition/1999/12/21/www.itaipu.gov.br/
http://www.zone-h.com/mirror/id/19928
http://www.zone-h.com/mirror/id/12054

http://itaproxy.itaipu.gov.br
http://www.zone-h.com/mirror/id/14503

http://www.cnen.gov.br
http://br.zone-h.org/component/option,com_mirrorwrp/Itemid,0/id,5831370/

Anyone care to add to the list? It is really almost as easy as picking an agency name and adding ‘defaced’, ‘hacked’, ‘attrition’, or ‘zone-h’ to the search query.

Anyone care to speculate what occurs after a breach like this? How about before the public defacement?

Anyone think these fine defacing artists are the only ones to have breached these servers?

PatrickB November 25, 2009 1:32 AM

There’s an enormous amount said here about technical minutiae – but does anyone care about the root cause of the mess that the 60 Minutes segment described?

The whole premise of the segment was that exploitations of Information Security vulnerabilities aren’t just hypothetical, and the perpetrators aren’t just teenagers. It was meant to shake up the general public – and clearly take some digs at the previous administration’s stewardship (as if opinions on that topic could possibly be driven any lower).

I’m personally embarrassed that the discussion here has descended into hardcore geek badinage on sooty insulators and risk metrics. Are we maybe missing the whole point here – or is this forum only dedicated to displaying one’s technical prowess? How about a discussion on the root cause that got us here?

Here’s my contribution…

When I look in my wallet, I find several documents. I have a driver’s license, issued by my state. I have a pilot’s license that says I can fly helicopters, gliders, and only certain types of airplanes. I have a medical certificate that says a physician looked me over recently and feels fairly confident I won’t drop dead at the controls of an aircraft. I have a license to operate certain types of radio transmitters. I have an investigator license that attests I passed a test and a background check of my fingerprints. All of the licenses were issued by a government entity only after I passed examinations of some sort, and they are revocable at any time if I break certain rules of behavior.

When I look in my wallet, I don’t see anything that says I can practice information security. Yes, I have a CISSP and a CISA cert, but those aren’t issued by a government entity, and I can misbehave in all sorts of ways without jeopardizing those credentials. To get the CISSP, I only had to self-certify that I wasn’t a convicted felon: “I’m honest. Honest!” – which makes as much sense as allowing students to grade their own finals and athletes to test themselves for steroids. The State ran my fingerprints through the FBI and the DoJ for my investigator license, and the FAA continually checks to make sure I’m not a nut case and haven’t gotten a DUI. The FCC will yank my license if I so much as utter certain four-letter words. And I’m fine with all that!

Allowing information security practitioners to practice without any form of controls might make the Free Market advocates happy, but would you like to drive on roads where no one was required to be licensed? How about sharing the airspace with unlicensed pilots? Or maybe you feel that anyone should be able to practice surgery, and let “market forces” decide which self-certified surgeons are actually competent?

The interviewees in the 60 Minutes segment consistently portended a future disaster. And if the future follows its predictable pattern, then following that disaster we will see the same reaction that occurred after the sinking of the Titanic: government licensing of radio operators and the creation of an international Safety Of Life At Sea (SOLAS) organization, with strict safety requirements for all ships of any significant size.

So, enjoy your unlicensed careers as long as they last, because there’s change in the wind.

I for one am all in favor of getting some accountability over the practice of our art. The voluntary industry standards are, for the most part, a complete joke. How many PCI-compliant organizations have recently had major breaches? It’s a long list. And our economy is showing the dire impact of unlicensed behavior coupled with woeful under-regulation.

So far as the military being breached – they are simply continuing to demonstrate how “military intelligence” really is an oxymoron. I’m surprised they’re allowed to run while carrying sharp objects. They made assurances long ago that their networks would be “air gapped” from the Internet, and apparently they forgot that promise.

Clive Robinson November 25, 2009 5:26 AM

@PatrickB,

“does anyone care about the root cause of the mess that the 60 Minutes segment described?”

Yes, many people do, and they have differing viewpoints as to what it is.

“The whole premise of the segment was that exploitations of Information Security vulnerabilities aren’t just hypothetical, and the perpetrators aren’t just teenagers. It was meant to shake up the general public”

Unfortunately, the program was banging the wrong drum of “crackers are here, so panic now.”

The underlying problem is two fold.

1, The systems are brittle due to market forces.

2, Although the utility networks are tangible assets with tangible controls, they are managed by intangible systems.

Both of these have the same root cause: the models based on our old “tangible goods” market axioms do not work in the new world of “intangible goods” markets.

Thus all of the models we base our day-to-day thinking on are not designed to work in the intangible world.

You raise an issue with what is effectively competence to practice,

“Allowing information security practitioners to practice without any form of controls might make the Free Market advocates happy, but would you like to drive a car where no one was required to be licensed?”

You are not incorrect that this would to some extent improve security. But it needs to apply to the whole value-added process of the intangible-goods market, not just security.

This lack of a “licence” for “competent performance” is not a root-cause issue.

If you think about it, licensing is based on the process of “qualification,” which in turn is based on the process of “understanding,” which in turn is based on the process of “education,” which of necessity should be based on “scientific investigation.”

And this is the rub. For “the scientific process” to work, it relies entirely on one thing: the ability to measure the results of testing hypotheses.

Thus the requirement for viable metrics that themselves stand up to testing.

The process of establishing metrics is based on logic and axioms, as is the mathematics we use to build the models that enable us to carry out tests and develop the qualified understanding that gives rise to viable hypotheses.

Currently we have no “security metrics,” and we have no real idea how to formulate the axioms that might give rise to them.

Thus we currently rely on “best practice,” which is only as strong as its weakest link. And those who seek to break security concentrate on the weakest links.

Because of this, the whole “best practice” process is “brittle,” because it follows the implications of “low-hanging fruit.”

But we do not know how to predict (except in arm-waving ways), let alone measure, where the low-hanging fruit is. In effect, our security model is based on that of the Victorian engineers:

“If it breaks, bolt another bit on”

We know this unscientific process does not work; it kills people and is grossly inefficient, as well as being self-limiting, in the tangible world.

Unfortunately, as I noted earlier, we are now dealing with an intangible world.

The axioms our models use are often based on tangible constraints; they don’t work in an intangible world.

For instance, security can almost endlessly “bolt a bit on” because it does not break under its own weight. All that is required is that you keep throwing more resources at it. Thus, in the view of a tangible world, it can be made “infinitely strong,” as it has no “self-limitation” constraint.

Self-limitation is a fundamental axiom of the physical processes that engineering deals with. The mathematical models, and thus metrics, we have are tainted by this constraint axiom.

Thus the models of a tangible world are not directly transferable to an intangible world.

The axioms of the models are weaknesses that get exploited.

One such thing is “force multipliers.” In the physical world they are tangible and have costs for those wishing to employ them: one is the cost of manufacture, and, being tangible, they can only be in one place at one time.

In the intangible world, a force multiplier has only design costs for the user. The manufacturing cost is near zero, as you simply copy the information. The resources for the force multiplier are borne by others, “the victims,” simply because the systems are not aware of how their resources are being consumed.

Thus although I agree with your sentiment of,

“I for one am all in favor of getting some accountability over the practice of our art.”

Accountability needs metrics, and we lack these because the axioms of our current models are suspect in an intangible world/marketplace.

PatrickB November 25, 2009 3:38 PM

Clive, I appreciate your thoughtful reply. Although our perspectives differ slightly, I want to reinforce one of the points that you state very well: the reaction to, as you say, “bolt a bit on.”

I have been consulting for the past five years as a breach investigator and organizational troubleshooter. I am often called in only after a spectacular failure, and there are usually similar root causes that I see with monotonous regularity.

In my practice, I have seen many information security programs that, in their complexity, resemble an old steam boiler. There is a great deal of activity devoted to reading gauges and adjusting controls – under the threat that the whole contraption is likely to explode if it is ignored for even an instant. This is the consequence of a logical fallacy: that flawed technology can be somehow repaired merely by heaping more technology on top of it. And it’s a consequence of an information security system that is of the Technologists, by the Technologists, and for the Technologists. The security vendors naturally depend on this myopic view of information security to sell their products.

As the saying goes: “When all you have is a hammer, all problems look like nails.” I have seen very few enterprises that consider fixing the underlying flaws, incentivise their security practitioners to keep proper inventories in order to identify and remove risks that have no business benefit, and who view information security as a business process that can give the enterprise a competitive advantage.

Many information security practitioners report directly to IT, which places them in a direct conflict of interest where their role then involves telling the IT Director that his baby is ugly and that he can forget about receiving his quarterly bonus. In such an environment, what you see are circular rationalizations for ignoring fundamental problems, and lots of technology deployed in an ad hoc manner (“kludges”) around flawed processes. You see lots of IDS that is actually being used to forensically discover what should have been initially documented, lots of logging with little or no human analysis, and ineffective countermeasures that prop up broken applications. Taken in totality, what I usually find is an architecture that resembles a Gold Rush shanty town just waiting for a spark to ignite the fire that will cascade through the entire mess.

One of the fundamental concepts of good engineering practice is that you cannot correct a fundamental technological flaw… simply by adding more technology to it.

Clive Robinson November 26, 2009 6:15 AM

@ PatrickB,

From what you have said I think we actually agree about the effective cause of the problem, which is lack of a reasoned or engineering approach to IT security.

For the sake of clarity (for other readers etc) I’m going to drill down through the layers to show where I’m looking.

Unfortunately this will give rise to a very large post, which I hope the Moderator and others will forgive (I’m sure Bruce will find it moderately interesting ;).

As you note the development of security tools and policies is a bit of a “chicken & egg” problem.

As you correctly say, vendors make what people buy; unfortunately that is often not what buyers actually need.

Likewise, policies are (or should be) based on what is “known” to be the correct choice within the constraints of the business requirements (where known).

The question then arises: what do the IT practitioners (“techies”) need?

And as you note, their first need is to keep their jobs by “keeping the man who cuts their cheques happy.”

Which, as you outline, often boils down to the “crocs in the swamp” problem of assigning the incorrect priorities due to the situation (fighting the crocs, not draining the swamp).

Unfortunately this puts the techies into the position of “fire fighting in a shanty town.” That is not what they should be doing, as it is very dangerous and full of unknown issues that will likely keep them trapped there as long as the shanty town exists.

What they actually need is not the tools to fight individual fire types, but the political will and expenditure to have the shanty town cleared out and proper “fire code” houses built instead. Then things move from a dangerous unknown situation to a less dangerous known situation, where effective tools can be designed and implemented.

Thus we see an underlying series of problems. Top down these are,

1, Political will to change.
2, Finance to pay for the change.
3, A plan of what to do.
4, The knowledge of what is required.

Each step is reliant on the one below it. However, securing the political will and finance to carry out an action is a business process (which techies invariably get wrong, because they “don’t speak business”).

Step three is both a business and a technical process. That is, the business supplies requirements information so that the process can work effectively. The techies look at these and supply scenarios that meet or partially meet the business requirements within other constraints, such as the tools available (a major “oops” problem).

These scenarios are supplied with costs and “known benefits,” plus (if the techies have any sense) “bonus benefits” that offer further opportunities to the business.

The business side examines the scenarios to see whether or not they fit the requirements, and looks at the benefits, which may change the business requirements.

This process should loop around a few times until a scenario that is a best fit within the realisable requirements is found.

Then the resulting top three options get sent up to the business execs, who make a choice and supply the finance (2), or return/reject the plans.

The problem with step 3 is that it relies on step 4, “knowledge,” which needs to be reliable and preferably scientifically sound.

Unfortunately, as we know, that is not available, because the tools on offer are supplied by vendors trying to make a sale, and thus meet perceived rather than actual needs.

So the process above (steps 1-3) is broken because of the GIGO (garbage in, garbage out) principle that clearly exists at step 4.

I think you would probably be in major, if not full, agreement with the above reasoning.

The GIGO problem is further exacerbated by “pleasing the man”: the “knowledge” obtained is not what is actually required (that is, it is driven by incorrect needs).

The problem is we don’t yet have scientifically based “fire codes” by which the houses can be built.

What we have is just “best practice,” which, although a step in the right direction, is by no means the best way, as it is based on the untested observation that,

“The top ten companies with the minimum number of reported incidents do this in common”

That is not science, and sadly it is also an admission of failure: “we just don’t know what works.”

So what does step 4 need to get “reliable knowledge”? Well, it needs reliable tools.

Reliable tools come about in two ways:

A, improvement by trial and error, the “Artisan Approach”;

B, improvement by scientific investigation to find the base principles and thus build the tools on firm foundations, the “Scientific Method”.

The problem with A is that it takes a very long time, and often ends up in over-built solutions (the iron-rimmed cartwheel) which limit further development or move it in the wrong direction (incorrect placement of damping measures, requiring significantly higher energy costs).

Whereas B looks at the problems and provides information to engineer a solution (the pneumatic tyre and wire-spoked bicycle wheel). This provides an efficient solution to the individual tasks and allows significant progress (modern wheels for cars and aircraft).

Thus step 4 requires further supporting steps below it:

5, Tools to reliably assess what needs changing.

Which requires,

6, Tools to identify and measure problems correctly.

Which in turn requires,

7, Usable and reliable measurements (metrics).

This is the step where things are clearly not right to most people who care to examine the issue.

We simply do not have metrics that are of any use.

The problems behind this are manifold.

The first is that IT developers and technicians are not only not scientists, they are not engineers either.

They are by and large “Artisans” (or worse, temperamental artists 😉

(Also, by and large, most IT techies are not business savvy either, which is another real issue: they really need to talk to “the man” in his language, not put him to sleep with what he sees as “techno babble”. But that is a related high-level issue, not a fundamental one.)

Artisans develop their tools and goods by ad hoc “patterns”, that is, by a trial-and-error method/model (if it breaks, bolt a bit on or make a random change). Which we both clearly recognise is a significant problem.

The problem with being an Artisan these days is that by and large the world has moved on into science-backed engineering and business. Thus the way of the “Artisan” does not fit anywhere other than very limited places such as “craft markets”, and even there it has been usurped by “Artisanal” methods.

Thus if you step outside the “Artisan” mindset and examine other industry sectors, such as mechanical engineering, you will see something that will make you pause for thought.

Each level of the business, from the lowliest “tool hand” through to the senior executives, has metrics which are, importantly, “specific to the job in hand”. More importantly still, each metric at the bottom has a recognised way of being converted into the metrics at the layer above.

That is, “tool wear” converts via recognised steps into a “cost benefit ratio” as part of any business project, which in turn allows a “return on investment” calculation or a correct “risk assessment” to be made at higher levels.
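As a purely hypothetical sketch of that conversion chain (every name and number below is invented for illustration, not taken from any real costing model), a shop-floor metric such as tool wear can be rolled up into business-level figures like this:

```python
# Hypothetical sketch: rolling a shop-floor metric ("tool wear") up into
# business-level figures. All names and numbers are invented for illustration.

def annual_tooling_cost(wear_per_hour: float, hours_per_year: float,
                        cost_per_tool: float) -> float:
    """Tools consumed per year (wear fraction/hour * hours) times unit cost."""
    return wear_per_hour * hours_per_year * cost_per_tool

def return_on_investment(annual_benefit: float, annual_cost: float) -> float:
    """Simple ROI: (benefit - cost) / cost."""
    return (annual_benefit - annual_cost) / annual_cost

# A tool wearing out 0.1% per machine-hour, run 2000 hours/year, at 150/tool:
cost = annual_tooling_cost(wear_per_hour=0.001, hours_per_year=2000,
                           cost_per_tool=150.0)                 # ~300 per year
roi = return_on_investment(annual_benefit=1200.0, annual_cost=cost)  # ~3.0
print(cost, roi)
```

The point is not the arithmetic but that every step in the chain is a recognised, auditable conversion from one layer's metric to the next, which is exactly the kind of chain ITSec currently lacks.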

Which begs the question:

“Why does ITSec not have usable metrics?”

However, it is not just in IT security but in IT in general that this “No Metrics” issue arises (I have deliberately left communications out, for the obvious reason that it has well-established and reliable metrics that work all the way to the top, which I will explain further down).

Metrics are measurements, not comparisons. To be usable they have to be not only reliable but also quantifiable independently of what is being measured.

As an analogy, saying one piece of wood is longer than another is not very helpful; it is not a quantifiable measure. However, saying it is 1.27m long is useful, as you now have a metric that you can use independently of the piece of wood.

Information, however, is not really tangible; it has no currently measurable dimensions or forces that can be used as the basis for metrics.

(Another reason I have left communications out is that it is most definitely based in our physical world. It is based on the movement of energy or physical items, and as there is a direct convertible relationship between energy and matter, it requires forces to operate that are all constrained by physical properties we know how to measure.)

Thus metrics are based on the science of measurement (metrology), which mainly deals with the abstraction into information of the properties of physical objects and forces.

One of the ways any branch of science moves forwards is to borrow models from another branch of science.

Provided the model has similar underlying assumptions then it is likely to produce usable results, thus further knowledge in the borrowing branch of science.

The real problem with ITSec, and IT in general, is that it is based on “information”.

That is, it is based on “intangible”, non-physical entities and forces, whereas most of our models are based on “tangible” physical entities and forces.

Thus by and large all our engineering models are about finding “physical limits” and staying within them (which is why we have metrics for communications but not for the information it carries).

Obviously, if something is not physical it has no physical limits, therefore the underlying assumptions (axioms) of the physical models may be incorrect.

One aspect of this revolves around duplication.

In the physical world you need to work on physical objects, which incurs significant costs when making exact copies.

Apart from the very small energy involved in the copying and communication processes, the cost of duplicating information is as close to zero as makes no real odds (the real cost is in storing the information in a physical entity).

More interestingly and importantly, as information is not tangible, it is not “localised” like physical objects.

This has significant issues for trying to move physical security models into information security models.

A thief, as a unique physical entity, can only be in one place at a time, which is a fundamental physical constraint.

Secondly, what a thief can do is limited by physical forces, which is a second constraint.

The first constraint is “localisation”, which prevents “action at a distance”; the second is that a “force multiplier” such as a glass cutter or drill is physical, and this imposes obvious limitations on what it can do.

Neither of these constraints applies to the “intangible” information world and its security.

That is, an information thief can be anywhere that has communications access to the information (and is therefore not localised).

Secondly, their “force multiplier” tools are built not of physical items but of information, and are thus infinitely copyable. The only real cost is the energy involved in copying the information and using the tools.

And as we know, the bulk of that energy cost is borne not by the information thief but by the victim.

If you examine information, you will find that the only real measure we have for it (entropy) is probabilistic.

All the other measures of information are due to the costs of storage and movement in our physical world.

Thus we have the problem that there are currently no usable metrics for information, just metrics for the physical manifestations of storing and moving it.
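That one probabilistic measure can at least be computed. A minimal sketch of Shannon entropy follows; it is not a security metric, just the standard measure of information content in bits per symbol, and notably independent of whatever physical medium stores the bytes:

```python
# Minimal sketch of Shannon entropy, the one probabilistic measure of
# information noted above: H = sum over symbols of p * log2(1/p).
from collections import Counter
from math import log2

def shannon_entropy(data: bytes) -> float:
    """Entropy in bits per byte, computed from observed symbol frequencies."""
    total = len(data)
    return sum((n / total) * log2(total / n)
               for n in Counter(data).values())

print(shannon_entropy(b"aaaaaaaa"))        # a single repeated symbol
print(shannon_entropy(bytes(range(256))))  # uniform over all 256 byte values
```

A constant string yields 0 bits per byte, while a uniform distribution over all 256 byte values yields the maximum of 8, illustrating that entropy measures the information itself rather than its physical footprint.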

The nearest “science” we have from which to borrow models for developing a science of information is quantum physics (it is entirely probabilistic and based on states, which is directly equivalent to information).

This brings up an interesting side point,

The position of “father of the scientific method” is often accredited to Sir Isaac Newton, who through the study of light and gravity gave rise to our understanding of how forces work on physical objects.

Importantly (for my argument 😉 this body of work removed an incorrect assumption from our view of the physical world (that objects fall at different rates depending on their mass, an insight owed in the first instance to Galileo), which gave rise to the advancement of ideas such as friction.

Thus a bottleneck to human understanding was removed. However, Newton’s laws of matter in motion are known to be wrong at certain extremes. That is, the laws model the perceived world, not the actual world.

Matter in motion is subject to the laws of “relativity” (macro) and “uncertainty” (micro).

The problem with information is that, being probabilistic in nature and not constrained by physical issues, there is no more cost in working at the extremes than in the norm.

This is completely different from our physical world, where getting out of the norm requires significant cost. That is, as you approach the extreme, the cost goes up geometrically. Thus by and large the physical world stays in the norm, where Newton’s model works.

Our statistical models of how things work in the physical world pretty much all have an underlying assumption, in one form or another, of “the norm is more probable due to cost”.

Thus to get usable metrics for information we need to find new models, based not on our tangible “Newtonian” world but on probabilistic measures without cost constraint.

We then have the awkward process of finding ways to convert the fundamental metrics back up into usable metrics at each layer above.

As a final aside 😉

Your point,

“incentivise their security practitioners to keep proper inventories in order to identify and remove risks that have no business benefit”

Is the “less is more” paradigm, which I learnt whilst young.

Some of my relatives used to own a farm, and there was an old boy who worked there who tended the apple trees. Every year I would see him “hard pruning” the trees, and I asked him why. He said if you just let the tree grow, it will waste its energy making lots of little apples that you cannot sell. Prune it well and it makes fewer fruit, which sell the best.

PatrickB November 28, 2009 3:37 PM

Clive, as you surmised, I am in complete agreement. As long as we don’t pretend to be actuaries and try to stray into “quantitizing risk”. While everyone in this field has their own gut feelings about the relative risk of vulnerabilities based on experience, it is impossible to apply a concrete quantity to any given risk. The likelihood that any given vulnerability will produce any given negative result depends too much on the adversary’s motivation and skill. Once a human agency comes into play, one might as well read tea leaves to produce predictions. The confluence of poorly designed applications, tight coupling of what should be separate platforms and processes, and sheer complexity makes any cascading or leveraged security failure impossible to predict beyond a visceral gut feeling.

What is missing, as you say, are valid metrics. And some consequence for not abiding by them. Since most information security practitioners spent their larval stages in IT, we can’t expect them to behave like engineers. Still, I am deeply disappointed that the practitioners of such a young field of technology don’t show substantial interest in High Reliability theory, risk management principles, or the basics of logic and the scientific method.

As a consultant, I frequently have to cope with astounding levels of logical fallacy. The most dangerous is a combination of delusional optimism coupled with the belief that “perception is reality.” The most common fallacy is normalization of deviance: the argument that a risk gets smaller and eventually disappears the longer one exposes the technology to exploitation, i.e., the “we’ve always done it that way and nothing bad has happened” argument. Yeah, and for a long time we launched space shuttles with leaky “O” ring seals and launch-phase ice impacts…

When I look back on the history of other technologies, what I see is a consistent pattern of ad hoc innovation and “industry self regulation” … right up until something really bad happens. Then society demands regulation and the licensing of key personnel. I fully expect that to happen in our field, and I’m not averse to the idea. What we have now clearly doesn’t work.

DEADLINE January 16, 2010 2:46 AM

The questions I’m searching for answers to (but have to date not found) are:

A: Are the Brazilian systems connected to the internet, yes/no?

B: When Brazilian authorities allege they are not “directly” connected to the internet, does that mean there is no connection?

Or does it simply mean that while you cannot directly access the system online, you could access hardware which is simultaneously running the program for such grid systems?

If so, could a hacker still interfere with such systems? And could a hacker attack an automated system in a more physical way, i.e. on-site sabotage or use of EMP technology, such as that tested and explored at a number of weapons expos worldwide last year, including the Latin America Aero and Defence (LAAD) trade fair, held at Riocentro, in the city of Rio de Janeiro, in April 2009? (See the Statement of Jacquelyn L. Williams-Bridgers, Inspector General of the Department of State and International Broadcasting, “The Year 2000 Computer Problem: Global Readiness”, before the Committee on International Relations, U.S. House of Representatives, October 21, 1999, which confirms that Brazil does indeed use a computerised power grid in which a central command post in Brasilia is linked to 10 regional command posts, though not necessarily linked via the internet per se.)

Clive Robinson January 16, 2010 5:51 AM

@ DEADLINE,

“Or does it simply mean that while you cannot directly access the system online, you could access hardware which is simultaneously running the program for such grid systems?”

The answer (I suspect) is that communications systems are expensive, especially when bespoke. Thus much of a command and control network is carried via one or more “common carriers” or telcos.

Take the UK’s BT: they had a stated aim of making all their systems IP-based by the start of the 21st century.

The result has been that in some places ordinary telco voice traffic has gone down the same length of glass as private and public IP traffic (and, I suspect, private traffic within public traffic as well).

That is, the only thing keeping it all separate is the QoS routers. There is no physical separation, and like as not no real equivalent of an information air gap either.

Thus if you know which box to subvert, you own a piece of a private network carried on a public backbone.

The concepts of logical and secure communications separation are difficult enough for even engineers to get a proper grip on. How do you expect legal/political/business/journo types to get even a basic understanding, let alone give accurate answers?

As I keep saying (to anyone who might actually listen and think about it 😉, “efficiency and security don’t make easy bedfellows”.

It gets worse when efficiency is mandated by fiscal policy derived from perceived shareholder need…

Don’t be surprised, as a result, to get a bag of nuts that even a hungry squirrel will turn its nose up at 😉

Clive Robinson January 16, 2010 6:32 AM

@ DEADLINE and others,

As you can see from the report you quote (the Statement of Jacquelyn L. Williams-Bridgers, Inspector General of the Department of State and International Broadcasting)…

“i.e. on-site sabotage or use of EMP technology, such as those tested and explored at a number of weapons expos worldwide last year”

You hear a lot about EMP, and subsequently HERF, weapons off and on.

I guess it won’t be long before somebody decides to class them as “WMD” or some such (if they have not already).

The problem is that you can buy the parts to make a 2.5GHz HERF weapon for less than 200USD. And if you know what you are searching for (coin crushers and Z-wire systems, for example), the basic parts of a low-grade EMP weapon or higher-grade HERF weapon can be seen, as well as how to make bits of them.

But why bother going for brute force when a little finesse (fault injection via the modulation waveform of an RF source) will be easier and one heck of a lot safer for the operator (oh, and more environmentally friendly to boot ;).

You used to be able to find plans on the Internet for converting a microwave oven into a HERF gun, and they might still be around. However, you will get detailed plans for most of what you need from the various ARRL magazines and articles on using microwave ovens for the 13cm amateur radio band.

However, if you build one, don’t expect too much of the results as a plain EMP or HERF weapon. Remember, though, that a microwave oven coupled into an appropriate horn antenna that will fry consumer electronics (CCTV cameras, people’s iPods and mobile phones) will just as easily cook a pork chop if it gets in the wrong place (and they once called humans “long pork”), so there are some significant risks for the unwary…
