Comments

uk visa February 15, 2010 8:26 AM

Interesting but he misses the major point IMO:
A couple of points from: http://en.wikipedia.org/wiki/Huawei
2009 Q3- Huawei passed Nokia Siemens Networks for the No. 2 position in the global mobile infrastructure equipment market in the third quarter.
Huáwei itself variously translates as “achievement”, “magnificent act”, “splendid act” or “China can”. Such a translation is an etymological curiosity, not to be used generally.
If there’s a Cyber war the West has already lost it.

Carlo Graziani February 15, 2010 9:02 AM

It’s a very balanced and nuanced article, well worth reading in full.

On the subject of the “cyber” aspect of US-China competition (or, heaven forfend, conflict), it seems worth emphasising even more strongly a point that Fallows reiterates a few times: China’s top priorities are internal economic development and, concomitantly, regime preservation. The structure of their economic development is of such a nature that if Western economies (and in particular that of the U.S.) were to suffer serious, long-term damage from a cyber-attack, their own export-led development and their stored wealth would also be seriously damaged. As nuclear winter limits the offensive utility of nuclear weapons, global economic interdependence must necessarily give pause to the cyberwar planners of a government that stashes an appreciable fraction of its national wealth in a huge pile of U.S. Treasury bonds.

This is not to say that a major Chinese cyber attack on Western economic infrastructure is inconceivable. But it does say that if a conflict ever escalates to the stage where the Chinese government is prepared to carry out that sort of attack, then the cyber aspect will probably already be the least of our worries by that point.

phred14 February 15, 2010 9:58 AM

I would say that the far bigger risk is that of terrorists “getting a clue”. The most fearful thing about cyber-warfare is the inability to know exactly who mounted the attack. This single thing lends itself well to the terrorist model – not that they don’t want credit to be known, but that they don’t want their platform vulnerable to state-owned resources. (Like Afghanistan was, and perhaps Yemen is.)

As for China, if they were to launch an undetectable cyber-attack on the US that would devastate our economy, it would certainly affect their ability to export to us, and therefore hurt their economy. Any attack seems to me more likely to come from a “nothing to lose” party, and in this age that means terrorists. (Though perhaps not limited to them, given more thought.)

Muffin February 15, 2010 11:07 AM

“Attacks—not just from China but from Russia and elsewhere—on America’s electronic networks cost millions of dollars […]”

I don’t doubt that, but at the same time, I can’t help but wonder how much attacks from the USA on Russia’s or China’s electronic networks cost those countries.

Matt from CT February 15, 2010 1:41 PM

One thing that struck me when I read the article last week is the view of Russia & China that the U.S. Military is now, paraphrasing, “Too network dependent.”

As such they make a primary military goal of knocking out the network.

This is a mirror image of the long held view of the U.S. that Soviet styled militaries were too dependent on central command and control, and thus the first thing you do is knock out the network to keep them from issuing orders.

Whether it’s accurate or not, it’s an interesting point to ponder. Is it true, or is it other nations “fighting the last war” and projecting their own weaknesses onto the U.S.?

======================
If China launches a debilitating attack on U.S. cyber infrastructure, we have bigger problems than we want to deal with.

The more likely goal of such efforts isn’t to kill the goose that lays the golden egg.

It’s to put their thumb on the scale to gain economic advantage, whether that’s intelligence on business deals or stealing intellectual property.

Clive Robinson February 15, 2010 1:54 PM

I’m getting well known for my “beware of China” attitude.

The reasons are quite simple.

1, China, unlike the US, takes a very long-term view.

2, China has learned that invading and holding buffer countries does not work the way it used to.

3, China has adapted to that change, unlike other countries, and can be seen using that strategy in the southern hemisphere.

4, China has about 1/6th of the world’s population, and fifty years ago 99% of them were farmers etc. Belief and work ethic are inextricably linked in their minds.

5, China has to feed 1/6th of the world’s population.

6, China has developed economically far faster than just about any other similar population.

7, China needs raw resources and external markets to keep its forward momentum.

8, China is on its way to being the world’s #1 for energy consumption.

9, China has bought into the US economy in what at first appears to be an endless mutual pact.

The first thing most people think (incorrectly) is that China has a significant investment in the US that ties the two together in a Mutually Assured Destruction (MAD) pact. This is actually incorrect: due to certain differences in outlook, China most definitely has the upper hand, in that it can develop other marketplaces over the next ten years, while the US, being a consumer nation and the world’s largest debt hole, does not have that escape route. The Chinese could, if they wished, pull the plug on the US in less than a year. Over and above that, the US currency is so oversubscribed that any loss in confidence will cause both the US and China to suffer in the short term, and the US for maybe the next 3 to 6 generations.

Senior Chinese Military figures are already talking openly about having a “cold war” with the US in the next 10 years. Meanwhile the US has been ignoring the warning signs for well over that period. We see the US press spouting surprise at China in recent times, the world press however has been overtly commenting on the China-US cool down since halfway through the last Bush administration.

China does not need the US long term or even medium term. It has as noted above two pressing needs,

A, Bring China out of its agrarian feudal past.

B, Maintain the central fiefdom.

That is, the “head of the snake” wishes to be in control in much the same way it has for centuries, the difference being they want to be world leaders in technology etc. so they can spread their writ far and wide without the problems of overt occupation or Empire.

Thus to feed their own people they have an issue, in that the new “Technocrats” want the baubles, trappings and status symbols of the “western world”, which cost significantly (how many days of rice and vegetables do you get for the price of a top-of-the-line European car?). This means that China has to balance the wants of the increasing few against the needs of the many.

This can only happen with a rapidly expanding economy. However, this creates more “technocrats”…

Hence the point about feeding 1/6th of the world’s population is by no means the major problem any longer.

The point most people forget is point 8: this is the real Achilles heel. If you look into the history of “Water Rights” you will see a “blueprint” for how the energy wars are likely to be fought.

Keep an eye on the south pole and even the moon…

Daniel February 15, 2010 2:20 PM

I consistently fail to understand why anyone sees China as a threat at all. Is China ambitious? Why yes, it is. But so what? Why are China’s ambitions a threat to the US?

I see it more as a sporting match. My football team may have a rival, but I don’t think of them as a threat. I want China to give us all they’ve got. Without the challenge it would get mighty stale.

BF Skinner February 15, 2010 4:06 PM

@Clive “getting well known for my…”

Odd. You can replace the word China with Iran and the analysis reads pretty much the same.

BF Skinner February 15, 2010 4:09 PM

@Daniel “fail to understand why anyone sees China as a threat”

Because like all of America’s enemies they want to sap and impurify our precious bodily fluids.

Brandioch Conner February 15, 2010 5:16 PM

“With financial, medical, legal, intellectual, logistic, and every other sort of information increasingly living in “the cloud,” the consequences of collapse or disruption are unpleasant to contemplate.”

Yay for buzzwords!

Meanwhile, at the manufacturing company where I work, we have tape backups of all of our systems.

“First, nearly everyone in the business believes that we are living in, yes, a pre-9/11 era when it comes to the security and resilience of electronic information systems.”

Yay for meaningless emotional appeals! I’m still waiting for any of the people using that phrase to explain how 3,000 people would die in a “cyber attack”.

“Electronic-commerce systems are already in a constant war against online fraud.”

See Bruce’s other comments on who bears the financial burden vs who had the ability to prevent such.

“It is well-funded and pursued by mature individuals and groups of professionals with deep financial and technical resources, often with local government (or other countries’) toleration if not support.”

Seeing as how the USofA still doesn’t have any laws about ISP’s preventing zombies on their networks, does that mean that the USofA is one of those that tolerates such actions?

I’m seeing a whole lot of generalities and fear there. But not much in the way of understanding the technology.

moo February 15, 2010 5:18 PM

@BF Skinner: That reply was perfect, thank you for that. =)

@Daniel: U.S. foreign policy for a long time has been to secure the rest of the world’s resources for themselves, using whatever means are necessary. This includes toppling democratic governments to install U.S.-friendly dictators, and even outright invasion of other countries. They would have stood by and let Saddam have Kuwait if it wasn’t full of oil. Ten years later, invading Iraq, same thing. South America is also full of examples of U.S. meddling.

Anyway, its natural for them to worry that what goes around comes around. Eventually some other country (China?) will be top dog, and then they will probably want those resources for themselves. They may even be willing to fight with the U.S. over them, overtly or covertly. An electronic “cold war” with China seems distinctly possible, and if things go in that direction, it would be wise for the U.S. to be prepared for it.

Steve February 15, 2010 8:07 PM

Interesting that a system designed originally to “survive” a nuclear war may become our Achilles heel, no?

ltr February 15, 2010 8:25 PM

@BF Odd. You can replace the word China with Iran and the analysis reads pretty much the same.

Iran has to feed 1/6th of the world’s population? That don’t make no sense.

Winter February 16, 2010 1:26 AM

1 Chinese are humans. When China develops, 1.5 billion more humans will become rich. What is bad about that?

2 Powerful states can inflict damage on others. If China becomes a world power, they will be able to harm others. Destroying China is the only thing that could prevent that.

3 International cyber attacks are a non-issue. With US bridges and levees falling apart and complete industries mismanaged into bankruptcy, I do not see what makes the Internet special as a vulnerable resource.

4 The US does not need China for a meltdown. Remember the great power blackouts? The most prolific Spam relays are in the USA.

The only times the Internet was really in jeopardy was due to holes in MS Windows (produced in the USA) servers. It is estimated that 80% of PCs running MS Windows (produced in the USA) are infected with malware of some kind. The botnets used in cyber attacks almost exclusively run MS Windows (produced in the USA).

More likely this whole cyber attack madness is about diverting attention from mismanagement and bad policies.

Winter

Rob Lewis February 16, 2010 7:35 AM

@Matt,

This paper by General Habiger, released by the Cybersecure Institute, speaks to your comment about the US military’s “net-centric” direction and the ability of the US to maintain competitive advantage as a cornerstone of maintaining economic advantage. Companies have to learn that they are risking their IP for short-term contracts, after which they will be competing with the Chinese in other world markets. It does not have to be that way. It’s also common sense that you don’t play poker with your hand exposed.

Protect your IP (from everybody) and you maintain market leadership if your product and delivery has merit.

The paper calls for a defensive strategy for protecting American interests.

http://cybersecureinstitute.org/docs/whitepapers/Habiger_2_1_10.pdf

Clive Robinson February 16, 2010 8:36 AM

@ Winter,

“1 Chinese are humans. When China develops, 1.5 billion more humans will become rich. What is bad about that?”

It depends on “at whose expense” it is and the collateral damage caused along the way.

“2 Powerful states can inflict damage on others. If China becomes a world power, they will be able to harm others. Destroying China is the only thing that could prevent that.”

Hmm, any state with the right resources can destroy another state; it actually takes very little effort if you are not looking for immediate results.

However, there is no need to destroy a state and/or its people to prevent them doing it to you.

The simplest way is not to make it worthwhile, and there are a number of ways you can do that.

“3 International cyber attacks are a non-issue. With US bridges and levees falling apart and complete industries mismanaged into bankruptcy I do not see what makes the Internet special as a vulnerable resource.”

Localisation and force multipliers. In the tangible physical world you would need to be at a point in space and deploy a force multiplier to do any significant damage.

In the intangible information world the weapons are information; they have no physical limitations that place any real constraints on them.

For instance, let’s make the assumption that I have a zero-day attack that enables me to “covertly” infect 30% of the work-based servers in the US with a piece of malware at a very low level, such that it encrypts data going to or from hard drives and tape backups etc.

I send out a message that tells them to forget the key on such-and-such a date and time…

Bang: something between 1/6th and 1/3rd of companies’ backend systems become locked, as do their backups over, say, the past three months.

The stats show various figures, but you are looking at something like 80% of those affected businesses ceasing to trade within another three months.

It is estimated that as little as a 1% real shrinkage in the US economy could put the US and dependent nations into a downward economic spiral. If that is the case ask yourself what losing 25% of the economy in three months would do?
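For a rough sense of scale, the figures in this scenario can be run as a back-of-envelope calculation; every number below is one of the hypothetical assumptions stated in the comment, not measured data:

```python
# Back-of-envelope check of the hypothetical attack scenario above.
# All inputs are the commenter's assumptions, not real statistics.

locked_low, locked_high = 1 / 6, 1 / 3  # share of companies with locked backends
failure_rate = 0.80                     # share of affected firms folding in ~3 months

fold_low = locked_low * failure_rate    # lower bound on companies ceasing to trade
fold_high = locked_high * failure_rate  # upper bound on companies ceasing to trade

print(f"companies folding: {fold_low:.1%} to {fold_high:.1%}")
```

The upper bound lands near the “25% of the economy in three months” figure only if one equates the share of companies lost with the share of output lost, which is itself a strong assumption.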

“4 The US does not need China for a meltdown. Remember the great power blackouts? The most prolific Spam relays are in the USA.”

Oh so true; the real question, though, is who runs those Spam relays, not where they are geo-located.

“The only times the Internet was really in jeopardy was due to holes in MS Windows (produced in the USA) servers. It is estimated that 80% of PCs running MS Windows (produced in the USA) are infected with malware of some kind. The botnets used in cyber attacks almost exclusively run MS Windows (produced in the USA).”

Hmm, not sure about that; there is the story of half the US going “off-line” due to the DNS servers becoming unavailable thanks to a rodent with a taste for data cables (I just wish I could find the link).

However, you are correct that a “mono-culture” is not a good idea, as it removes hybrid vigour from the system, which is normally a very, very bad idea.

“More likely this whole cyber attack madness is about diverting attention from mismanagement and bad policies.”

That may be so but there are very real vulnerabilities which can be and are being exploited.

I suspect it is only the “short term” outlook of cyber ne’er-do-wells that actually limits the amount of damage done.

It is this aspect of short-term high visibility that makes botnets fairly easy to detect and, currently, to deal with.

Imagine what the result would be of long-term covert conversion of PCs into members of an “intel-gathering botnet”.

It brings you around to that cascade of questions starting with,

1, Why are we connecting PCs in the workplace to the internet?

2, How are we making a valid inventory of the IP and its security on these PCs?

It is at this point, if you have done the analysis correctly, that you can generally say that not all PCs are equal.

Therefore you might be tempted to state that not all security needs to be equal.

However, that is a dangerous assumption to make, for many reasons.

And although you are correct about mismanagement and bad policies, I think you have your comment the wrong way around. This “cyber attack madness” is a direct result of the failings of management and policy, and it tends to highlight them, not cover them up.

However, perception is rarely based on reality, so with a bit of judicious spin a manager could absolve themselves of responsibility for their inabilities, by effectively arguing it was an inevitable consequence of having to use poor software from one major supplier (back to the mono-culture issue).

So yes, you are correct from one perspective and wrong from another…

Just the sort of place a FUD-Spin meister loves to thrive…

Ian February 16, 2010 8:37 AM

Given the close ties between China and US in terms of our economies, I agree with Carlo that a full-out cyberwar wouldn’t be a great idea, at least in the short term. As Clive points out, though, the Chinese have traditionally planned for the long term. With that in mind, I can definitely see them sponsoring a lot of “independent nationalist” hackers to continue looking for American IP to help accelerate their rise. Why risk full-scale cyberwar when you can conduct low-risk, high-yield operations that will eventually put you on top?

Then again, Chinese leaders in the past haven’t been averse to the idea of MAD. After all, didn’t Mao point at the Western casualty calculations for a nuclear exchange and say that he had no problem with China losing 350 million people? Can’t remember where I heard that, so I don’t know if it’s true or not, and I’d certainly hope we don’t have too many Maos out there any more.

Brandioch Conner February 16, 2010 11:06 AM

@Ian
“With that in mind, I can definitely see them sponsoring a lot of “independent nationalist” hackers to continue looking for American IP to help accelerate their rise.”

Why? The USofA is exporting jobs to China already. We’re training them in everything they’ll need to know. Why waste time on “hackers”?

The only exception to that is specialized military hardware. And that’s easy enough for them to acquire using regular spies.

Craig February 16, 2010 11:07 AM

Very good and in depth analysis, with some very poignant comments.
It would be ignorant to think that all countries are not going to look at their position in the global arena, for the short or long term.
Scientia potentia est is a Latin maxim, “for also knowledge itself is power”, stated originally by Francis Bacon in Meditationes Sacrae (1597).
Now quoted as “Knowledge is Power.”

Ian February 16, 2010 1:42 PM

@Brandioch

True. My point was that it costs them nothing to encourage their own people to pull stuff like this, so there’s no reason for them at this point to NOT encourage it. It’s true that we’re helping to supply, train and employ people who may not be our friends – but that’s nothing new, unfortunately.

Brandioch Conner February 16, 2010 3:08 PM

@Ian
“It’s true that we’re helping to supply, train and employ people who may not be our friends – but that’s nothing new, unfortunately.”

Yep. And giving them jobs as sysadmins and net admins and so forth in the Chinese sites of the companies expanding into China.

Now, how many of those companies do you think are going to have internal firewalls limiting what those Chinese employees can do?

Which is one of the reasons why articles about the “Chinese Cyber Threat” are so wrong. If there was a threat, we’ve already invited them in and given them the keys to our corporate defenses.

False Data February 16, 2010 7:11 PM

Reading the article brought to mind the U.S. Civil War, in which the North used industrial might to grind down the South. Similarly, World War II seems to be a war in which the U.S. focused strongly on industrial might: build the liberty ships and bomb the ball bearing factories. If we were to get into another large-scale shooting war, a cyber attack targeting industrial capabilities, such as interfering with the flow of capital or business-to-business transactions for orders and inventory fulfillment, seems likely.

Nick P February 17, 2010 1:33 AM

@ moo

Nice points on US meddling. I love my country, but I read enough FOIA releases to know why others hate it. It pains me to think of how much blood was shed for the convenience we have, and that most Americans don’t even have a clue.

@ Brandioch Conner

Yes, a sad fact. It seems obvious that, in a relationship based largely on one-way dependence, one must trust the entity one is so dependent on. However, China is in the top 5 least trustworthy major players on Earth. They are well known to steal all the IP and subvert foreign companies there. Yet, we still do business with them over India, Indonesia, Vietnam, etc. Plenty has shifted, but I’d rather not get a single critical asset from that place. The first time they subverted us for personal gain, we should have ended investments there. They don’t even make financial sense in the long run.

@ False Data

WW2 was largely won due to our excellent spies. The code cracking and the infiltration and destruction of their synthetic oil factories was quite useful. Essentially, we attacked their infrastructure. I’m in agreement with you that a war, covert or overt, with China will result in our infrastructure being hit. Government claims it’s already happened a lot with small attacks. They say we’re extremely vulnerable and I don’t doubt it. The question is: If we pressured or pissed off China too much, would they sabotage our infrastructure in return? Probably not total, but enough to humiliate us & make us back down? I think it’s possible. They are investing in secure & nonsubverted processors, firmware, drivers, OS’s and software stacks. We need to be doing the same. First step: no more hardware from China. 😉

Clive Robinson February 17, 2010 2:51 AM

@ Nick P,

“The question is: If we pressured or pissed off China too much, would they sabotage our infrastructure in return?”

As they say in TV Cop progs “they have previous”.

It has been claimed that the Chinese government deliberately cut underwater communications cables as a warning message.

If true (and there is some evidence to support the claim), then simply cutting a number of subsea comms cables in the Atlantic would have very, very serious knock-on effects, as there is insufficient spare capacity via “out of space” options etc.

Clive Robinson February 17, 2010 3:53 AM

@ IT Nija,

“the National Security Agency was recently hacked.”

If it was (and I’d need to see other supporting independent evidence).

It only confirms the “elephant in the room” of high-complexity software. That is, any code more than a few tens of lines long is vulnerable when connected to the Internet, and it is just a question of when the zero day happens to it (arguably a pessimistic viewpoint on my behalf, but then “I’ve got the scars” to show why 8(

As others know, I have a problem bordering on a “chuck the toys out of the pram” issue with,

1, Companies connecting all internal computers to the Internet (either directly or indirectly).

2, Assuming that “one size fits all” for computer security.

3, Executives assuming it is not going to happen to them in their short tenure, thus playing Russian Roulette with their shareholders’ reputation.

Yes, IT staff appear complacent with the situation, but it is more likely to be “battle fatigue”.

There are solutions to the issue but…

The original question should always be asked:

What potential information on this user’s computer might be of value to others outside the company?

Then the second question:

Does the company gain anything by connecting this information to either the rest of the company or the outside world through provably insecure software?

Then the next question should be:

What are the company’s liabilities to regulators, shareholders and customers should the information go outside the company?

At this point, if the assessment has been done correctly, which is highly unlikely* (even for real as opposed to faux security experts), you should have a bottom-line risk value.

This should then be used as a baseline “expected loss” (in any faux ROI calculation). This should then be used to see if the “potential business advantage” is larger than either the “expected loss” or the cost of mitigating the “expected loss”.
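As a minimal sketch of the arithmetic being described, here is the standard annual-loss-expectancy calculation; every figure below is an invented placeholder for illustration, not a recommendation:

```python
# "Expected loss" baseline for a faux ROI calculation.
# All values are illustrative assumptions for one machine's information.

asset_value = 2_000_000       # assessed value of the information at risk
exposure_factor = 0.4         # fraction of that value lost per incident
incidents_per_year = 0.25     # expected frequency of a damaging leak

single_loss = asset_value * exposure_factor       # loss per incident
expected_loss = single_loss * incidents_per_year  # yearly expected loss

business_advantage = 150_000  # yearly gain from connecting the information
mitigation_cost = 60_000      # yearly cost of an air gap and media controls

# Connecting makes sense only if the advantage exceeds the expected
# loss or the cost of mitigating it (here: mitigate, then connect).
print(expected_loss, business_advantage > mitigation_cost)
```

The hard part, as the footnote below notes, is that the asset value input is exactly the number that is extremely difficult to assess.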

Thus I’m generally in favour of “air gaps” and heavily controlled “media transfer” across the gap.

There are ways to do this and still get the business benefits; the problem is that it comes at a small but real cost. However, modern execs are into reducing / eliminating such costs to “maximise return” or perceived (faux) shareholder value.

* The reason for this is that it is extremely difficult to assess the value of information, especially if you don’t realise the information exists or what its value is.

Information value is something a “domain expert” in a “field of endeavour”, not a “security expert”, can calculate.

For example, a researcher makes searches of external online DBs; the usual and incorrect assumption is that this is ephemeral information.

The search information can give another person in the same field of endeavour a very good idea of what the researcher is doing. Thus the search information offers competitive advantage if known.

Secondly, however, if it can be shown in court at a later time that a person in the company searched a patent DB and pulled up a patent that might just conceivably (in a judge’s corkscrew mind) cover a product the company makes, then in the US this automatically triples the damages should it become known, as the defence of “innocent infringement” no longer holds. Thus even the “domain expert” in the “field of endeavour” might not assess the information correctly, so what chance has the “security expert”? Oh, and remember when asking a legal person for advice: they are “never wrong”, as they know what litigation can cost, so they always cover themselves, which generally makes their advice “it is inadvisable…”

Oh, and in the US have a look at the new little money-earner for tort (civil litigation) lawyers and judges, “electronic discovery”; it will make your mind bleed.

Clive Robinson February 17, 2010 5:20 AM

@ Brandioch Conner,

“Why? The USofA is exporting jobs to China already. We’re training them in everything they’ll need to know. Why waste time on “hackers”?”

Yes and no. The Chinese traditionally take the long view, so I suspect that they have asked “what do we do when the West wakes up?”

Some companies are already more cautious than others and take care in what they send out to China / India / etc.

This is due to a long history of “Republic of China knock-offs” in manufacturing, which were all too common in the 1980s. And, in more recent times, “Data Subject” data for sale for “ID theft” from dishonest employees at outsourced “call centers” in places like India.

These are lessons the current crop of young thrusting business executives really, really should study. Oh, and MS agreeing to hand over the NT code base to China for what is effectively a meagre money stream, and its implications for MS’s future…

We know that China is well versed in nuclear and industrial espionage within the USofA from the 1950s onwards, so I would fully expect them to use “hackers” not just for their “deniability” aspect, but also with a long-term view to securing the information flow.

And I have reason to believe it is already in progress.

There are effectively two ways you can get at confidential information:

1, Directed attacks.
2, Opportunist or fire-and-forget attacks.

Most people think of China doing the “Spy Thing” with “Directed Attacks”. But this is a hindsight perspective.

In fact they rarely do; most attacks they have carried out have been via “Fire and Forget”. That is, something similar to the idea of “sleepers”, specifically via students on exchange or via descendants of immigrants who still have family ties to China.

The same principle applies to “cyber-spying”.

That is you “Fire and Forget” via the usual malware vectors. However the payload is different instead of being used for Spam etc it is used for intel gathering.

Some less-than-covert examples basically hoover up PDFs, .docs, email address books, PK and other certs, and send them back.

In a recent case it was using a modified attack vector that has been known for at least 18 months, and was directed against .gov and .mil addresses; it achieved a good penetration rate, as something like a third of the AV software in use did not recognise the vector…

Now this particular attack was detected because it did not use a “zero day” attack vector and it was not very covert in what it did.

We know from various botnets that have been found that “zero day” vectors are overwhelmingly effective. And the botnets have been discovered only because they were not covert (that is, by either the control channel or their high level of output).

If you had a zero-day vector, a covert control channel and a covert upload channel, then the chance of being discovered is actually minimal within a normal commercial environment.

As I’ve said before, you can make a covert control channel via search engines such as Google and any of the millions of blogs they search you might care to post to.

And there is little or nothing that either the search engines or blog operators can really do about it.

Likewise you can have a myriad of upload channel options and payload upgrades / updates.

From an intel-gathering point of view, converting the USofA to a “covert sleeper botnet” is a significant and long-term solution to the various issues of ensuring information continuance.

And provided they use a “zero-day” vector and a low replication rate, they are unlikely to be detected…

Hence my preference for “air gaps” and heavily controlled “media migration” across them.

And currently I’m of the view that “air-gaps” etc are more cost effective than high assurance systems in all but the largest of organisations.

This is due in the main to our poor understanding of what is “useful information” and of “covert channels”.

It would help if developers followed a few relatively simple rules and did not “roll their own” protocols (soon to be recognised as a major source of the side channels that can be exploited as “covert channels”).

So for those thinking of designing their own protocols or “cherry picking” bits from other protocols, my advice is don’t do it (consider yourself warned 😉

If you still think you can do it securely on your own, then there are a series of informal rules you might like to consider (whilst you commit your error in judgment 8)

With regard to system-level security protocols, the “zeroth” rule is normally taken as a given. Which is a bit of a protocol failure in and of itself 8)

So,

0 : The given rule of security protocols: ‘they should be “state based” to allow clear and unambiguous analysis’.

Yes, not just the protocol but the software should be a state machine. It helps you think clearly, and if you do it as a matrix it helps you realise where you have undefined states etc.

Oh, and try your best not to use feeds forward or backward in moving from state to state; it makes analysis way, way more difficult than using “state chains”.

Oh and of course the well known 0.5 rules which every developer knows (but ignores) of,

0.5A : KISS (Keep It Simple Stupid)
0.5B : 7P’s (Piss Poor Planning Produces Piss Poor Protocols)
0.5C : 3C’s (Clear, Concise, Calculated)
0.5… etc etc.

Therefore, with that out of the way, as far as security related protocols and systems and their “states” go, the following rules should be considered,

1 : The first rule of security is ‘no undefined or ambiguous states’.

2 : The second rule is ‘clearly defined segregation and transition from one state to another’.

3 : The third rule is ‘states should not be aware of, let alone dependent on, the “carried/payload data”, only the protocol’.

4 : The fourth is ‘protocol signaling is always out of band to “carried/payload data” (that is the source and sink of the “carried/payload data” cannot see or change the security protocol signaling and data)’.

5 : The fifth is ‘protocol and “carried/payload data” should both be authenticated, separately of each other, and further each datagram/message should be authenticated across them both’.

There are other rules to do with dealing with redundancy within, and future extension of, protocols (put simply, DON’T ALLOW IT), but those five go a long way to solving a lot of the problems that open up side channels in protocols.
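As a rough illustration of rules 4 and 5, here is a Python sketch (the keys, field layout, and lengths are all invented for the example): control and payload each carry their own MAC under separate keys, plus an outer MAC binding the whole datagram together.

```python
# Illustrative sketch: control signalling kept out of band from payload,
# each authenticated separately, plus a MAC across the whole datagram.
import hmac, hashlib

K_CTRL, K_DATA, K_MSG = b"ctrl-key", b"data-key", b"msg-key"  # example keys

def mac(key, data):
    return hmac.new(key, data, hashlib.sha256).digest()  # 32-byte tag

def build_datagram(control: bytes, payload: bytes) -> bytes:
    body = control + mac(K_CTRL, control) + payload + mac(K_DATA, payload)
    return body + mac(K_MSG, body)  # outer MAC binds the two together

def verify(dgram, ctrl_len, payload_len):
    body, outer = dgram[:-32], dgram[-32:]
    if not hmac.compare_digest(outer, mac(K_MSG, body)):
        return False
    control = body[:ctrl_len]
    c_mac = body[ctrl_len:ctrl_len + 32]
    payload = body[ctrl_len + 32:ctrl_len + 32 + payload_len]
    p_mac = body[ctrl_len + 32 + payload_len:]
    return (hmac.compare_digest(c_mac, mac(K_CTRL, control))
            and hmac.compare_digest(p_mac, mac(K_DATA, payload)))

d = build_datagram(b"OPEN", b"hello")
print(verify(d, 4, 5))  # True
```

The point of the separate keys is that the payload’s source and sink never handle, and cannot forge, the control channel’s authentication.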

As a note on redundancy, it is always best to avoid it.

That is, the full range of values in a byte or other value container should be valid as data. This means you have to use “Pascal” not “C” strings etc., and no padding to fit, etc.
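A tiny Python sketch of the length-prefix point (the two-byte length field is an illustrative choice): with a length prefix there is no in-band terminator, so every byte value, NUL included, is ordinary data.

```python
import struct

# "Pascal" not "C" strings: a length prefix means no reserved terminator
# byte and no padding whose redundancy could be modulated covertly.

def encode(field: bytes) -> bytes:
    return struct.pack(">H", len(field)) + field  # 2-byte length, then data

def decode(buf: bytes):
    (n,) = struct.unpack_from(">H", buf)
    return buf[2:2 + n], buf[2 + n:]  # the field and the remaining bytes

wire = encode(b"a\x00b")   # NUL is ordinary data, not a terminator
field, rest = decode(wire)
print(field)  # b'a\x00b'
```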

Why the fuss about “side channels” from protocols? Well, they are all proto “covert channels” waiting to be exploited at some point in the future.

The main problem with side/covert channels is that their information bandwidth needs only to be an almost infinitesimally small fraction of the main channel to cause problems.

Think of it this way: each bit leaked means your security is at least halved (against brute force), or worse with more directed or “trade off” attacks (oh, and one bit is always leaked; you cannot avoid it or do much about it).

And always, always remember the EmSec/TEMPEST rules apply equally well to data in a protocol “information channel” as to a “physical communications channel”,

1, Clock the Inputs,
2, Clock the Outputs,
3, Limit and fill Bandwidth,
4, Hard fail on error.

The first three help prevent “time based covert channels”, not just within the system but through a system as well (have a look at Matt Blaze’s “key bugs” for more on how to implement such a time based “forward” covert channel).

The first prevents “forward channels”, the second “backward channels”. The consequence of this clocking is that you have to “pipeline” everything and use the same “master clock”.

If this cannot be done, you need to assess which risk is worse: data leaking out (forward channel) or illicit data being sent in (reverse channel).

That is, look at it as potentially your key value being “leaked” onto the network (forward) or your key value being “forced” to a known value from the network (reverse).

Hmm, difficult call; how about that master clock again 8)

The third is about redundancy within a channel. Any redundancy in a channel can be used to implement a “redundancy covert channel”.

Therefore you need to first hard limit the bandwidth of the channel to a known and constant value. Then fill the channel to capacity with sent data so there is no redundancy.

You usually see this on military comms networks, and the reason given if you ask is “it prevents traffic analysis”, which whilst true conveniently hides the truth about “redundancy covert channels”…
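A hedged Python sketch of “limit and fill bandwidth” (the frame size and one-byte header are invented for illustration): every transmission slot carries a fixed-size frame, with idle frames filling the gaps, so neither frame length nor frame timing is left free to modulate.

```python
# Sketch of EmSec rule 3: constant-size frames, idle-filled when there is
# no real data. Frame size and header layout are illustrative assumptions.
FRAME = 64  # fixed frame size in bytes

def frames(data: bytes):
    """Yield an endless stream of fixed-size frames, idle-filling forever."""
    while True:
        chunk, data = data[:FRAME - 1], data[FRAME - 1:]
        # 1 header byte: count of real payload bytes (0 marks an idle frame)
        yield bytes([len(chunk)]) + chunk + b"\x00" * (FRAME - 1 - len(chunk))

gen = frames(b"secret report")
first = next(gen)
print(len(first))    # 64 -- every frame is the same size
print(first[0])      # 13 -- real bytes carried in this frame
print(next(gen)[0])  # 0  -- idle fill once the data runs out
```

An observer of the wire sees an unbroken stream of identical 64-byte frames whether or not anything is being said, which is exactly the “prevents traffic analysis” property and, as noted, also removes the redundancy a covert channel would ride on.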

However, there are other forms of redundancy which can do the same thing, such as alternate states or options, using unbuffered data so that data packet length can be varied, etc. etc.

The fourth rule is important: when anything goes wrong, “hard fail” and start again at some later point in time that you select.

This limits “induced error covert channels”. When hard failing, it is best to “stall the channel” for a random period of time at least double the time from the channel being opened to the error occurring. Or, better still from the security aspect, close the channel, log it, and allow a human to sort out the issue.
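A small Python sketch of that hard-fail behaviour (the timing numbers and log format are illustrative assumptions, not a prescription):

```python
import secrets

# Sketch of the hard-fail rule: on any protocol error, close the channel,
# flag it, and stall for a random interval of at least twice the time the
# channel had been open, throttling any induced-error covert channel.

def stall_time(elapsed_open: float) -> float:
    """Random stall of no less than 2x the channel's elapsed open time."""
    return 2 * elapsed_open + secrets.randbelow(1000) / 100.0

def on_error(elapsed_open: float, log: list) -> float:
    log.append("protocol error: channel closed, operator notified")  # flag it up
    return stall_time(elapsed_open)

log = []
delay = on_error(5.0, log)       # channel had been open 5 seconds
print(delay >= 10.0)  # True: never less than double the open time
```

The random component keeps the attacker from calibrating the error-to-restart timing, and the mandatory log entry makes every induced error visible.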

Whilst this cannot stop an “error covert channel” being formed, it makes it of incredibly low bandwidth and easily seen (flag up all errors).

There are other rules, but these should limit most of the publicly known covert channel types.

Brandioch Conner February 17, 2010 11:32 AM

@Nick P
“It seems obvious that, in a relation based on largely one way dependence, one must trust the entity they are so dependent on.”

While correct, that doesn’t matter. The past is the past.

The current situation is that the Chinese do, legitimately, have inside access to corporate networks in the USofA.

Worrying about defenses is useless at this point (with regard to China).

The focus should now be on remediation and mitigation. Which is what articles such as the above completely skip.

Nick P February 17, 2010 11:33 AM

@ Clive

I like a lot of the principles in your reply to Conner, but I have to differ on a few. The first is about state-based design. That’s merely one way to do it and there are numerous high assurance methods that have been demonstrated. Mere black box hierarchical layering can suffice. Although state-based design is usually best, some protocols lend themselves to other paradigms.

The other issue was state transitions with payloads. There’s no reason to get rid of payloads. They are the very basis of asynchronous systems, for which there are many verification tools available. Additionally, any “stateless” or “payloadless” design in an imperative program has implicit state and payloads with every function call. Sometimes I prefer manually passing payloads so my modeling tools can track what I’m actually working with, rather than guess what the compiler is doing. Payloadless design isn’t necessary for verifiable software, but I agree that it’s quite beneficial to prevent covert channels.

“The consiquence of this clocking is that you have to “pipeline” everything and use the same “master clock”.”

Maybe not. This would seem the intuitive response, but other methods and definitions of clock have been devised. See Wray’s paper on covert channels. The idea is that a real master clock is both a prevention and source of timing channels. So, researchers have been constantly finding new models of time to prevent timing channels. One can do it as a sequence of events in state-machine, an event counter, or a real clock. I’ve recently acquired an MIT paper called “Synchronization with eventcounts and sequencers” that gives a supposedly secure alternative to a traditional master clock. Is it practical for real-world programmers? Idk.
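For readers who have not met the eventcount idea, here is a rough Python sketch (the flavour of the Reed/Kanodia primitive, not the paper’s actual construction): synchronisation is expressed as “wait until at least n events have happened”, ordering rather than wall-clock time driving the protocol.

```python
import threading

# Rough sketch of an eventcount: advance() records an event, await_value(n)
# blocks until at least n events have occurred. No shared clock is read;
# only the ordering of events is observable.

class EventCount:
    def __init__(self):
        self._count = 0
        self._cond = threading.Condition()

    def advance(self):
        with self._cond:
            self._count += 1
            self._cond.notify_all()

    def await_value(self, n):
        with self._cond:
            self._cond.wait_for(lambda: self._count >= n)
            return self._count

ec = EventCount()
producer = threading.Thread(target=lambda: [ec.advance() for _ in range(3)])
producer.start()
print(ec.await_value(3) >= 3)  # True once three events have occurred
producer.join()
```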

“hard fail”

Absolutely. Can’t have fail-safe or fall-back methods that are less secure than the original. I would also prefer a protocol that worked like this: you know when a new session is opened and you know when it’s closed, but you don’t know anything else. With crypto so fast, we could do this even on non-security protocols. It would help because authentication would make MITM harder and encryption would make leaks and sabotage more difficult. On FTP, it’s not that hard to sabotage a transfer. If it had even minimal authentication and encryption, sabotage could be non-trivial.

Clive Robinson February 17, 2010 7:19 PM

@ Nick P,

“I like a lot of the principles in your reply to Conner, but I have to differ on a few.”

That’s OK by me; they are informal rules, for those who feel they must do what they should not 8)

Not rules for those who really do know what they are doing (I just wish I knew somebody who did though 😉

“The first is about state-based design. That’s merely one way to do it…”

Yes, there are other ways, but until recently they were not as amenable to analysis by your everyday code cutter.

Also most team leaders and their next two managers up can usually understand them as well 😉

“The other issue was state transitions with payloads. There’s no reason to get rid of payloads. They are the very basis of asynchronous systems, for which there are many verification tools available.”

Ahh I think we are talking at cross purposes.

When it comes to “protocols” it is best to not have the protocol acting on the “data” content or payload.

Likewise, it is also good to have protocols “lock-stepped” or synchronous for other reasons (redundancy side channels).

The two are separate issues, but the solutions have a large overlap.

Irrespective of whether the protocol is synchronous or asynchronous, data and control are best strongly segregated to minimise potential interaction.

That is, the data or payload is encapsulated from the protocol and its control channel, and thus the protocol state cannot be changed by the payload, only by the control channel (with two exceptions, error & data end).

It also has to do with “redundancy” and “optional path” issues that open up their own nasty little side channel potential (side channels are like roaches: every time you look you find another one 😉

Unless you really do know what you are doing (and those people in general assume that they are fallible, which is why they “peer” for sanity), it is best to maintain clear and rigid segregation between data and control.

With regard to the EmSec “clock the inputs and clock the outputs” giving rise to pipelining and master clocking: it is a “tangible world” concept being mapped onto an “intangible world”, where the tangible is a subset of the intangible…

Which is why you have seen work that makes you say,

“This would seem the intuitive response, but other methods and definitions of clock have been devised.”

And yes, an incorrectly implemented “master clock” can add a whole host of side channel issues. But in general it is something that is much more easily checked and verified than, say, checking for a “Direct Sequence Spread Spectrum” (DS-SS) modulation that goes right through the system effectively invisibly in either direction.

It is again a question of what the protocol designer can truly be conversant with. At the “code cutter” end, a master clock (where possible) is less likely to cause easily exploitable side channels.

The MIT paper you refer to, “Synchronization with eventcounts and sequencers”: is it the ACM Feb ’79 paper?

If so, I very vaguely remember it from when I was designing multidrop host communications in safety critical systems, but my brain is getting old and hazy. So I don’t see the connection.

Do you have a non-“paywalled” URL to the document you are referring to?

Anyway, it’s 01:15 in the UK and my brain is most definitely not at its best, and being rushed to hospital A&E (ER) with anemia yet again has not helped 8(

Nick P February 18, 2010 9:43 PM

“until recently they were not amenable to analysis by your every day code cutter”

Well, that’s mostly true, but modelling programs functionally and developing them in a functional language like Haskell or Ocaml has been useful since before the Internet became big. Many people, including financial analysts, use Ocaml, and teaching them to use basic pre/post-conditions, type systems, and formal specifications isn’t that hard. Imperative programmers had to rely on state-machines, as you suggested, to encapsulate the state that caused them so many difficulties in larger programs. Today, both methods (and several others) have been proven in quite a few demos, and the average code cutter has plenty of easy-to-use tools to help him: CASE tools; code generators; high integrity software environments like Esterel, Perfect Developer and SPARK Ada. Maybe ten years ago unfamiliarity with functional programming or pure state-based design was a good enough excuse not to have secure software. However, we have enough tool and process-based support today that there is no excuse.

“the data or payload is encapsulated from the protocol and its control channel”

Well, sort of. The protocol design and engine might try to do this, but it’s somewhat illusory. The protocol engine or state-machine is handed a big chunk of data. It then parses it and makes certain guesses and assumptions to decide which is control and which is non-control data. There’s plenty of room for error here with both protocol design and coding flaws. HTTP and SSL flaws have given some examples. I do agree that segregating them is a good idea though. The tricky part is ensuring it works out once it’s getting hit with raw, possibly malicious data.

“at the code cutter end, a master clock… is less likely to cause easily exploitable side channels”

Well, that statement seems accurate. I’m sure the average code monkey would find that a lot easier to understand than event-ordering and other esoteric methods. On the MIT paper, it is the 1970’s paper. It was significant because it deals with synchronization as event counters and ordering rather than the passage of time and real clocks. They also presented formal arguments that a system built this way could be formally proven correct much more easily than other concurrent designs. Instead of looking at the passage of time and keeping clocks synched/fixed, timing channel elimination would focus on the order of events and their input and output signals. In many systems, this might be easier than a regular clock. It was meant to serve as an alternative to the master clock idea, although one could say it emulates its functions.

Clive Robinson February 20, 2010 6:31 PM

@ Nick P,

The real question is not the rules,

But how do we make “little Johnny code cutter” follow them.

Or more correctly,

Get his boss to give him the time to use the tools that are just starting to become available as “FOSS” etc…

Whilst high assurance systems are not cheap to build etc., in the majority of cases it’s not high assurance we need, just a slightly better quality of code cutting.

Almost perversely, at the moment a 5-10% improvement in security programming will make around a five to ten fold improvement in real security…

Such is the lack of ability of Jo Average code cutters to do secure coding…

Well hopefully Bruce’s new book will make interesting reading for them.

Mind you, I’m still waiting for a book from Bruce and Ross J. Anderson, but I guess if it happens now it will be on the economic perspectives of security 8(

Nick P February 21, 2010 9:36 AM

@ Clive

I agree on the small increase in effort providing a huge initial payoff. I don’t recommend true high assurance for most corporations or individuals. What I usually recommend for them is medium assurance, the high end of which is seen in the NSA Tokeneer demo. Methods like Cleanroom, Praxis’ Correct by Construction, and Galois’ use of Haskell have been delivering low defect large software projects in budget for years. They are all extremely easy to learn. Additionally, the use of semi-formal specification and design-by-contract both have high payoff for imperative & OOP programmers. These are also easy to learn without a fundamental shift in thinking. I mean, design-by-contract Eiffel or SPARK Ada style basically extends functions with pre and post conditions. Easy as pie, oh my. 😉
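As a toy illustration of that design-by-contract style (a plain-assert Python decorator standing in for real Eiffel/SPARK tool support; all function and parameter names here are invented):

```python
# Toy design-by-contract: wrap a function with explicit pre- and
# post-conditions, in the spirit of Eiffel require/ensure clauses.
def contract(pre, post):
    def wrap(f):
        def inner(*args):
            assert pre(*args), "precondition violated"
            result = f(*args)
            assert post(result, *args), "postcondition violated"
            return result
        return inner
    return wrap

@contract(pre=lambda xs: len(xs) > 0,
          post=lambda r, xs: r in xs and all(r <= x for x in xs))
def smallest(xs):
    return min(xs)

print(smallest([3, 1, 2]))  # 1; smallest([]) would fail the precondition
```

The point is how little it asks of the programmer: the contract sits beside the signature, documents the interface, and is checked on every call.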

So, how do we get code cutters to do this? I think they do whatever management pushes them to do, so we need to convince management. Now, most of a software’s lifecycle is in the maintenance phase, where downtime and bugs are expensive. We could illustrate the cost of buggy software and the insignificant extra cost of low defect methodologies. In other words, make a business case that the methods will deliver on time, be easy to learn and not add too much cost while reducing plenty of risk. If it was mandated in a number of organizations, we would also see CodeProject-style communities popping up providing supporting articles and examples on using low defect methods. So, my strategy: present convincing case to management, who mandates it on code cutters, who help each other improve over time. That’s the only way I see it working right now in commercial software development.

Igor September 30, 2010 12:51 PM

Would you share that pdf with Reed’s paper “Synchronization with eventcounts and sequencers”?

I can’t seem to find it online.
