More Details on the Chinese Attack Against Google

Three weeks ago, Google announced a sophisticated attack against them from China. There have been some interesting technical details since then. And the NSA is helping Google analyze the attack.

The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for this essay, has not been confirmed. At this point, I doubt that it’s true.

EDITED TO ADD (2/12): Good article.

Posted on February 8, 2010 at 6:03 AM41 Comments


christopher February 8, 2010 7:36 AM

I’ll just come out and say it: I like an alignment of Google and the NSA, at least for now. It lets China and anyone else watching understand that we’re going to start playing the actual game, not the one we think should be played, i.e. it’s clear Chinese companies enjoy much more governmental support and operational leeway when competing against foreign companies, so it’s long overdue for the US to start backing their own companies against what is essential a Chinese threat, not a company or industry market scenario.

And contrary to reports or beliefs otherwise, China is our enemy.


uk visa February 8, 2010 7:59 AM

Before you fight with anybody you better check how much of your infrastructure is owned by them…
I think you’ll find a substantial amount of US communications is supplied by Chinese companies.

Here in the UK…

‘The threat is not just to military command and control systems. Cyber attacks are capable of knocking out supplies of power, food and water as well as domestic communications networks. Britain’s joint intelligence committee warned ministers a year ago that Chinese parts bought by BT for the UK’s new telecommunications network could be used to shut down Britain by crippling its telecoms and utilities.’

db Cooper February 8, 2010 8:07 AM

Ted Turner had an interesting opinion when asked if the US should boycott the Olympics, to protest human rights violations. In Tibet specifically at that time. It went something like this:

“If you have a really good friend, and you owe them a large sum of money, you probably should not do anything to piss them off.”

SG February 8, 2010 8:51 AM

Bruce, in your essay, you wrote, “In order to comply with government search warrants on user data, Google created a backdoor access system into Gmail accounts. This feature is what the Chinese hackers exploited to gain access.”

In this post, you write, “The rumor that China used a system Google put in place to enable lawful intercepts, which I used as a news hook for this essay, has not been confirmed.”

You didn’t just use this “rumor” as a “news hook” on January 23–you presented a serious allegation as fact. Without any real backing, you asserted that Google was giving access to gmail content to the U.S. Government. Now you have restated it as an unconfirmed rumor.

If we trace back your links, we see this post from MacWorld by Robert McMillan: “That’s because they apparently were able to access a system used to help Google comply with search warrants by providing data on Google users, said a source familiar with the situation, who spoke on condition of anonymity because he was not authorized to speak with the press.”

Wow! A “source familiar with the situation.” That’s one heck of a reliable source–and well worth repeating in other journals. McMillan cites no supporting references for his anonymous slander.

This quote was further stretched by Julian Sanchez: “I was hard-pressed to think of other reasons they’d have segregated access to user account and header information.”

Sanchez appears to have absolutely no technical background, and there are many good reasons to segregate message content, account information, and headers. Off the top of my head is that you read headers all the time and you access message contents much less frequently. And in fact, almost every major mail system except for plain old unix mail stores headers in a separate file or database independently from the message content.

Bruce, I think you owe Google an apology on this one along with a statement that you have no real evidence that Google has made backdoors into their system. Accusing Google of adding backdoors into Gmail and calling it an unconfirmed rumor is like asking someone if they are still beating their wife. It’s a cute rhetorical trick that allows you to press an accusation while still claiming that you are not.

Clive Robinson February 8, 2010 8:57 AM

It should be noted that these APT vectors are not that new in concept they are the standard enumerate or fire and forget attacks.

What is sort of new is that people have noticed… That is they have been concentrating on high bandwidth botnets for SPAM etc.

These APT’s could be likenend to “covert botnets” which I’ve been warning against for some time now.

The simple fact is there are very few ways they can be recognised with zero day entry.

As Mandia from Mandiant indicates,

“If you’re a law firm and you’re doing business in places like China, it’s so probable you’re compromised and it’s very probable there’s not much you can do about it”

The best that can be hoped for is to “down” machines and “deep verify” their content in a semi random way in order to verify them.

However MS OS machines running legacy code are not at all amenable to such technieques.

So if you have critical info “air gap” it and no “removable media” allowed.

This is not new but hey what’s the inconveniance cost compared to the “owend” cost?

Clive Robinson February 8, 2010 9:17 AM

@ db Cooper,

Ted Turner was making a fairly good call with,

“If you have a really good friend, and you owe them a large sum of money, you probably should not do anything to piss them off.”

The simple fact is the Chinese need the US to buy their goods at the moment, and a lot of that has been done on credit.

However the Chinese have been strongly developing other markets of recent times.

Some pundits (taosecurity) are holding up China as wanting a “cold war” with the US, whilst others (financial cryptography) are saying with the size debt the US has can you blaim the Chinese for wanting to keep an eye on the debtor…

China usually takes the longterm view with a bit of sword rateling to keep others of what it regards as it’s territory.

The US on the otherhand generaly takes a very shortterm view point.

The simple fact is that China has been very noticably cold sholdering the US since before the current POTUS was elected, and the US Press and others have appeared to either miss it or ignore it…

Either way it’s not good and China is going to be calling in markers any day soon.

col February 8, 2010 9:48 AM

Bruce: your position as crypto-guru means that your repeating the story /at all/, even as a hook to another issue, gives the press the excuse to attach your name to the story, thus giving it a patina of legitimacy it may not deserve. You, of all people, should be more considered in your references to stories you yourself deem less than accurate.

Caleb Jones February 8, 2010 9:50 AM

Has anyone else out there read “The CooKoo’s Egg” by Clifford Stoll?

These kinds of attacks remind me of the kinds of attacks detailed in his book.

The landscape nowadays is different, but the attack vectors and strategies seem to be very similar.

David February 8, 2010 9:57 AM

A whole lot of computer equipment is made in China, so there’s always the possibility of hardware back doors.

col February 8, 2010 10:03 AM

@Caleb Jones: Attack vectors don’t change, only: (i)the complexity of the attack; and (ii)the complexity of the method. (That’s col’s First Law of Cryptiness, btw.)

peri February 8, 2010 10:14 AM

@Clive Robinson: “It should be noted that these APT vectors are not that new in concept they are the standard enumerate or fire and forget attacks.”

Just wanted to point out a small misunderstanding. Advanced Persistent Threats (APT) are not fire and forget. The article claims that even if eradicated attackers will be back after 3 months.

Clive Robinson February 8, 2010 11:04 AM

@ peri,

Long time no chat 😉

The artical is only half right.

It is dealing with tsrgeted attacks against organisations where there is a direct route in. Thus they are enumerated attacks.

The second type is the fire and forget type that was indirectly mentioned.

These work on the idea of a cirus or equivalent that replicates in the usual fashion and spreads that way.

However instead of a payload that does damage it has targeting and ET components.

When the virus detects it is on a machine in a targets environment it then attempts to ET (phone home) to the controler.

This is becoming a slightly more generalised attack in that the virus targeting software is changable via a control channel.

That is I write a zeroday virus that spreads it’s self out and about and listens covertly on a control channel.

The controler sends out targeting information on the control channel and those systems that have been infected that have the target info ET on the reverse channel.

That is the controler sets up a very wide and very covert botnet.

Which unlike conventional botnets does nothing accept gather various bits of information such as real name CC info user name software licence information PKI etc certificates and email directories etc. It will covertly send this information when requested to do so.

Thus the controler can quietly gather information at a slow rate or send out a request for specific information such as “”.

Fire and forget can be made less noisy thus more covert than enumeration. Which is why it is the more dangerous and less talked about way.

The difficulty is how to arange the control forward and reverse channels.

The easiest and perhaps most difficult to deal with is to emulate ordinary user activity and co-opt the likes of Google to do the heavy lift.

What you need to do is have an agreed plaintext signiture string (that may change with time) that you include as a username or whatever to post to an open blog.

You post a relevent message to a blog that contains both the signiture and control strings. At some point google indexes the blog and puts it in it’s cache (it would appear to be as low as two hourse for this blog).

Once google has indexed the page it is two late for the blog operator to stop the bot net by removing the message…

This is because the botnet agent occasionaly uses google to search for the signature string and pulls the control string not of the blog but out of googles cache (or the wayback machines cache etc). Thus it uses google to search the entire blodshpere for the signiture and never goes near the original message.

As the network admin of a network with an infected machine all you would have seen (if you could) was the covert zero day attack and the occasional google or whatever search….

It’s simple it’s de-coupled and if done correctly currently unstopable…

The reverse channel can be done in a similar way.

As the ancient Chinese curse has it “we are living in interesting times”…

Clive Robinson February 8, 2010 11:26 AM

@ sil


Yup I like the article the only thing realy missing that should be asked,

“Why are we connecting internal machines to the internet?”

Most users do not require Internet access, of those that do few require unlimited or unmonitored access and also do not need to be privy to stratigic or other high value information to the organisation. The few users that do require access to the internet and know or have high value information are actually quite few in number, therfore give them two machines and a very controled method of making data cross over the “air gap”.

If somebody is a little smarter in the organisation then they can put the equivalent of an air-gaped system as a virtual machine.

Maybe just maybe APT hype and the current recession will wake organisations up and get them to ask that question,

“Why do employees need access to the Internet to do their job?”

William O. Blivion February 8, 2010 11:40 AM

Ghod, I can’t believe I’m about to defend Google. The things you people make me do!

“Without any real backing, you asserted that Google was giving access to gmail content to the U.S. Government. Now you have restated it as an unconfirmed rumor.”

No, he didn’t.

He asserted that Google built a “back door” into their Gmail system to allow them to comply with US law. No where did he assert that they’d actually used it in response to a search warrant, which they would be LEGALLY OBLIGATED to do, or in response to a non-warrant request, which they aren’t required to do (depending on the nature of the request, and the exact laws, IANAL).

Blaming a large corporation for making it easy to obey the law AND make it easier for law enforcement officials to execute their duties is rather f’ing stupid. Google may not have done the technically smart thing by building in a “back door” (assuming they did so).

Frankly I doubt they did so. They probably just store lots of crap in databases. It’s what Google DOES for fucks sake. They scan the internets and put shit they finding intersting in databases and index it.

And once you get it in a database, writing crap to dump out specific bits is not a “back door”, it’s a “user interface”.

BF Skinner February 8, 2010 12:44 PM

In addition to Brian’s brothers observation Bruce has made this argument since the clipper chip.

If you build a back door into system it can be identified (disgruntled former employee anyone or am I the only one that had to scramble to protect my Nortel switch when their maintenance password was disclosed?) and exploited.

Back doors are old old risk. I use wargames in breifings to make the point that a hollywood film was talking about them in the early 80s.

Given what the Bush administration has done, and Obama has left in place, with the telecom companies and NSA we can only assume that there are numerous back doors through out the infrastructure.

@christopher. Is this the game as you understand it?

Alex February 8, 2010 1:57 PM

I don’t think Bruce said he doesn’t think there are back doors in the google system (I guess it’s pretty much a given) but that those backdoors where not used in this attack…

Danny O'Brien February 8, 2010 2:06 PM

I think it’s import to differentiate between the Gmail lawful intercept system as a potential vulnerability to external attack, and a useful honeypot of useful information once you’re inside the system. The “unnamed sources” describing the breach of the lawful intercept systems did not indicate that this was either the entry point nor the main target — it just conveyed the shock on Google’s part that anyone had reached that particular subsystem.

The full quote from the original Computer World piece is:

“Right before Christmas, it was, ‘Holy s***, this malware is accessing the internal intercept [systems]”

One of the key points that gets missed in deconstructing the Google story is the significance of Google’s description of trail leading to a set of human rights activists whose accounts were compromised.

Some have misinterpreted that this as merely a cover for Google to express public outrage, when their corporate concern was more about the wider data-gathering operation.

That may be true, I suppose, but I think its real significance for both Google and the wider world is that only the Chinese government (or attackers who felt that they could sell the information to those connected to the Chinese government) would be interested in this particular data. The attack on the Gmail system wasn’t the full extent of the attempted breach, I’m sure, but it was the targetted attack on these particular Gmail accounts that was the smoking gun that pointed at state actors, which is why I think it has been emphasised in both official and unofficial Google narratives.

tonesurfer February 8, 2010 2:47 PM


Ted Turner had an interesting opinion when asked if the US should boycott the Olympics, to protest human rights violations. In Tibet specifically at that time. It went something like this:

“If you have a really good friend, and you owe them a large sum of money, you probably should not do anything to piss them off.”

If I owe you one million dollars, I have a problem. If I owe you 1 trillion dollars, YOU have a problem. Substitute USA for I and China for YOU. Get my drift?

db Cooper February 8, 2010 3:07 PM


If YOU hold nearly $800B in US Treasuries (as of Nov ’09) and decide to dump them, I have a problem. A real big one.

As before, substitute USA for I and China for YOU.

askme February 8, 2010 3:20 PM


If you link into the Wired article, you will get a lot of hyperbole about APT.

Read down to the last comment by Bugtracker to see it pretty well debunked. Basically saying they are using old and simple hacker tricks, none of which are new, to do corprate espionage, which is not new.

And yes, tonesurfer, the Chineses are fully aware they are now in bed with the US economy and the dollar for at least a decade util they can diversify.

GreenSquirrel - OT February 9, 2010 4:05 AM

@Clive Robinson

“Maybe just maybe APT hype and the current recession will wake organisations up and get them to ask that question,

“Why do employees need access to the Internet to do their job?””

Sadly, pretty much any employee who uses a PC will need some form of internet access now.

Its hard to think of many jobs which need a PC but can be totally disconnected from the internet. Staff need access to reference material – which can often be either bought at £100 a book or free on the internet. Staff need access to internet based services (depending on what your clients are/do). With the supposed drive towards cloud computing more and more machines will be exposed to the internet.

I fully agree the question should be asked, but I would be surprised if there was an employer (outside the Intelligence Services) where there wasnt a business reason for staff to access the internet from their workstations.

Clive Robinson February 9, 2010 5:43 AM

@ GreenSquirrel – OT,

“…but I would be surprised if there was an employer… …where there wasnt a business reason for staff to access the internet”

It may well be the case for a number of businesses but if the answer to the question is “yes” it brings up my first question of,

“Why are we connecting internal machines to the internet?”

Or to put it another way,

“Do they need to do it from their “work” workstations.”

Althought I appriciate an “air gap and no removable media” solution is expensive, it has certain advantages, where the cost/risk is high.

There has been an idea in the past that “all workstations are equal” and thus money can be saved from “hot desking” or “coffee shop working”.

APT shows that this opens up risk significantly as does “cloud computing” and in the long term makes the “hot desking” on a single workstation a false saving.

What is needed is a way by which the equivalent of an “air gap” system can be set up and a mechanisum by which data can be transfered but only via a “safe channel” that is very heavily monitored.

For instance let us assume you have an immutable X Terminal type system whereby both an internal “safe” network system can be displayed and also a “Unsafe” system can be displayed data can be moved simply by “cut-n-paste” (the X-Terminal has to be “immutable” to stop it becoming subject to ATP it’s self).

Thus the channel between the two systems is the “cut-n-paste” system in the immutable X-Term. This could for instance be limited to just printable ascii text.

The problem we currently have is that such systems would be implemented as web servers and the user workstation would be Win XP with IE 6 which as it bungs everything in the same memory space is totaly insecure, and has the effect of compleatly bypassing the OS security.

It is thinking along these lines that I suspect started Google’s Chrome.

But it is not new if you look back through Unix history you come back to “trusted path” and “trusted platform” ideas.

They need to be brought back into the light of day and not hidden under the dark stormy cloud that MS and the likes of the performing arts industry DRM have created.

Some systems will of necessity need to be “air gapped” and have strictly controled media access, but with a little care most will not.

Further with a little fore thought and adjustment of work flows a lot of risk can be mitigated with minimal change.

Nick P February 9, 2010 2:09 PM

@ Clive and Green Squirrel

Ah, someone just had to mention air gaps, trusted paths and the like. I’m sure you knew I was coming on this one, Clive. 😉

Well, Squirrel, modern systems will be connected due to sad industry directions. Clive’s suggestion is unworkable in most areas due to business forces, although many could use an air gap. However, there are software solutions that intend to deal with the multiple levels of security problem (MLS, MSLS, MILS). The idea is to have several types of data on one box, wire, etc. without unauthorized leaks. Many facets, but here’s a few classes of systems.

Trusted UNIX’s (Solaris/IRIX/SELinux): Patch mandatory security and “trusted” paths onto an OS not designed for them. Huge trusted computing base (TCB). One bug in TCB can down the whole security scheme. Trustifier is in this category too, but better integration and modular components easier to evaluate.

Security Kernels. The best is probably BAE Systems XTS-400. It’s OS is layered in rings that minimize privilege, create a trusted path, and separate critical apps from legacy apps. Although 500 is out, 400 survived years of pen testing and abuse in the field. Aesec’s GEMSOS security kernel is good for embedded stuff. It reached orange book A1, the highest level. It already has MLS workstation and VPN solutions and weathered years of attack from sophisticated opponents.

Hardened Hypervisors. Recently, the government tried to push COTS solutions to do this stuff. NetTop and General Dynamic’s TVE take hardened linux, VMWare, TPM, and some NSA in-house tech to create MLS servers or workstations. I’m unsure of their assurance, esp. considering the OS & VMM.

Microkernels. Many recent designs focus on tiny TCB’s: the smaller it is, less bugs and easier to analyze. Nizza and Perseus Security architectures are built on the L4 microkernels. They provide critical services, including trusted path, while running isolated apps (even real-time) or legacy apps in Linux VM’s. OKL4 does this on mobile phones with amazing performance. These are all open-source. Turaya Security Kernel uses TPM’s and Perseus architecture in a commercial product that tries to prevent data leaks & create seemless VPN’s.

Separation Kernels. Ahh, my favorite. They are ultra-light microkernels compliant to SKPP that just create partitions, control info flow between them, regularly scrub shared processor storage, and handle CPU time to minimize covert channels. Integrity-178B, VxWorks MILS, LynxSecure, and PikeOS fit into this category. Integrity was EAL6+ certified and is now used in many solutions we are discussing offered by INTEGRITY Global Security. LynxSecure does MLS workstations for Navy. INTEGRITY PC & LynxSecure both offer trusted paths, MILS networking/filesystem, Windows support, TPM, and cohosting of legacy and security-critical apps. VxWorks is in evaluation for EAL6+ & PikeOS is being prepared for an evaluation.

The separation kernels are the most likely to succeed in this stuff if combined with trusted admins, tamper-resistant hardware and good access control policies. This is a tall order that will only be used in high risk situations. INTEGRITY has already demonstrated secure network consolidation with OB1 demo and nWire solution. We may now have cheap, ultra-secure MLS workstations. Alternatively, being able to hit an unclassified account and siphon Top Secret data would be a hacker’s dream come true. Time will tell, but $20 says it will be an attack on Intel’s hardware or firmware, which is much less secure.

Clive Robinson February 9, 2010 6:29 PM

@ Nick P,

“Time will tell, but $20 says it will be an attack on Intel’s hardware or firmware, which is much less secure.”

I have always hated the Ix86 family for a multitude of reasons not least of which was the segmented memory model Ughh. It made working out what the heck was going on very very difficult if a programer desired to be awkward I much prefered 68K and other linear memory space with proper VM control.

Anyway back to the issue of air-gaps…

As you say,

“Clive’s suggestion is unworkable in most areas due to business forces, although many could use an air gap.”

It always amazes me when you have a look at some organisations ICT how on earth they got that way…

For instance would you leave a copy of the next years marketing plans or product development information on the coffee table in reception next to the current “advertising glossies”?

Probably not, so why do the equivalent on the companies combined Intranet and Internet server?

Sun for instance who should have known better did something similar to this…

The “due to business forces” is one of the main problems with security and as we know there are one heck of a lot of Win XP / IE6 workstations in offices around the globe.

Whilst Win XP can be made reasonably secure (with some considerable effort) IE6 cannot be made secure simply because it has a one “hole in the ground” memory model. That is two pages one from say a desktop app and one from an unknown and untrusted website share the same code and memory space without any real isolation. Even later versions of IE have single memory space issues.

Further MS have tried to enhance the old OLE experiance through embeding IE in the desktop to such an extent just opening a document can cause all sorts of stuff to go on in the background without the user being aware of it, and things break easily with the underlying OS ACL’s. So they normaly get set as permissive as possible to prevent issues…

Sadly though this “extra functionality” is apparently what users want (not sure which ones though 😉

Somebody I know shows a nifty little “gotcha” when building an HTML page with embeded Office objects. For instance just pulling in the “totals” column from a spreadsheet can if done incorrectly effectivly pull in the whole table, even though it is not immediatly visable. They then show how to get at the “hidden data”.

Nearly all of these issues are not happening at the OS level, but at the app level due to shared resource issues.

To some extent “Thin Clients” connected to terminal Servers” actually limit the ability for users to cut-n-past across and carry malware etc with it.

However the price to be paid is users moaning…

And as long as Wintel tends to be the operating environment of choice we are going to have problems 8(

Anyway it’s halfpast midnight and I need my sleep to stop looking like BF Skinner’s expectations (ie a Klingon with bad hair issues 😉

moo February 10, 2010 12:24 PM

@Nick P:

Sounds very likely. Proof-of-concept work has shown that very small modifications to a commodity CPU (i.e. adding a few thousand transistors, to a chip which contains hundreds of millions of transistors) can make a “hardware backdoor” that remote software can leverage to hijack the machine. If China is manufacturing a large portion of the world’s CPUs, I wonder if they are slipping such modifications into them? It was discussed here a while ago.

Nick P February 11, 2010 11:46 AM

@ moo

Indeed. I’ve seen those stories. While surbversion is possible, my real concern right now is exploitation of processor errata. OpenBSD’s Theo de Raadt complained about this some time back:

If these numbers are accurate, then there are an amazing amount of bugs. One errata sheet I read had bugs in MMU, which is central to security enforcement. That’s unacceptable. Here the discussion evolves:

Kris Kaspersky presentation on remote exploitation of processor bugs

If you throw in the holes in vPro, weaknesses of TPM, and Joanna Rutowski’s work, then hardware security is the biggest issue for high assurance. We’ve had software strategies that work for a long time, but hardware isn’t under our control. At very least, we need an ultra reliable open-source core to use for critical apps. Formal verification, like VAMP or AAMP7G, is preferrable. MMU & periods processing support is even better.

hylas February 22, 2010 2:16 PM

Speaking of hardware attacks and possible backdoors follow these links. It could be of interest to you.

I, and some others have experienced these same attacks. (hallmarks)
Me, twice:

1) 1995-99 on OS 7.x – 9.x (mostly, I was unaware, until ’97)
68k’s – PPC’s

2) Late 2005, OS X 10.2.8 Clients and servers (Xserve)
PPC’s, WIN x86

Nancy’s experience was on Windows and Linux.
As you can see she’s written extensively on her experience – I’m still compiling my experiences and proofs for a blog such as hers.

She’s also named it – which fits:


I’m happy to see people not “poo-pooing” this as they did just a short time ago.

Another comprehensive link from a fellow traveler:


And of course Joanna Rutkowska:


(A Systems Administrator in California)

hylas February 22, 2010 2:21 PM

Forgot Nancy’s link:

“Nancy’s experience was on Windows and Linux.
As you can see she’s written extensively on her experience – I’m still compiling my experiences and proofs for a blog such as hers.
She’s also named it – which fits:



Chad February 22, 2010 3:10 PM

If I owe you one million dollars, I have a problem. If I owe you 1 trillion dollars, YOU have a problem. Substitute USA for I and China for YOU. Get my drift?

That would be true if YOU has no ability to come beat on you like a mafia thug. Lets face it while that 1trillion is not small the US is worth far more so they are thinking hey I just bought them cheap. Even if I have to go in a break it first.

Luke Leighton February 22, 2010 3:14 PM

“The simple fact is the Chinese need the US to buy their goods at the moment, and a lot of that has been done on credit.”

yes. and the U.S. Federal Reserve has, as a matter of what it believes, incredibly, to be “sound monetary policy”, consistently being devaluing the US Dollar by literally printing new money on-demand (and without oversight).

this is the reason why China has been giving the U.S. the “cold shoulder”, because as the USD is the “reserve currency” of all these massive loans, the U.S. is reneging on its debt obligation by devaluing its own currency.

this puts Ted Turner’s comment, which i will reiterate here, in a whoole new light:

If you have a really good friend, and you owe them a large sum of money, you probably should not do anything to piss them off.”

GreyGeek February 22, 2010 5:36 PM

The Chinese embed “kill switches” in the electronics they sell us? Probably. We did the same thing when the US Gov had Texas Instruments add disabling capabilities in the chips which control their equipment. During the first Gulf War TI equipment being used in Iraq was sent certain signals which shut them down.

Although Microsoft denies it, they labeled two NT CryptoAPI keys (CSPs) as “NSA Keys”, which they later called an “unfortunate name” for their CSP key signing process to allow exporting of their technology. Did/does the NSA use them to access computers owned by foreign powers running Windows? Probably. However, even now the Microsoft EULA gives 3rd party vendors the right to browse YOUR Windows installation. How do you think they get in past the Windows firewall to do their browsing?

As far as Kapersky and his claims of Linux vulnerability are concerned, he has a horse in that race. His company was trying for YEARS to generate a virus hysteria among Linux users in an attempt to get them to buy his AV software. Despite his claims his software hasn’t found the Linux market he was hoping for, and his claims of Linux vulnerability have not materialized. The pursuit of money causes some to make outrageous claims, and fear drives consumers to do foolish things.

surprised February 22, 2010 6:29 PM

I’m a bit surprised that the legislation discussed in the article seems to be ignored in the comments. Does it mean that the public and the “IT Crowd” don’t mind?

Anonymous by choice February 22, 2010 7:23 PM

The evil empire just continues to expand.

The real question here is whether or not Google (and others) have govt-required back doors in their systems. Strike that… the question is not IF, but how deep they go and how many laws they break.

What we need is a law requiring all such entities, and their employees, to report these govt requirements and remove them.

With the president who was elected BECAUSE HE WAS THE LEAST CONNECTED TO THE DARK SIDE now bound over to it, it seems less and less likely that this evil mess can be corrected a little at a time; it needs to be thrown out and completely redone.

Nick P February 24, 2010 12:52 AM

@ hylas

Well, the site you linked to mentions a combination of good info, incorrect info (e.g. “blue pill undetectable”), and a bit of exaggeration. But, it’s a good resource because it links to sites showing that things below the OS are vulnerable and part of the trusted computing base (TCB) that we depend on to enforce security policy. These things must be secure… or else.

If you want examples of better designs, history is littered with efforts to secure firmware and hardware. TPM is the most popular notion, but it depends on firmware. Look into the A1 VAX security kernel with Alpha PAL code, the Viper chip, the VAMP processor, and the recent AAMP7 partitioning processor from Rockwell Collins. The latter may be the only commercially available high-assurance processor/firmware, and it’s only rated to 100 MHz due to design issues. The best bet for normal systems is to look at the errata for each chip and pick the one with the lowest risk in that area. The motherboard vendor’s efforts must be considered, as they often decide what to correct in the firmware. Of the Core Duos, Intel’s Core Duo Extreme line has the fewest errata. Maybe underclock it for energy efficiency. That’s all I can tell you past custom hardware or slower COTS high-quality processors.
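The errata-driven selection heuristic above can be sketched as a simple scoring exercise: count each candidate chip’s unresolved errata in the security-relevant area, credit items the board vendor’s firmware works around, and pick the lowest residual. All chip names and counts here are hypothetical placeholders, not real errata data.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    open_errata: int        # published errata in the security-relevant area
    fixed_in_firmware: int  # items the board vendor's firmware works around

def effective_risk(c: Candidate) -> int:
    # Only errata that nothing works around count against the part.
    return c.open_errata - c.fixed_in_firmware

candidates = [
    Candidate("chip_a", open_errata=12, fixed_in_firmware=9),  # residual 3
    Candidate("chip_b", open_errata=5, fixed_in_firmware=1),   # residual 4
    Candidate("chip_c", open_errata=8, fixed_in_firmware=7),   # residual 1
]

best = min(candidates, key=effective_risk)
print(best.name)  # chip_c has the fewest unresolved items
```

The real exercise is reading the vendors’ errata sheets and firmware release notes; the arithmetic is trivial, the data gathering is not.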

Jeffrey A. Williams February 24, 2010 3:30 PM

Bruce, et al.,

I am very unsure that NSA will be able to help Google much here as they were themselves hacked very recently and remain exposed to further hacking that will not be trackable back to the perp.

hylas February 26, 2010 11:26 AM

@ Nick P

I attempted 🙂 to link to several sites, I assume you mean Nancy’s.
Dealing with the subversion hack is confusing by design – you’ll have to take the false leads and inconsistencies and set them aside. The goal at this point is consensus, then understanding.
It is highly sophisticated.
I’m sure that when I get my journal of it up it will have incorrect things as well and I’m equally sure she did the best she could – considering the circumstances.

The Blue Pill reference, perhaps, was correct at the time.
As far as exaggeration, you don’t cite anything, but generally speaking, all of us have been called “liars”, “incompetents”, and “amateurs”, I’ve been cussed out by grown men (“professionals”) for a host of perceptions.
I assure you none of those things apply.
It’s the kind of thing that “widens the eyes”, a “you have to be there” type of thing. Explaining it becomes inadequate.
Your design part is enlightening (AAMP7) and informative.
Many thanks.

Alex March 1, 2010 8:09 AM

Well, you don’t have to build visible backdoors into your hardware. If you’re even just a little smart, you’d build a black-box device and sell support. The occasional visiting engineer just replaces some components and retrieves some ‘logs’ (which are, unfortunately, all in Mandarin or a little-known dialect thereof).

And it’s not a movie plot: anybody remember Comverse Infosys?
