Comments

Alan June 10, 2011 1:43 PM

Good rants.

Would someone explain to me what it is about IPv6 that causes Bob Cringely to suggest that IPv6 is more secure than the status quo? He says:

“Yet there is good news, too, because IPv6 and Open Source are beginning to close some of those security doors that have been improperly propped open. The Open Source community is building business models that may finally put some security in data security.

“The U.S. government is a big supporter of IPv6, yet the National Security Agency isn’t. Cisco best practices for three-letter agencies, I’m told, include disabling IPv6 services. From the government’s perspective, their need to “manage” (their term, not mine — I would have said “control”) is greater than their need to engineer clean solutions. IPv6 is messy because it violates many existing management models.”

Allen June 10, 2011 1:55 PM

The NSA fears IPv6? IPv6 is a competitive advantage? And why does Cringely think that the government will come after you if you create a 4096-bit version of AES? First off, no they won’t. Secondly, I remember reading in some security guy’s book that key length is not the vulnerability of security systems, it’s the System part.

So boo RSA, etc, but not one of Cringely’s better rants.

Richard Steven Hack June 10, 2011 1:56 PM

Patrick Gray: “There is no security, there will be no security. The horse has bolted, and it’s not going to be the infrastructure that’s going to change, it’s going to be us.”

Hmmm…I think I’ve heard that somewhere before…

JR June 10, 2011 2:01 PM

@Alan:
Partly because IPv6 has IPsec baked in. It makes it harder for them to snoop.

Jens Alfke June 10, 2011 2:02 PM

I’m surprised you didn’t call out this howler from Cringely, though:

“To this point most data security systems have been proprietary and secret. If an algorithm appears in public it escaped, was stolen, or reverse-engineered.”

He also complains about the US government preventing export of strong crypto systems, which hasn’t been true since the mid ’90s.

Paul McMillan June 10, 2011 2:20 PM

That’s advice intended for internal and government use too. It’s part of a standard policy of caution towards unused features. The IPv6 stack isn’t as battle-tested as the IPv4 stack, and likely has undiscovered flaws. It makes sense to turn it off as non-essential, especially when you’re as worried about security as the government should be.

Ian Toltz June 10, 2011 2:50 PM

From the second article:

“if I was a spy and trying to keep my secrets secret I wouldn’t buy any of these products. I’d roll my own, which is what I think most governments have long done.”

I’m curious how wise that idea is (it’s doubtful we’d ever get an answer as to how accurate it is). Certainly it’s been demonstrated repeatedly that rolling your own security simply doesn’t work, but then the question is whether someone with the brainpower and resources of the NSA could make it work.

I feel kind of silly even asking the question, but the fact that I’m not sure the obvious answer (that they shouldn’t roll their own) is the right answer implies that maybe, just maybe, they could actually roll their own security solution…

Clive Robinson June 10, 2011 2:56 PM

@ Allen,

“The NSA fears IPv6? IPv6 is a competitive advantage? And why does Cringely think that the government will come after you if you create a 4096-bit version of AES?”

I don’t think he’s that up on technology.

I think what he meant was a 4096-bit asymmetric (RSA) key used for exchanging symmetric AES keys in a public-key architecture.

And he’s made other mistakes.

Sniffnoy June 10, 2011 3:27 PM

Meanwhile, the fact that there is a security company named “RSA” continues to confuse.

GreenSquirrel June 10, 2011 5:53 PM

Can anyone confirm if RSA are replacing the SecurID tokens for free?

I have had some conflicting reports – including one vendor I spoke to (today) who claimed RSA were making them re-purchase new tokens if they wanted to keep using the service (and for some reason they are tied into the mechanism so have to…)

tommy June 10, 2011 6:32 PM

Since we all love LULZ for the “I-told-you-so”, permit me to indulge in a little I-told-you-so myself.

Cringely:

“…the U.S. government does not want us to have really secure networks. The government is more interested in snooping in on the rest of the world’s insecure networks. The U.S. consumer can take the occasional security hit, our spy chiefs rationalize, if it means our government can snoop global traffic.”

Me, June 6, 2011 10:19 PM:
http://www.schneier.com/blog/archives/2011/06/open-source_sof.html

“Gee, I wonder why I never heard of the Chinese secure OS? Our Gov … afraid of users having systems the gov can’t crack, just as they formerly prohibited encryption they couldn’t crack? It might not be a lack of commitment, but rather an actual antipathy to widespread deployment of high-assurance OSs. (The trrrists will use them! Just like they use cell phones, and cars, and box-cutters!)”

I would add only that Mr. Cringely is too kind. The US Gov has a vested interest in snooping on insecure US networks and individuals. E. g., getting the goods on racketeers, and a lot of stuff far less justifiable, like the pathetic excuse of “searching for terrorists” in mass fishing expeditions. It’s a lot less risky to exploit an insecure system remotely than to break in and install your keylogger or other bug.

@ Richard Steven Hack:

Yes, you too get an I-told-you-so. You have indeed been telling us that all along. … Unfortunately, the sheeple will listen to their Gov, or rely on Gov to protect them, to the extent that they’re aware of the issue at all. Perhaps the push for true security will have to come from the bottom-up, as it certainly looks like waiting for top-down is futile. But most consumers don’t know any better, either, as you reminded me in a previous comment.

David Bell’s papers, referred to several times over the past week or so, acknowledge that there will have to be “selfless acts of security” — e. g., instead of some philanthropist donating a billion dollars to UNICEF (where third-world dictators probably get most of it), donate that money to the development of high-assurance products. “We” as a whole don’t seem inclined to change – we love our dancing bunnies too much — so “someone” is going to have to give us what we need, even if “we” don’t know that we need it.

Bill Watterson, commenting 20 years ago on the moral of a Calvin and Hobbes cartoon (he owns the copyright and trademarks):

“People will pay for what they want, but not for what they need.” Amen.

Roger June 10, 2011 6:46 PM

IPv6 is far less secure than IPv4 because it lacks IPv4’s most effective end-user security feature: NAT. NAT is being blocked from IPv6 not by security advocates but by special interests.

IPv6 enables user-tracking because users’ addresses will not change. Better tracking yields better marketing and, by extension, monetization. There’s also the resale market for IPv4, most of which is being hoarded in legacy allocations which haven’t been needed for years (ref Interop). Finally, the ability of P2P (friend and foe) to make inbound connections across a firewall will be greatly facilitated by IPv6’s end-to-end transparency.

What consumers really need is an IPv7 which is A) backwards compatible with IPv4 and B) doesn’t require so many untested and unreliable features. It won’t come from the IETF, which has been mired in ‘design by committee’ for years. It may come from a single engineer, someone like William Herrin (ref http://bill.herrin.us/network/ipxl.html). Either way we’re not likely to move beyond IPv4 until IPv6 has been significantly reengineered.

NZ June 10, 2011 7:50 PM

@ tommy
“donate that money to the development of high-assurance products”

Like OpenBSD.

Andy June 10, 2011 8:20 PM

This Cringely article is, like most of his work, cringe-worthy. Maybe his heart is in the right place — I find it hard to tell through all of the half-truths, unwarranted extrapolations, misquotes, and logical fallacies — but his factual basis and deductive reasoning skills wouldn’t have earned a passing grade in my high school debate class.

I wish people would stop linking to him.

Michael lynn June 11, 2011 1:09 AM

@Alan

Depending on how addresses are allocated under IPv6, it can make the search space for worms seeking new targets too large to spread effectively (at least in the dumb old-fashioned way). I don’t know if that is what he meant, but that is one security plus. I’m not saying that on the whole the scales tip towards it being more secure, but that could be what he meant.
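
A rough back-of-the-envelope comparison of the scan spaces involved (a sketch only; the 10,000-probes-per-second scan rate is an arbitrary assumption, not a figure from the comment above):

```python
# Back-of-the-envelope: time for a random-scanning worm to sweep an address space.
ipv4_space = 2 ** 32            # the entire IPv4 address space
ipv6_subnet = 2 ** 64           # host portion of a single standard /64 IPv6 subnet

probes_per_second = 10_000      # assumed scan rate for one infected machine

def years_to_sweep(space, rate=probes_per_second):
    """Time to probe every address once, in years."""
    return space / rate / (3600 * 24 * 365)

print(f"All of IPv4  : {years_to_sweep(ipv4_space):.3f} years")   # ~0.014 years (about 5 days)
print(f"One IPv6 /64 : {years_to_sweep(ipv6_subnet):.1e} years")  # ~5.8e7 years
```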

tcliu June 11, 2011 2:56 AM

Security people love LulzSec because LulzSec generating headlines makes security people feel as important and as needed as they think they should be. That is, security is the most important thing ever – just as accountants think that accounting is the only thing between us and Armageddon.

The elephant in the room? Not the insecurity; that has always been there. The elephant in the room is that we have all this insecurity, have had it, will have it, and the sky isn’t falling. (PSN being down doesn’t count.)

In other news: Door locks easily defeated by power tools and crowbars, people still getting raped, mugged and murdered a lot; but somehow society neither goes under nor decides to just give up and drop all security.

tommy June 11, 2011 2:58 AM

@ NZ:

It certainly looks like they’re trying to do all the right things. Two forked questions:

1) Have they proposed it to US Gov, NSA/DoD, etc., for approval as high-assurance, and possible use?

2) Would a billion dollars make it totally n00b-friendly and induce OEMs to preload it vs. Windoze, including far more hardware and peripheral support, and perhaps emulation for apps designed for Win? We need a way to get from “here” to “there”, and just the setup process wipes out 98% of the home user base. Limited driver support, etc.

They need to make it like what the public is used to:

1) Go to big-box store and buy computer.
2) Go home, take out of box, hook up to modem or whatever, plug it in.
3) Turn it on and go.

If a billion dollars would do that, and I had it to give, I’d give it. But they at least have the right idea, which is more than can be said for the commodity systems.

btw, Nick P. told me that China’s high-assurance OS used “…modifications to an older version of FreeBSD” rather than OpenBSD, FWIW.

http://www.schneier.com/blog/archives/2011/06/open-source_sof.html

Brian June 11, 2011 12:39 PM

Cringely’s article seems pretty ridiculous to me, in all honesty. According to him, government in general and the NSA in particular actively prevent the development and use of secure computer technologies because they want to be able to “snoop” at will.

Certainly an interesting idea…but does he offer any proof to support his theory? Well, no. All he really offers is conjecture that this MUST be the state of things because underlying security technologies can offer high security, so the fact that security products often aren’t secure MUST be a result of government involvement.

Now, given that everyone reading this comment reads Bruce’s blog, I would hope that most people here can see an alternative explanation: companies are TRYING to be secure, but it’s actually a hard problem to solve. Particularly when there’s not an obvious economic incentive to be extra secure.

Richard Steven Hack June 11, 2011 2:16 PM

Brian: As Tony Stark said, “I say, is it too much to ask for both?”

The issue is BOTH that the present level of software development technology (and more importantly, practice) is just so pathetically bad that we can’t engineer decent security (and for the most part no one wants to because it “costs too much” and “it’s too hard”), AND that the government is partial to security solutions that it (and preferably it alone, although that’s not happening) can penetrate.

I frequently use the case of Microsoft allowing the NSA to vet Vista. We all know what had to have happened. NSA finds X vulnerabilities and tells Microsoft about X-Y vulnerabilities. There’s no possible way it could have gone otherwise. Which means the NSA will always be able to penetrate Microsoft Windows no matter what Microsoft does (short of shutting out the NSA from early evaluations).

Cringely has his problems and is prone to over-extrapolation but basically he’s correct. He’s just focusing on one end of the issue.

It’s not clear that someone private or the government dumping a billion dollars into development of a “truly secure OS” would result in anything of the kind. Dumping that money into general research towards a “truly secure computing infrastructure” which includes an OS, the Internet protocols, social engineering training, etc., would be better. What would be best is dumping that money into developing a true ENGINEERING METHODOLOGY of computer systems, hardware and software.

But it’s not even clear that would work, since there have been billions dumped into that over the last four decades. You can read the computing science literature going back thirty years and see that research has been done for decades on a proper system development methodology. We’re no closer today to actually getting people to do it than we were in the mid-to-late ’70s, when, early in my exposure to computers, I discovered the tenets of “structured programming”.

All of which contributes mightily to my bottom line meme: There is no security. And it’s because humans are just that: humans.

sam disman June 11, 2011 2:38 PM

What makes you think you have been secure over the years? Govs have always said we need to get in when we want; nothing has changed. Want safe? Compute OFF the net and send later.

Clive Robinson June 11, 2011 9:03 PM

@ Brian,

“Certainly an interesting idea…but does he offer any proof to support his theory? Well, no. All he really offers is conjecture that this MUST be the state of things because…”

There is the old maxim about “absence of proof…”

That being said, we know the NSA has two conflicting personalities, because that is what their charter basically asks for:

1. Use talent to protect American comms.
2. Use talent to breach all other comms.

And this was less than a decade after WWII, just as the Cold War was getting interesting but before the concepts of data comms and computers had really gotten underway, with the public view of “communications” being basically voice and vision at a distance.

For the military, however, command and control was mainly about data comms, not voice, because voice was too difficult to secure and was thus reserved for battlefield communications, where security was a lower priority than timeliness of delivery.

Also known to the UK and US military leadership, and kept quiet, was the fact that crypto systems such as the German Enigma for Morse traffic and the higher-security teleprinter systems (Fish etc.) were all breakable. It was not until Fred Winterbotham’s book in the 1970s that the rest of the world became aware of “Ultra”.

It was a real shock to most non-Western countries that had purchased old German Enigmas and US Hagelin machines as war surplus to use for their diplomatic traffic. Even though the original machines were obsolete by that time, these countries were still buying augmented systems, using the same design technology, from Crypto AG in Switzerland.

As for NATO military command-and-control comms, they used a variation of the NSA ECM machine, still based on rotors and code wheels, even though US and UK mainline command-and-control and diplomatic traffic had gone electronic. Initially this was mainly in the form of stream ciphers; however, developments in aircraft Identification Friend or Foe (IFF) systems had started serious development of block ciphers.

It was strongly suggested by David Kahn that the NSA had stolen work by independent crypto developers, handed it on in a ‘poisoned chalice’ form to others, and kept “back channels” open into development at the only independent crypto equipment supplier (subsequently brought to light in a more serious form in 1992, when Hans Buehler, a Crypto AG senior sales representative, was arrested on espionage charges by the Iranians and interrogated for nine months).

What is now known is that a number of Boris Hagelin’s “coin counting” mechanism field cipher systems were “improved” by the NSA, and that they had various key strengths ranging from weak to moderately strong (for the time). Importantly, it was not obvious in any way from examining the mechanical design which keys were which; only certain mathematical analysis would show which were strong and which were not.

It has been suggested that this “quirk of design” was one way the NSA could resolve the differences between its two charter aims. That is, the NSA, being responsible for issuing key material, could ensure that only the stronger key settings were used by the US military, whilst others using captured systems would be unaware of the key-strength differences and would end up using both strong and weak keys, to the NSA’s benefit.

By the mid-1970s single-chip CPUs had arrived and there was a major push to put crypto onto them. It had, however, become clear to most financial organisations ten or more years prior to this that strong crypto was a necessity. Thus in 1973 the US Government put out a request for crypto system designs via the National Bureau of Standards (NBS, NIST’s forerunner) under the guidance of the NSA. None of the candidates were of any use, some being little more than modified versions of hand ciphers that had been broken prior to WWII.

In 1974 NBS issued a second request. The winner of the selection process came from an IBM design team and was based on Horst Feistel’s Lucifer cipher. However, the design was felt to be unsatisfactory and was subject to significant changes before eventually becoming the DEA of the FIPS 46 DES standard.

It is now known that the NSA had a very strong influence on changing the original IBM submission (see information from the likes of Don Coppersmith). Some of it was good (improving the S-boxes), some of it bad (reducing the key length), and there have been various comments that the NSA wanted the key length to be only 48 bits and that it was a compromise between that and IBM’s original 64 bits which resulted in the 56 bits of the standard.
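
For scale, a quick sketch of what those key lengths mean for a brute-force search (the numbers below are simply 2^48, 2^56 and 2^64; each bit removed halves the keyspace):

```python
# Brute-force keyspace sizes for the DES key lengths discussed above:
# 48 bits (reportedly the NSA's preference), 56 bits (the standard), 64 bits (IBM's original).
for bits in (48, 56, 64):
    keys = 2 ** bits
    shrink = 2 ** (64 - bits)
    print(f"{bits}-bit key: {keys:.2e} keys ({shrink:,}x smaller than the 64-bit space)")
```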

Right from the start DES was attacked because of the seen and unseen hands of the NSA. However, it is reasonably clear that as the only publicly available algorithm that had been “blessed” by the NSA it was going to receive a lot of interest. On the academic side it opened up areas of research that had not received much attention and kick-started a whole new field. It became the standard by which others were judged. But important as DES was, it was FEAL that brought about its downfall.

What was clear from a practical perspective was that the DEA was effectively an “anti-software” design. That is, the design used functions that meant implementing it in software would be considerably more difficult, and more expensive in execution time, than it needed to be (things like the permutations are effortless in hardware but high-effort in software). Some of these functions (the initial and final permutations) actually contributed nothing to the strength of the cipher.

The Fast data Encipherment ALgorithm (FEAL) was presented as a more software-friendly alternative to DES in 1987 by Akihiro Shimizu and Shoji Miyaguchi, both working for Nippon Telegraph and Telephone; the algorithm was also a Feistel-round design. It received considerable attention, but it had weaknesses: it was attacked right from the very start, at the conference where it was presented. The attack was improved and formed the beginnings of what later became differential cryptanalysis, which was the first attack to bring DES under serious question. It became clear that DES’s strength under DC was effectively the same as its brute-force strength, giving rise to the idea (later confirmed by Don Coppersmith) that the NSA was aware of DC at the time of DES’s design.

However, subsequent versions of FEAL, though strengthened against DC, opened up other avenues of attack that gave rise to linear cryptanalysis, and this was the academic nail in DES’s coffin.

However, the NSA had spotted that DES had escaped its control and that the academic community was catching up quickly. Whilst fighting a rearguard action to keep DES alive, it started a series of developments, one of which delivered, in 1980, the Skipjack algorithm that was to become the basis of the key escrow program through the classified Clipper chip.

Skipjack was based on research work in combinatorics and abstract algebra carried out prior to and during WWII. The design was classified, and this created a significant backlash; the NSA, yet again wrong-footed, went on another rearguard action and belatedly invited a small group of academics to review both the design and the process behind it in 1993.

Although it was given a clean bill of health by the team, the whole key escrow system received such a backlash that it was cancelled; however, Skipjack lived on in other US Government systems. In 1998 the Skipjack algorithm was suddenly declassified and made public due to a significant problem the NSA had brought upon itself: the algorithm, which was only ever intended to exist in Capstone hardware on Fortezza cards, had to be put into software due to production difficulties.

Skipjack has some odd properties which Bruce described in the July 1998 Crypto-Gram ( http://www.schneier.com/crypto-gram-9807.html#skip ).

Of primary interest was that it was the first electronic-age NSA “in-house design” ever to become public. Although it was secure, the design was very, very brittle, in that even very small changes would drastically reduce its security.

This brittleness harked back to the key-strength tricks used on the mechanical Hagelin cipher machines, in that anyone discovering the design without fully understanding its mathematics would be tempted to use it in a slightly modified way and would thus lose significant security.

It had in the meantime become abundantly clear that DES was now a “dead duck”, and NBS’s successor NIST started a new encryption competition to replace DES with the “Advanced Encryption Standard” or AES. On the face of it, although NIST was in consultation with the NSA, the competition was held openly.

However, the AES competition may have been rigged by its rules (commented on but ignored at the time). Put simply, the candidates had to submit designs for software and hardware, and all the emphasis was placed on speed and efficiency.

The result was software and hardware designed to meet the competition criteria, not for security. The resulting designs were public and ended up “as is” in by far the majority of products and software libraries, and still are in many today.

The problem, as the NSA would have been only too well aware, is that efficiency and security are often at opposite ends of a see-saw. In general, unless you take specific precautions (many of which are classified), an efficient design will open up a significant number of side channels through which information on the plaintext, or worse the key, will haemorrhage.

And before the ink was dry on the draft of the new FIPS AES standard, a practical demonstration of AES key recovery across a local area network was put up on the net.
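
As a minimal illustration of the kind of leak being described (not the AES cache-timing attack mentioned above, just the simplest possible timing side channel; the measured gap is tiny and noisy, but the principle is the point):

```python
import hmac
import time

SECRET = b"correct-mac-value"   # stand-in for a secret value an attacker is guessing

def naive_equal(a, b):
    """Early-exit comparison: running time depends on how many leading bytes match."""
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False        # bails out sooner for an earlier mismatch
    return True

def timed(compare, guess, reps=200_000):
    start = time.perf_counter()
    for _ in range(reps):
        compare(SECRET, guess)
    return time.perf_counter() - start

wrong_first = b"Xorrect-mac-value"
wrong_last  = b"correct-mac-valuX"

# The naive comparison runs longer when more of the prefix matches;
# hmac.compare_digest takes essentially the same time either way.
print("naive, mismatch at byte 0 :", timed(naive_equal, wrong_first))
print("naive, mismatch at byte 16:", timed(naive_equal, wrong_last))
print("const, mismatch at byte 0 :", timed(hmac.compare_digest, wrong_first))
print("const, mismatch at byte 16:", timed(hmac.compare_digest, wrong_last))
```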

Bob Cringely may not have given any evidence to support his NSA view, and direct evidence may not be available in some people’s view. However, the information I have given above certainly suggests a probability that they have at times in the past acted in that way, directly or indirectly weakening security.

Now the question is: do you have to prove it before taking precautions, or do you just assume it’s sufficiently probable and take the precautions anyway?

Prudence suggests it’s best to assume the worst when it comes to security.

That being said, at the end of the day the NSA still has to play the dual-role game, and as I’ve indicated there is plenty of information out there showing where they may well have put their hand in, one way or another, to ensure that the game goes on.

But what of the future? There are a number of areas in which the NSA has its life made easier.

Firstly, “standardised plaintext” in the headers of files and comms protocols. As seen with WEP, an attack exploiting a design weakness is significantly strengthened when you know where and what plaintext to look for.

Secondly, “side channels”: algorithms might well be secure when considering only their input-to-output mapping (the so-called “data at rest”), but what about the dynamic behaviour of an implementation?

Some algorithms are inherently more secure than others in this respect, whilst others need specific caution when implemented; importantly, some algorithms behave very badly when implemented on top of certain common hardware architectural features such as caches.

Thirdly, algorithms don’t exist in isolation. In the real world they exist not just in hardware and software implementations but also in standards: not just standards for modes of usage, but all manner of standards across the entire software and hardware stacks.

Although an algorithm can be secure, the manner in which it is used within a protocol can make it considerably less secure. Protocols come into existence in two ways: as standards, and by being compatible with existing implementations.

A knowledgeable person can make a secure standard brittle in design. And by providing a first implementation that, although compliant with the standard on paper, is actually broken, they effectively force others to break their own designs by making them compatible.

Another trick is legacy compatibility: a standard contains a hidden or unknown security flaw, and if and when it’s discovered, the standard is reissued as Rev 2 with the protocol flaw fixed. However, for legacy reasons, practical implementations maintain compatibility with the broken Rev 1 of the standard. In most cases the implementations can, via a man-in-the-middle attack, be transparently forced back into the insecure Rev 1 working mode.
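
A minimal sketch of that downgrade pattern (hypothetical “Rev1”/“Rev2” labels, not any real protocol; real-world analogues include protocol-version fallback attacks):

```python
# Hypothetical revision negotiation illustrating the legacy-compatibility downgrade trick.
SUPPORTED = ["Rev2", "Rev1"]        # client prefers Rev2 but still speaks the broken Rev1

def negotiate_naive(server_offer):
    """Accept any mutually supported revision: a man-in-the-middle who strips
    "Rev2" from the offer silently pushes the connection back to broken Rev1."""
    for rev in SUPPORTED:
        if rev in server_offer:
            return rev
    raise ConnectionError("no common revision")

MIN_SECURE_REV = "Rev2"

def negotiate_with_floor(server_offer):
    """Same negotiation, but refuse to run below a minimum revision, turning a
    downgrade into a hard failure rather than a silent fallback."""
    rev = negotiate_naive(server_offer)
    if rev < MIN_SECURE_REV:        # lexicographic compare is fine for these two labels
        raise ConnectionError(f"refusing downgrade to {rev}")
    return rev

print(negotiate_naive(["Rev2", "Rev1"]))   # normal case -> "Rev2"
print(negotiate_naive(["Rev1"]))           # MITM-stripped offer -> silently "Rev1"
try:
    negotiate_with_floor(["Rev1"])         # raises instead of silently downgrading
except ConnectionError as e:
    print("floor:", e)
```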

I would expect the NSA, as part of its working brief, to be influencing the design of protocols and standards to achieve these (for them) desirable outcomes.

Even if they are not, these things happen anyway, because developers know too little about security and don’t usually have the luxury of the time or other resources needed to ensure their products are secure.

As an individual you are at liberty to decide the level of caution you take; as an employee, however, this is generally dictated by management, who do not want to sacrifice “short-term shareholder profit” for “long-term security”. Put simply, this is often because their own careers are usually judged by the metric of “shareholder gain” and they don’t seriously expect to be in any given job for more than a year.

Brian June 11, 2011 10:11 PM

@Richard Steven Hack:

“I frequently use the case of Microsoft allowing the NSA to vet Vista. We all know what had to have happened. NSA finds X vulnerabilities and tells Microsoft about X-Y vulnerabilities. There’s no possible way it could have gone otherwise.”

Again, that’s an interesting idea, but I feel like it requires more of a supporting argument than “there’s no possible way it could have gone otherwise”. What makes all alternative possibilities so implausible that they can be dismissed with no proof at all?

Brian June 11, 2011 10:26 PM

@Clive Robinson:

Definitely a lot of interesting history there, and I definitely don’t want to give the impression I’m in favor of blindly trusting the government either when it comes to security. Clearly history suggests some prudence might be a good idea.

But I think Cringely’s article goes way too far the other way. Instead of being a large government agency with an often conflicting agenda (as Clive Robinson pointed out by giving its two missions), the NSA becomes a Bond-villain organization, able to negatively influence every security product developed by everyone.

RobertT June 11, 2011 11:17 PM

OT Any thoughts on the IMF attack?
http://www.nytimes.com/2011/06/12/world/12imf.html

I talked with an 1818 club insider, who said that the exact scope of the breach is unknown but that the nature of the data they were accessing suggests it was a very deliberate and controlled breach.

Unfortunately, I think this will be swept under the rug so that everyone can continue to pretend that these organizations are safe repositories for financially significant data.

Richard Steven Hack June 12, 2011 12:57 PM

Brian: “I feel like it requires more of a supporting argument than “there’s no possible way it could have gone otherwise”. What makes all alternative possibilities so implausible that they can be dismissed with no proof at all?”

Simple.

1) The nature of the state.

2) The nature of intelligence agencies in ANY state.

3) Human nature in general.

If you really believe an intelligence agency is going to be handed the operating system that will be running a billion or more PCs worldwide in every country and not take advantage of that opportunity to learn how it can be compromised – indeed, that’s WHY they were given the product! – and not withhold at least one such vulnerability… Well, that’s just naive.

In fact, if the NSA DIDN’T do that, everyone there should be fired! 🙂

It boils down to having some experience in my sixty two years with human nature and human history. Perhaps yours has been different…or shorter.

Richard Steven Hack June 12, 2011 1:08 PM

RobertT: This isn’t the first time such an organization has been hit. The World Bank, IIRC, was hit persistently over a period of many months a while back. Rumor was dozens of servers were compromised.

Ah, here’s a sample of that one…

World Bank Under Cyber Siege in ‘Unprecedented Crisis’ Friday, October 10, 2008
http://www.foxnews.com/story/0,2933,435681,00.html

Quotes:

But sources inside the bank confirm that servers in the institution’s highly-restricted treasury unit were deeply penetrated with spy software last April. Invaders also had full access to the rest of the bank’s network for nearly a month in June and July.

In total, at least six major intrusions — two of them using the same group of IP addresses originating from China — have been detected at the World Bank since the summer of 2007, with the most recent breach occurring just last month.

In a frantic midnight e-mail to colleagues, the bank’s senior technology manager referred to the situation as an “unprecedented crisis.” In fact, it may be the worst security breach ever at a global financial institution.

According to internal memos, “a minimum of 18 servers have been compromised,” including some of the bank’s most sensitive systems — ranging from the bank’s security and password server to a Human Resources server “that contains scanned images of staff documents.”

One World Bank director tells FOX News that as many as 40 servers have been penetrated, including one that held contract-procurement data.

The bank’s chief information officer, Guy De Poerck, has engaged Price Waterhouse Coopers to do a confidential million-dollar assessment that is expected to tell him what’s going on in his own department. And a 22-page internal report by a computer security company named MANDIANT, dated August 18, fleshes out many details of the June-July breaches. But very few people have ever seen the report, and nobody has been permitted to retain a paper copy.

At the same time, De Poerck has been downplaying the problem to the bank’s 10,000 rank-and-file staffers as mere intrusion “attempts” in his e-mails. Yet most of those staffers have been asked to change their password three times in the past three months.

“We’re not talking about hackers playing games or messing up our website,” insists a senior member of the bank’s IT department at its Washington headquarters. “It’s about the FBI coming last summer and saying, ‘You should take a look at your systems because we think something weird is going on.’ It’s about the intruders knowing what information they wanted — and getting to it whenever they wanted to. They took our existing data stores and organized them in a way that they could be easily accessed at will.”

In plainspeak: “They had access to everything,” says the source. “They had the keys to every room at the bank. And we can’t say whether they still do or don’t until we fully and openly address what’s happening here.”

The World Bank’s data center is literally a treasure trove of vital financial information from around the globe. As a clearinghouse for financial data from both governments and companies, the bank’s computers could provide intruders with both a financial and intelligence gold mine — from inside information on bids and contracts to the minutes of confidential board meetings.

If the bank takes a position in a currency, for example, that currency usually moves in response to the bank’s actions. Stocks and bonds can also swing up and down based on World Bank announcements. “If you know beforehand that the bank is going to put an order in for oil pipelines in Chad or healthcare systems in India, you can actually make a good amount of money,” says one insider.

Although the bank typically provides only a fraction of the financing for a project, its influence on those projects is immense. Private corporations see the bank’s stamp of approval as a guarantee that their own larger investments will be safe — and profitable. Knowing in advance what projects the bank’s board will reject could be just as profitable.

Some insiders fear that contractors — perhaps even governments — might be seeking advance knowledge on the status of the bank’s anti-corruption probes. “The bank knows the books of countries almost as well as the countries do — including the corruption at times,” says one insider.

The first breach of the bank’s secrets was discovered in September, 2007, after the FBI —while at work on a different cybercrime case — notified the bank that something was wrong. The feds pointed to a part of the bank’s network that led out of the Johannesburg hub of the International Finance Corp. (IFC), a bank arm that lends to the private sector.

Within a week of the tip, teams of bank investigators sent to Johannesburg discovered that intruders had gained full and total access to all of IFC’s worldwide information — including all incoming and outgoing e-mail — for at least six months. “They were downloading everything and anything,” says one insider, who says that IFC’s monitoring systems were extremely weak. “They [intruders] had full access.”

Bank sources tell FOX News that Johannesburg is one of several secret “hubs” containing a “common data store” (or CDS) that the World Bank Group has established around the globe. In layman’s terms, a CDS is the cyber-world’s version of a bomb shelter where every piece of an organization’s data is replicated and backed up in case of a data-wipeout at headquarters in Washington. While it’s known that IFC data was accessible at the hub, it remains unclear if all World Bank Group data was compromised there.

[MY NOTE: Heh, heh – remember the downloaded data that Timothy Olyphant’s character took advantage of in “Live Free or Die Hard”? Right out of his playbook…]

The second major breach — of the bank’s treasury network in Washington — was discovered in April 2008. The World Bank’s Treasury manages $70 billion in assets for 25 clients — including the central banks of some countries. It carries out substantial collaborations with the world’s finance ministers on public wealth and debt management, runs an active bond-trading desk in Washington, and does everything from currency trading to capital markets financings.

After a forensic analysis of the treasury breach, bank investigators discovered that spy software was covertly installed on workstations inside the bank’s Washington headquarters — allegedly by one or more contractors from Satyam Computer Services, one of India’s largest IT companies.

The software — which operates through a method known as keystroke logging — enabled every character typed on a keyboard to be transmitted to a still-unknown location via the Internet.

Upon its discovery, insiders report, bank officials shut off the data link between Washington and Chennai, India, where Satyam has long operated the bank’s sole offshore computer center responsible for all of the bank’s financial and human resources information.

Satyam was also banned from any future work with the bank. “I want them off the premises now,” Zoellick reportedly told his deputies. But at the urging of CIO De Poerck, Satyam employees remained at the bank as recently as Oct. 1 while it engaged in “knowledge transfer” with two new India-based contractors.

Then came the June-July breaches in Washington. They were similar to the Johannesburg attack, as the same group of IP addresses from Macao were used.

This time, however, the cyber-burglars used a different spyware. They broke into an external server run by the bank’s private sector development unit. They were able to acquire passwords — including the password for the systems administrator.

That enabled them to jump into the servers at MIGA, the bank’s giant insurance arm. It was there that they captured the security administrator’s password as he was logging on to his computer.

It took ten days for bank officials to detect that they’d been invaded. Once they did, they shut down all external servers, except for e-mail — which it turns out the invaders were already using as their entrance point. By the end of July the invaders “had completely mapped out the topography of the bank’s information systems,” says one expert — “where everything was, the types of servers, and the types of files on the servers.”

Today the total cost to maintain the bank’s information infrastructure is at least $280 million per year. But according to one disgruntled bank staffer, “We don’t even have an internal search engine that works.”

UPDATE: After FOX News published its story, a World Bank spokesman issued the following statement:

“The Fox News story is wrong and is riddled with falsehoods and errors. The story cites misinformation from unattributed sources and leaked emails that are taken out of context.

“Like other public and private institutions, the World Bank has repeatedly experienced hacking attacks on its computer systems and is constantly updating its security to defeat these. But at no point has a hacking attack accessed sensitive information in the World Bank’s Treasury, procurement, anti-corruption or human resources departments.”

FOX News stands by its story.

End Quotes

Todd Knarr June 12, 2011 3:11 PM

I’ll make one comment about Shostack’s rant: the problem isn’t lack of communication by security people. The problem is that even when it’s reduced to business terms, management simply won’t believe it without evidence. Except that they won’t authorize an actual, true test. At most they’ll authorize an analysis, then dismiss the results because there’s no evidence of actual break-ins. And if security people try to run a true test without authorization, management uses the results as grounds to fire them. And when breaches happen, it’s rarely the security people covering them up. It’s management, who’d be embarrassed if it came out that the very things they dismissed for lack of evidence had actually gone and happened.

I suspect security people are chortling at Lulzsec’s antics because Lulzsec is doing what the security people can’t: rub management’s face in solid evidence that they can’t ignore or dismiss and whose source they can’t just fire or bury.

RobertT June 12, 2011 10:16 PM

@Richard Steven Hack
“It’s about the intruders knowing what information they wanted — and getting to it whenever they wanted to. They took our existing data stores and organized them in a way that they could be easily accessed at will.”

That’s about what I was told about the IMF attack. Basically hackers owned the databases, and were extracting whatever information they wanted. The breach was so blatant that their most secret data was being openly accessed from insecure web links. This sort of incompetence is unforgivable, but typical for these two organizations.

Jaime June 13, 2011 12:44 PM

@Todd Knarr
“The problem is that even when it’s reduced to business terms, management simply won’t believe it without evidence. Except that they won’t authorize an actual, true test.”

The problem really is that the security industry has been blowing minor things out of proportion for the past twenty years and management stopped listening a long time ago. Losing a hundred thousand credit cards isn’t going to shut a business down — ask Sony.

Richard Steven Hack June 13, 2011 2:13 PM

Jaime: And that in turn reflects on management that doesn’t care about its customers, which has been true for a lot longer than twenty years.

So while it’s unfortunate for the customers, I really have no sympathy with any corporation that gets nailed to the wall by hackers. Most of them deserve it.

Richard Steven Hack June 13, 2011 2:14 PM

Oh, and Sony in particular, since they were using rootkits to protect their music intellectual property not too long ago regardless of the impact on their customers.

So hackers costing them a few hundred million dollars is payback.

Jaime June 13, 2011 3:14 PM

I agree that Sony shouldn’t get any sympathy. But the fact remains that most companies think about the bottom line and “OMG!!! OUR WEBSITE IS GOING TO GET OWNED” translates into management speak as “It will cost a few days of bad press and three hours worth of profit”. We have bad security because the costs are distributed in such a way that a company would be stupid to invest heavily in fixing it. Even industries that seem to be trying, like the payment card industry, are only putting out a barely effective checklist of simple security techniques that reduce a company’s liability to almost zero without improving security a whole lot.

The simple fact is that it’s usually cheaper to reimburse customers for stolen money than it is to make a system secure enough to reduce the likelihood of losing the money in the first place to almost zero.

Dirk Praet June 13, 2011 8:36 PM

Cringely may have his heart in the right place, but I’m less than impressed with some of the logic and statements he’s making in this article. I think I’d rather have him spill his guts over other topics or perhaps team up with a British ranting poet called Attila the Stockbroker. That would definitely make for an interesting double bill on a dreary summer evening.

Todd Knarr June 14, 2011 11:16 AM

@Jaime: “Losing a hundred thousand credit cards isn’t going to shut a business down — ask Sony.”

And I think that’s the problem: Sony isn’t bearing most of the costs of this. I think companies would take a different attitude if Sony got ordered by a court to not just provide credit monitoring, but to pay all the fees for cardholders to get issued new cards, cover any and all fraudulent charges made, cover any and all penalties and other costs resulting from having cards invalidated or fraudulent charges having been made, and pay cardholders at an hourly rate for actual documented time the cardholder spent dealing with this (e.g. calling places to get accounts set up on a new card number) or, lacking documentation, the average time spent by cardholders who do have documentation. In short, make the company that got breached bear the actual costs of the breach to the people whose information was compromised.

Jaime June 14, 2011 12:11 PM

Yes, that is the problem. Bruce has mentioned many times that the economics of security are currently broken. The cost of bad security is borne by the victims and third parties rather than those responsible for securing the system. However, security professionals often seem to be reacting to “how it should be” rather than “how it is”. That’s why they get ignored.

The economics are so broken that it’s often better to be ignorant of your security problems than to either be aware and choose to fix them, or to be aware and choose not to fix them. Fixing problems costs money, consciously ignoring problems can come back to bite you in court, being ignorant sometimes only costs an apology.
