Brian Snow Sows Cyber Fears

That’s no less sensational than the Calgary Herald headline: “Total cyber-meltdown almost inevitable, expert tells Calgary audience.” That’s former NSA Technical Director Brian Snow talking to a university audience.

“It’s long weeks to short months at best before there’s a security meltdown,” said Snow, as a guest lecturer for the Institute for Security, Privacy and Information Assurance, an interdisciplinary group at the university dedicated to information security.

“Will a bank failure be the wake-up call before we act? It’s a global problem—not just the U.S., not just Canada, but the world.”

I know Brian, and I have to believe his definition of “security meltdown” is more limited than the headline leads one to believe.

Posted on December 2, 2010 at 7:06 AM • 31 Comments

Comments

greg December 2, 2010 8:29 AM

You mean a meltdown like Y2K? Oh, that's right, it didn't happen because of all that expensive "Y2K readiness" [a rock] you paid for.

I always say that when someone is predicting "total" doom, ask them what they're peddling, because it won't be free.

Clive Robinson December 2, 2010 8:29 AM

@ Bruce,

“I know Brian, and I have to believe his definition of “security meltdown” is more limited than the headline leads one to believe”

Yup, I'd rate him as one of the good guys from the NSA; he's usually one of the sharpest tools in the factory, and his previous calls on the state of things 5, 10 and 15 years out have been very close.

Which is what makes this at first sight so surprising. However, over the past few years he has become a little more forthright and less guarded about what he says in terms of where things go wrong and why.

Now the question is: is he being taken out of context, or is he bang on the nail again…

If it's not the former then we may be in for some interesting times ahead. And if so, my money would be on the more technical side of APT, but without the hype.

For instance, Stuxnet has recently shown that ideas thought by many to be just "way-out theoretical attacks" (air-gap crossing & false code signing) are quite practical, and it effectively kicks away two heavily relied-upon security mechanisms. And in some ways its clarion call has been ignored in the game of guessing the sponsor and target countries.

Dave December 2, 2010 8:40 AM

Bruce,

The language does seem a bit extreme, but could you clarify a bit more what you mean by “his definition of “security meltdown” is more limited than the headline leads one to believe.” What, in your understanding, is his definition, and what is he speculating will occur? A Stuxnet-style attack within the US originating from a foreign power? I’d also be curious to hear your opinion – are we heading for some sort of “security meltdown,” and if so how serious will it be?

Section9_Bateau December 2, 2010 8:53 AM

I really respect Snow, and agree that he is one of the best individuals in the field. Based on what I have heard in the past from him, I REALLY hope things are not going to go the direction he expects.

Now that I have spent about 6 months doing high-level corporate security, honestly, I am afraid of using almost every system I have worked on. Even months after getting my reports, some of the clients I have worked for still haven't managed to do things like use HTTPS to protect PIN codes for e-commerce. Shudder.

Jade December 2, 2010 9:01 AM

There’s been a meltdown in thoughtful reporting of information assurance matters. I’ve been in this line of work for a fairly long time. There’s always been a thick layer of fertilizer, but it’s gotten so deep that many people can no longer find soil or seed. (ok, so that’s probably not the best metaphor)

It seems like everyone I know is now an "expert" in information security. Oh, I suppose that should be "cyber" security. Something happens, as things happen to do, and the "experts" cry out that we should have listened to them; we should have read their paper; we should have purchased their product; we should hire them as consultants; they should speak at our conferences….

Hey, now I’m having a meltdown! 🙂

Phillip December 2, 2010 9:02 AM

THE SKY IS FALLING! THEY WILL CAUSE THE NEXT RECESSION — NAY DEPRESSION….

okay, no. not really; but it makes good headlines!

David Thornley December 2, 2010 9:16 AM

@greg: Y2K was a non-event because of a lot of work that went on before the fact. When I was working on it, I found and either fixed or updated code that would have caused serious problems if left untouched.

There was a lot of excess hype (most embedded processors really don’t care what the century or year is), but there were real problems out there that were mostly avoided by intelligent planning and actions.

Andre LePlume December 2, 2010 9:22 AM

Well, a guy named “Snow” speaking in Canada, can be forgiven for being excessively worried about a meltdown, can’t he?

Brandioch Conner December 2, 2010 9:54 AM

@Jade
“… We should have purchased their product; …”

I think that is the key.

Meanwhile, I’m going to predict that nothing worse than Slammer / Blaster / Code Red will happen in the next two years.

swhx7 December 2, 2010 10:06 AM

There has been a steady stream of alarmist hype about computer security in recent years. And to the wary watcher, most of it looks like government and industry building up a pretext for a takeover of control of private systems. The problems are real, but the proposals have been toxic menaces such as the “internet driver licence”, the “trusted computing” scam, and massive spying schemes (deep packet inspection by ISPs, etc.).

Snow, however, is quoted in the article as referring to "too much external control" as a security hazard. I couldn't find a fuller transcript, but if he meant that the device owner should be in control of the device, this is a welcome glimmer of good sense on this topic.

Trichinosis USA December 2, 2010 10:10 AM

There’s a webinar being offered by a firm that supposedly built a blocker for the Stuxnet worm. They’re claiming that now that the worm is more fully understood, it will be modified to target different types of infrastructure. That’s quite believable.

http://www.marketwire.com/press-release/CoreTrace-Host-Webinar-With-Richard-Stiennon-Titled-Stuxnet-Variations-Coming-Your-Computers-1362818.htm

The post-9/11 military/industrial complex thrives on contrived and self-fulfilling crises. Stuxnet having been sent toward a “safe” target with minimal worldwide reaction and absolutely no effort to prosecute the originators, it will of course now be modified and sent toward targets which can then be fleeced for “protection” money. How is this surprising? How does this vary from the formulaic fearmongering that’s been getting steadily thrown at us for the last decade?

dmc December 2, 2010 10:12 AM

Long weeks to short months??? Is that an attack timeframe he’s talking about? Does he know something that we don’t? Why make such a specific prediction?

wiredog December 2, 2010 10:20 AM

@greg,
I remember seeing the Naval Observatory reporting the date as January 1 19100. Wish I’d gotten a screenshot.
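
(For the record, that "19100" is the classic pattern of gluing a hard-coded "19" onto a years-since-1900 counter. A minimal illustrative sketch of the bug, not the Observatory's actual code:)

    # Many APIs (C's struct tm, Perl's localtime) report the year as an
    # offset from 1900. Code that hard-coded the "19" prefix broke in 2000.
    year = 2000
    years_since_1900 = year - 1900                  # 100 in the year 2000

    buggy = "January 1 19" + str(years_since_1900)  # concatenation, not arithmetic
    fixed = "January 1 " + str(1900 + years_since_1900)

    print(buggy)   # January 1 19100  <- the display described above
    print(fixed)   # January 1 2000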

Maj. Kusonagi December 2, 2010 10:39 AM

@Bateau: That's what reading this blog is all about. It demystifies our perception of risks, allows us to go "beyond fear", and helps us make the right business decisions.

I remember some alarmism about the increasing density of airplanes per airport and prognostications about an impending "meltdown". A meltdown did occur (9/11), but not in the way the Cassandras said it would happen. It did reduce the density and allowed the Cassandras to rest for another 10-15 years.

Be careful what you wish for…

Clive Robinson December 2, 2010 10:39 AM

@ Bruce,

From what I can find, it appears the context was public cloud computing with shared access on the servers.

As he has said on this subject before, "you don't know what you are cuddling up to".

For those thinking Brian is selling a "product": not really. He is pushing very hard at the lack of high-assurance methods in general use, and more specifically he believes that any data on external shared servers in the cloud is made available to others (which it most definitely is, via cache attacks and network monitoring; then there is data discovery by inference from wide-source aggregation).

He is right if the assumption holds true that companies jump into the cloud with their data without taking suitable precautions. Sadly the precautions are, as quite a few NSA employees are aware, "unknown". That is, it appears in practice that:

1, Data cannot be anonymized.
2, Data is available in transit, even if encrypted, if the encryptor or decryptor is "online" (which it most definitely is with generally available public cloud).
3, Data from many sources can easily be aggregated and inferences drawn. That is, the sum is most definitely much greater than the parts.

So yes, I think he is right within the context of public cloud computing, but we have had this discussion before, so he's not actually saying anything new that those who have looked into it with a reasonable degree of knowledge have not already concluded for themselves…

His aim appears to be to make researchers and graduates aware of this so they can find the "precautions" that are going to work, or be aware of where data is going to leak and how.

For those that want a bit more "knowledge" on the whys of this, read:

http://eprint.iacr.org/2010/594.pdf

And then realise what this does for a potential attacker running on the same cloud server you are uploading your data to or downloading it from, even if it is heavily AES encrypted.
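
To make the flavour of that concrete, here is a toy, purely illustrative simulation of the prime+probe idea such cache attacks rest on (not the paper's actual attack; the cache size, victim and secret are all invented): the attacker fills the cache sets, the victim's secret-dependent table lookup evicts one of them, and seeing which set was evicted leaks the lookup index.

    CACHE_SETS = 64          # toy cache with 64 sets the attacker can prime and probe
    SECRET_BYTE = 0x3A       # stand-in for a key-dependent table index in the victim

    def victim_lookup(secret):
        # The victim touches one cache set, chosen by its secret-dependent
        # table index, evicting whatever the attacker parked there.
        return secret % CACHE_SETS

    def prime_probe_round(secret):
        primed = set(range(CACHE_SETS))         # PRIME: attacker fills every set
        primed.discard(victim_lookup(secret))   # victim runs and evicts one set
        # PROBE: sets that now miss reveal where the victim's lookup landed
        return [s for s in range(CACHE_SETS) if s not in primed]

    print(prime_probe_round(SECRET_BYTE))       # [58], i.e. 0x3A % 64 leaks through

A real attacker has to recover this from access timings through all the noise of a shared machine, which is what the paper is about, but the inference step is essentially the above.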

greg December 2, 2010 10:42 AM

@David Thornley

It was 99.9% hype. Hell, it's not like these systems didn't have bugs in them anyway, and they didn't exactly have great uptime stats (systems that did typically didn't have Y2K issues in the first place). I was working on it too, and I was asked completely stupid questions, like "how do you even test for it?" when they had a completely mirrored test environment.

The vast majority of databases already used proper date fields (i.e. the "Unix Y2K" is in 2038, IIRC) and needed no changes. The few systems that did need a change were not very critical (a date printed wrong on a receipt).

There was the odd system that needed some fixing, but nothing that normal system maintenance and response wasn't adequate for. The worst that happened in one of our systems was that 20 tons of frozen chicken was ordered. But since orders over 1 ton have to be approved… no harm was done.

I reckon that about 2/3 of the companies I worked for (contractor; Y2K made me a lot of money) would have been just as well off with a Y2K rock of warding. About the only difference is that the rock would be cheaper, but it can't sign the Y2K compliance form.

It was a storm in a teacup. Much like claims that the internet will be "destroyed" by hackers.

Fortunately, even people who believed the hype of total anarchy didn't really behave that way. They didn't buy lots of canned food, and just partied like it's 1999 anyway.

Same here. No one will really do much anyway. Perhaps there is a little more to it. But you know what? The world won't fall apart because the internet doesn't work properly for a day or 3.

All in my experience and/or opinion, of course.

greg December 2, 2010 10:48 AM

@clive
I was not suggesting, or did not mean to suggest, that Snow was peddling something… But the newspapers/media… they have a lot to gain by over-dramatizing the news.

@wiredog
Not exactly matching the hype of the time. A web site with a wrong date?

People were asking me if it's safe to fly after Y2K, for goodness' sake… And no, spending lots of money as the reason nothing happened is no better a claim than that we don't have any terrorist attacks because of the TSA.

John N. December 2, 2010 11:42 AM

Re Y2K hype –

I recall some idiot claiming that the reason why we hadn’t found anything with SETI is that perhaps alien races were wiped out by the Y2K bug.

Bryan Feir December 2, 2010 12:03 PM

@greg:
Oh, nobody who actually understood what was going on truly believed that things like airplanes falling out of the sky would ever have happened. Civilization was never going to end. There was a lot of over-hype on the issue.

Unfortunately, the over-hype happened largely because many places refused to actually do anything about it until an actual panic was raised. And once it got to the media, well, it exploded even further, since most of the media are more concerned with selling ad space than with correctness, and panic always sells.

Besides, you want damaging… remember that a number of big companies had automated payroll systems that did date calculations for things like vacation allowances based on the difference between the current year and the year of hiring. A lot of those applications were written to policy that hadn't changed in decades, and were sitting on a back-room server which nobody really needed to touch anymore aside from using the regular tools to add or remove people from the database. Try telling a company that would have had to manually correct thousands of payroll cheques, because the vacation pay calculations were off, that there was no real problem.
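
A minimal sketch of how that sort of two-digit-year seniority arithmetic falls over at the rollover (the field names and vacation policy here are invented, purely illustrative):

    def vacation_weeks(hire_yy, current_yy):
        # Two-digit years: fine while both years fall in 19xx
        years_of_service = current_yy - hire_yy
        return 2 + years_of_service // 5     # say, an extra week every 5 years

    print(vacation_weeks(85, 99))   # 1999: 14 years of service -> 4 weeks
    print(vacation_weeks(85, 0))    # 2000: "-85 years" of service -> -15 weeks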

There were lots of Y2K discussions in http://catless.ncl.ac.uk/Risks/ including numerous cases where a fast-tracked Y2K fix caused far more problems than it solved.

Dirk Praet December 2, 2010 6:31 PM

IMHO any system, whether it be a car, a bank, a nuclear plant or a cyber infrastructure, can be subverted or will suffer a major breakdown at some point. Flaws can be exploited, people can make mistakes. One small failure can set off a massive chain of unexpected events. From that perspective, Brian is not saying anything new. That is why we put in place all sorts of controls to protect, secure and regulate things.

We can reduce, mitigate, defer or transfer risk. We can choose to accept it and devise plans to minimize and contain events. The point however remains that no system can or will ever be 100% secure. Even trying to get there may imply unacceptable costs or invasive measures that directly impact civil liberties such as privacy, freedom of speech or freedom of information. From where I'm sitting, there are limits to the extent I'm willing to sacrifice these in favour of real or perceived security, especially when the cure may prove to be worse than the disease.

The use of full-body scanners at airports is a perfect example. Besides the privacy and constitutionality issues, the risk of getting cancer from them is statistically about as big as that of your plane being blown up by terrorists. See http://www.boingboing.net/2010/11/19/odds-of-cancer-from.html .

So is the internet going to melt down any time soon for lack of security? I think not, but who am I? Arguably, Brian Snow and the hands that feed him may be looking at quite a few rewarding consultancy assignments from those buying into his thesis. As did most of us during the Y2K FUD. But it would definitely be wrong to just laugh it away without examining his arguments. The man is no greenhorn in the field, and I don't mind spending some CPU cycles reflecting on them. Before 9/11, even Hollywood scenario writers hadn't come up with a plot of terrorists flying planes into the WTC.

I guess what's important here is to keep an open mind. Unlikely does not mean impossible. Moreover, the ubiquity of electronic and digital technology, and the dependency of our socio-economic fabric on it, can hardly be overstated. Especially because in most industrialized societies it has made most of its manual, mechanical and analog predecessors obsolete, to the point that they are not even being used as backup systems any longer. In my opinion, this is sufficient reason to at least give the theory some good thought and debate.

AC2 December 3, 2010 12:09 AM

@Clive

“http://eprint.iacr.org/2010/594.pdf

And then realise what this does for a potential attacker running on the same Cloud Server you are running your data upto or down from even if hevily AES encrypted.”

Thanks for that, it was a good read… but I'm not sure how practicable their attack is, given that it is based, partly, on the following:

“Next, standard malware infection techniques can be used to compromise a non-privileged account and to deploy our
custom payload on the victim machine.”

The moment you have the capability to deploy executables to a production server, it's game over, no matter whether it's in the cloud or a private data centre.

Also, it is not sufficient to have the attacker just running code on the same cloud server… Most of these use virtualization (e.g. Amazon EC2 uses Xen), so the attack will not work if the attacker and victim are in different EC2 instances.

Even if they manage to get it running on the same CPU (obviously in the non-memory-shared mode), there isn't a way for the attacker process to infer memory accesses from the CPU cache, because there would be a shedload of other processes using that CPU, not just the attacker and victim. And since they're running in different virtualised servers, their CFS tricks to ensure that the attacker process runs immediately after the victim won't work either…

Unless the attacker code is placed in the same EC2 instance, i.e. the same server, as the victim, which is game over anyway, whether in the cloud or in a private data centre…

1915bond December 3, 2010 2:17 AM

As a malware dissector for close to 20 years, all I can say is, "it's all in the payload." Brian's prescient meltdown scenario is inevitable, and we're unconcernedly sneaking up on it in short order.

Calgarian December 3, 2010 6:00 PM

Let's keep some perspective here… the article was written and published by the Calgary Herald, after all — it's not as if this newspaper is recognized for hard-hitting, in-depth reporting or journalistic integrity…

George December 3, 2010 9:54 PM

There is potential for large-scale failure of the internet in a number of ways. Currently we have no real solution to stop something like a DoS attack. It requires getting the attacking systems shut down or removed from the network. There is no protocol in place that allows cutting off the bandwidth of incoming traffic in such a manner, other than disconnecting the attacked system from the network itself.

If a number of parties started attacking each other with large-scale attacks, we could see traffic issues that persist for a long time, making parts of the internet inaccessible for at least a temporary period.

If such a protocol were ever developed it would be even more dangerous than the large-scale DoS attacks themselves. It would allow limiting others' access to and use of the network, allowing far too great a degree of government control.

bd December 4, 2010 1:37 PM

Actually, it was an excellent talk. The article seems to be little more than a single sentence lifted from the hour-and-a-half long seminar, so of course it lacks context.

(Technically, the quote was accurate, in that I do remember Brian saying it. I don't recall the precise context.)

Summary: terrible article, great talk.

Clive Robinson December 5, 2010 12:01 PM

In the process of hunting around for info (+ or -) on Brian's hypothesis, I came across a number of interesting titbits…

One that has a few interesting links and quotes is,

http://sciencecareers.sciencemag.org/career_magazine/previous_issues/articles/2010_12_03/caredit.a1000115

@ Dirk Praet,

“IMHO any system, whether it be a car, a bank, a nuclear plant or a cyber infrastructure can be subverted or will suffer a major breakdown at some point. Flaws can be exploited, people can make mistakes. One small failure can set off a massive chain of unexpected events.”

The recent WikiLeaks disclosure of US diplomatic cables has one about the Brazilian blackout in 2009; from what has been said, it was not a cyber attack but one of those unfortunate cascades of events:

http://m.krebsonsecurity.com/2010/12/cable-no-cyber-attack-in-brazilian-09-blackout/

My original thoughts on hearing about the blackout were "fragile network" and "cascade failure", as has been seen a number of times in Europe.

@ George,

"If such a protocol were ever developed it would be even more dangerous than the large-scale DoS attacks themselves. It would allow limiting others' access to and use of the network, allowing far too great a degree of government control."

In any limited channel you will always have rate-limiting issues no matter what you do. As you probably know, the most common method currently is to use some level of Quality of Service (QoS) limiting and thus prioritise traffic accordingly (usually by type).

But this fails if either the highest-priority traffic reaches the channel limit, or traffic that merely looks like the highest priority reaches the limit.

The problem is then to distinguish between genuinely high-priority traffic and traffic which is not, but looks like it may be. Beyond fairly simple rules, it is usually only the data sink somewhere at the receiving end of the channel that can decide, not the transmitting end; and further, it cannot decide until it has received the traffic…

Which, unless the data sink is dynamically pushing rule changes back towards the data source, makes it effectively an unsolvable problem.

Which is why we tend to over-supply capacity, use simple metrics for QoS to decide where problems persist, and put up with the consequences whatever they might be.

However, there is another way to do it, with bidding protocols. The data source puts a value into the packet indicating how important it thinks its traffic is. At the transmission end of the channel, the channel transmitter selects the highest current bids and sends them through the channel (this is the principle behind "Flash" messages in military networks).

Obviously there has to be some way to ensure that bids are kept realistic, and the usual way in commercial networks is to put a charge against each bid level, thus using the basic economics of supply and demand.

A correctly implemented combination of QoS and bid systems will give the best benefit to both the data source and sink in commercial systems where capacity on channels is limited. However, it becomes very problematic when multiple or redundant links are involved, as there are opportunities to cheat.
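
As a purely illustrative sketch of the bidding idea (the class name, bids and packets are made up, not any real protocol), the channel transmitter can be thought of as a priority queue keyed on the bid, always sending the highest current bid first:

    import heapq

    class BidScheduler:
        """Toy bid-based channel scheduler: highest current bid is sent first."""
        def __init__(self):
            self._queue = []
            self._seq = 0                      # tie-breaker keeps equal bids FIFO

        def submit(self, bid, packet):
            # heapq is a min-heap, so negate the bid to pop the highest bid first
            heapq.heappush(self._queue, (-bid, self._seq, packet))
            self._seq += 1

        def transmit_next(self):
            if not self._queue:
                return None
            _, _, packet = heapq.heappop(self._queue)
            return packet

    sched = BidScheduler()
    sched.submit(bid=1, packet="routine telemetry")
    sched.submit(bid=9, packet="flash message")
    sched.submit(bid=5, packet="billing batch")
    print(sched.transmit_next())               # "flash message" goes through first

The per-level charge mentioned above is what stops every source from simply bidding the maximum.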

The cost of implementing any system beyond minimal "traffic type" QoS becomes disproportionately expensive very, very quickly, and beyond a certain point becomes a bottleneck in and of itself. And this is the real issue: with wide-area, high-bandwidth networks, the only place you can effectively filter traffic is at a node on the edge, just before the data sink or just prior to the data source, which means the job effectively falls to the ISP to perform.

Which probably accounts for why various people want deep packet inspection at the ISP, because it then becomes fairly simple to monitor or censor the packet stream at that point, for just the reasons you alluded to.

JM December 6, 2010 4:00 PM

I was at the talk; it was hardly a sensationalistic talk and it seemed in line with previous papers authored by Mr. Snow.

In particular, I think the meltdown that was implied was that so long as security continued to be an afterthought in product development, and so long as products continued to get more and more complex with more and more features — it would be inevitable that things would come crashing down somewhere, somehow, via some talented hack or malware.

Brian Snow November 11, 2011 4:03 PM

Sorry I am so late catching up on this Blog;

I am the Brian Snow who gave the speech.

The correct quote was “We COULD face a cyber meltdown in as little as long weeks to short months”. Note that I did not say we WOULD. And I did stress COULD.

I went on to say the only thing holding it off is the INTENT of our opponents. If they want it to happen, it will; we could not stop it.

We continue to function not due to the quality of our security posture, but due to the SUFFERANCE of our opponents. This is not a good position to be in!

So far, it has not been worth their while, since disabling us would also damage the world economy enough to cause problems for the opponents as well.

So we toddle on…

Clive Robinson April 16, 2012 6:31 PM

@ Brian Snow,

"Sorry I am so late catching up on this Blog"

In the UK we have a saying “better late than never” 🙂

More seriously though, as you say:

"We continue to function not due to the quality of our security posture, but due to the SUFFERANCE of our opponents"

Which has two obvious aspects that for some reason get ignored in most discussions.

The first is that "we hold the door open to our opponents": it is, after all, "our resources" they are using "to attack us", not their own. So the only costs they really have are the initial attack-tool development and developing their own resources to make use of their "ill-gotten gains", irrespective of whether they are just petty criminals or state-level intel agencies.

The second is that we effectively "train our opponents", or "force them to evolve their attacks". Back in the late 1990s I was trying to convince (unsuccessfully) various people that small incremental improvements in security were actually counterproductive, in that we were in effect training the attackers to be more tenacious.

As a comparison to this in the physical world, few people learn to climb mountains by jogging up a near-vertical 20,000 ft rock face. They generally learn first to walk, then run on the flat, and then progress to walking up hills and climbing over small objects, and thus up small cliff faces. That is, they master a succession of small challenges and build the "muscle & skills" to master a series of further incremental steps in difficulty until the 20,000 ft rock face is effectively "just the next step".

Part of this "next step" issue is "backwards compatibility" and "hidden fallback". Backwards compatibility in effect means we have to carry forward design choices that are now known to be insecure; worse, we rarely design systems that make it explicit which "backwards compatibility" features are actually in use at any given point in time. Thus we actually make life much easier for our opponents, as 80-90% of a new system is actually the old system with all its failings being "dragged along for the ride".

As you say,

"So we toddle on…"

To which I would add: "with our heads in the sand, and thus forced to walk repeatedly in circles, treading the same old ground without progressing".
