Schneier on Security
A blog covering security and security technology.
October 19, 2012
Stoking Cyber Fears
A lot of the debate around President Obama's cybersecurity initiative centers on how much of a burden it would be on industry, and how that should be financed. As important as that debate is, it obscures some of the larger issues surrounding cyberwar, cyberterrorism, and cybersecurity in general.
It's difficult to have any serious policy discussion amid the fear mongering. Secretary Panetta's recent comments are just the latest; search the Internet for "cyber 9/11," "cyber Pearl Harbor," "cyber Katrina," or -- my favorite -- "cyber Armageddon."
There's an enormous amount of money and power that results from pushing cyberwar and cyberterrorism: power within the military, the Department of Homeland Security, and the Justice Department; and lucrative government contracts supporting those organizations. As long as cyber remains a prefix that scares, it'll continue to be used as a bugaboo.
But while scare stories are more movie-plot than actual threat, there are real risks. The government is continually poked and probed in cyberspace, from attackers ranging from kids playing politics to sophisticated national intelligence gathering operations. Hackers can do damage, although nothing like the cyberterrorism rhetoric would lead you to believe. Cybercrime continues to rise, and still poses real risks to those of us who work, shop, and play on the Internet. And cyberdefense needs to be part of our military strategy.
Industry has definitely not done enough to protect our nation's critical infrastructure, and the federal government may need more involvement. This should come as no surprise; the economic externalities in cybersecurity are so great that even the freest free market would fail.
For example, the owner of a chemical plant will protect that plant from cyber attack up to the value of that plant to the owner; the residual risk to the community around the plant will remain. Politics will color how government involvement looks: market incentives, regulation, or outright government takeover of some aspects of cybersecurity.
None of this requires heavy-handed regulation. Over the past few years we've heard calls for the military to better control Internet protocols; for the United States to be able to "kill" all or part of the Internet, or to cut itself off from the greater Internet; for increased government surveillance; and for limits on anonymity. All of those would be dangerous, and would make us less secure. The world's first military cyberweapon, Stuxnet, was used by the United States and Israel against Iran.
In all of this government posturing about cybersecurity, the biggest risk is a cyber-war arms race; and that's where remarks like Panetta's lead us. Increased government spending on cyberweapons and cyberdefense, and an increased militarization of cyberspace, is both expensive and destabilizing. Fears lead to weapons buildups, and weapons beg to be used.
I would like to see less fear mongering, and more reasoned discussion about the actual threats and reasonable countermeasures. Pushing the fear button benefits no one.
This essay originally appeared in the New York Times "Room for Debate" blog. Here are the other essays on the topic.
And there's money in contrarian positions. Ask any journalist.
a) Is that supposed to be an argument about this topic? If so, how, exactly?
b) Most of the money in journalism goes to people who toe close to the establishment or corporate line - ask Glenn Greenwald.
Regarding infrastructure: I worked for a large infrastructure OEM. They are aware of the DHS mandate against using DES for password storage, and they are aware of the vulnerabilities of DES (I personally made sure of both), yet they continue to use DES hashes for password verification. Their reason is simple: if the customer doesn't ask for it, they are not going to spend the money on development.
As an engineer, I find it is frequently difficult to convince management to spend the extra man-weeks or man-months required to plug security holes that customers have not actively complained about.
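For what it's worth, replacing a legacy DES-crypt scheme (which truncates passwords at eight characters and uses a tiny salt) with a salted, iterated hash takes only the standard library. This is a minimal sketch, not any vendor's actual code; the function names and iteration count are illustrative:

```python
import hashlib
import hmac
import os

ITERATIONS = 200_000  # illustrative work factor; tune to your hardware

def hash_password(password, salt=None):
    """Derive a salted PBKDF2-HMAC-SHA256 hash; returns (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, ITERATIONS)
    return salt, digest

def verify_password(password, salt, expected_digest):
    """Recompute the hash and compare in constant time."""
    digest = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"),
                                 salt, ITERATIONS)
    return hmac.compare_digest(digest, expected_digest)
```

Unlike DES crypt, the full password contributes to the digest, each stored hash carries its own random salt, and the iteration count makes brute-force guessing expensive.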
Move over "cyber Armageddon" -- how about "cyber Hiroshima" or "cyber Dresden"?
How about "cyber Chernobyl"? I'm surprised nobody brought that one up yet.
No squid thread up yet, and I will be out the rest of the day, so ...
Wireless meters tell snoopers when you are not home (the same wireless interface the electric company's meter readers use is the vulnerability)
I just saw a "Cyber Pearl Harbour" nonsense headline in a Canadian newspaper last week. You should see the fear mongering by our dictator up here; it's comical. They are blowing 200 million on praetorian-guard cyber security, citing all these US Pearl Harbor articles, but I fear the real reason is so they can fortify the PM's office against protests when they pass all sorts of new spying laws next year.
They already tried to pass these citing "cyber 9/11" and the backlash was huge, but these laws aren't going away, just delayed.
200 million seems absurd when we have free OpenBSD and a lot of its developers up here to provide real security, instead of blowing millions on licensing fees.
The Canadian CSIS ultra-secure mega spy fortress being built right now was also green-lit through fear mongering, because somebody in government clicked an email and Chinese spies got in. In the end this will be hundreds of billions and draconian laws, all because some idiot clicked an email while using insecure proprietary software.
What about "Cyber Mayan Prophecy"?
Industry has definitely not done enough to protect our nation's critical infrastructure, and federal government may need more involvement.
I would agree that in recent times "Industry has most definitely not done enough..."; in fact it's done the exact opposite of what you would expect, in that it has very deliberately weakened the critical infrastructure to everybody's long-term detriment. This includes the industry itself, as has been demonstrated a number of times.
However, whilst this does not come as a surprise to those who have been around the block a few times, I don't really think it's due to,
the economic externalities in cybersecurity
The rot started long before the Internet became much of a consideration, if any, to the utility organisations.
It was, and very much still is, short-term profits over long-term stability; the Internet was just one of many vehicles to reduce costs. For instance, the Internet has had no bearing on the repeated cutbacks in basic maintenance, or on the failure to increase capacity margins with increasing demand. All of which started with the processes that led on to what we now call "deregulation", back in the (less than) happy times of Ronnie "RayGun" and "Mad" Maggie Thatcher, when Greed, Conspicuous Consumption, and "Live for the moment, f**k tomorrow" was the viewpoint of what became the "City Slickers". They pressed ever onwards with the faux arguments of the "Free Market Mantra": "The Market Knows Best" and "Free Markets are Efficient Markets". The reality of "free markets" is, as expected, a race for the bottom via "efficiency", which in reality is cost cutting of everything without the counterbalancing innovation or investment that is required for stability.
In short, the notion that "National Infrastructure is endangered by lack of Cyber-Security" is nonsense. The infrastructure functioned quite well and was stable long before the deregulation of liability in the 1980s (when there was no public Internet) and its attendant race for the bottom. Further, removing the cyber component of the national infrastructure will not make it any less prone to failure. As has been seen with all the supposed "cyber-attacks" talked up by the Cyber-War-Hawks that have resulted in infrastructure going down, cyber-security had nothing to do with it; fragility due to poor maintenance and non-investment was the root cause.
A 7-year-old kid could prevent a Stuxnet infection. The problem is not weak systems; the problem is idiot people.
It's easy to imagine the sheer number of malware authors overwhelming anti-virus capabilities, especially as the computing platform of choice moves from the desktop OS to cloud-plus-mobile web platforms.
The result might be more like an environmental disaster than a single catastrophic event or attack. "Defending" against the threat requires a much broader and more fundamental change to how new systems get built and deployed.
For example, there was a time when spam overwhelmed spam filters, and I spent most of my email time deleting spam. Fortunately, spam filtering has improved, but the arms race continues, and new kinds of filter-avoiding spam continue to appear.
"Cyber-global warming" might be a more appropriate metaphor, where we spend a few decades arguing whether there really is a problem.
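The filtering side of that spam arms race can be illustrated with a toy naive Bayes classifier, the technique behind many early statistical spam filters. This is a sketch for illustration only; the training data and scoring convention are invented:

```python
import math
from collections import Counter

def train(messages):
    """Count token frequencies in labeled messages: [(text, is_spam), ...]."""
    spam, ham = Counter(), Counter()
    n_spam = n_ham = 0
    for text, is_spam in messages:
        (spam if is_spam else ham).update(text.lower().split())
        if is_spam:
            n_spam += 1
        else:
            n_ham += 1
    return spam, ham, n_spam, n_ham

def spam_score(text, spam, ham, n_spam, n_ham):
    """Log-odds that text is spam, with add-one smoothing; > 0 suggests spam."""
    score = math.log((n_spam + 1) / (n_ham + 1))  # prior odds
    s_total, h_total = sum(spam.values()), sum(ham.values())
    for tok in text.lower().split():
        score += math.log((spam[tok] + 1) / (s_total + 2))
        score -= math.log((ham[tok] + 1) / (h_total + 2))
    return score
```

Spammers respond by padding messages with "hammy" tokens and misspelling trigger words, which is exactly why the arms race never ends.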
Michael Mimoso on threatpost has an interesting editorial about the relative economic cost of foreign cyberattack vs. cybercrime, starting from Panetta's "Cyber Pearl Harbor" comment.
The notion of "Cyber Pearl Harbor" is a silly one. It's just that the "ships" of anyone's cyber "navy" aren't capital assets that cost billions or take years to replace.
I'm concerned about a "Cyber Deepwater Horizon". It might not matter much to a bank that its cards get hacked; they have insurance. Users have to get new cards, and that might make them grumpy, but all the banks are in the same boat, so customers can't really switch to using cash on the Internet.
An Internet without credit cards, on the other hand, would be a huge impact to a lot of users.
Oracle's CSO - ex military herself - has some words on the topic:
"To summarize, I have plenty of concerns about the degree to which people rely on systems that were not designed for their threat environments, and/or that embed a systemic risk ... But I am sick to death of “DPH” and all its catchy variants"
I think that the first step is to get away from the "war" metaphor.
If anything it more closely resembles vandalism.
But ranting against vandalism does not get you multi-million dollar government contracts. So don't expect any rationality there.
And once you've allowed irrationality into the discussion, the entire subject becomes tainted by it.
"Hackers can do damage, although nothing like the cyberterrorism rhetoric would lead you to believe."
There has not been a single death of an otherwise healthy person due to any "cyber" attack. So that is correct.
"Fears lead to weapons buildups, and weapons beg to be used."
Again, the "war" metaphor is not accurate.
The "weapons" can be obsoleted by a software patch. Or just following basic security protocols.
But that is boring and does not generate revenue.
It seems physical security needs to up its game.
Rather than just a "padlock", they need to impress on people the dangers of a ninja attack and the ninja threat, which can only be countered with a counter-ninja physical access system, also known as a padlock.
Let's say that for some reason the USA experiences a cyber 9/11 in the near future; what's stopping them from reacting (again) with TSA-like bravura?
I'm glad to see an analysis that considers economics; however, I question your assertion that the economic externalities in cybersecurity are so great that even the freest free market would fail. What externalities? Unless we're talking about a physical threat that doesn't involve computers except as the means of carrying it out (say, a hacker ordering a computer to open the floodgates of a dam), I would expect any collateral damage from a cyberattack to be quite localized, as it was with Stuxnet.
In any case, if there are major economic externalities, there is a straightforward way to have the market solve them. Just impose a Pigouvian tax on whoever could have prevented the damage but didn't. If it isn't clear who that should be, let the liability lawyers duke it out.
Be careful when you ask for more involvement -- you're likely to get more laws instead.
Made by people who don't understand, and refuse to understand.
You'll get HIPAA, SARBOX, and TSAs devoid of accountability. You'll see things like PCI compliance requiring credit cards be encrypted, but all of the encryption will be controlled by an unpatched domain controller holding the keys.
If you want real improvement, the government needs to drive the market.
When the NSA released SELINUX, nobody trusted them (as well we shouldn't with their history and the absence of accountability when they've engaged in what amounts to national self-espionage) -- but we could still learn from it.
We need the government to /drive/ this by example and meaningfully assist in bearing the market costs of developing security and competence.
We don't need laws saying "all software must be fuzz tested" or "all passwords shall be encrypted with a..."
We need them to start driving the market by doing things like demanding all new contracts negotiate "All commercially purchased software shall have a 10% refund issued on all purchase and licensing costs, and a 25% refund on all consulting fees for each remotely exploitable vulnerability, or any combination of vulnerabilities that ultimately permit remote privilege escalation".
Maybe it can be written better, but this is the only type of accountability that will actually result in a chance of substantial improvement.
The commercial market will begin to compete with the open market.
Then we can finally talk about out of box configuration issues, backdoor accounts, and the fact that a typical administrator shares his passwords with a half dozen people, because their average boss doesn't understand or believe in privilege separation.
"Pushing the fear button benefits no one."
Unfortunately, not so: it benefits those peddling solutions that purport to address those fears. Whip up fear of dropbear attacks, you can make your fortune selling dropbear repellent, dropbear anti-venom etc; brand your computer network overhaul as "cyber-warfare precautions", it'll be better funded by politicians than "security improvements".
...bottom line: Since it's difficult for the MPAA and RIAA to find and prosecute illegal downloaders (not to mention time-consuming, economically challenging, and legally challenging), they have decided to go through the ISPs to find the biggest offenders. Although this is nothing new (they have been trying this method for several years), they're rolling out their policies in a few weeks.
The interesting/amusing part of Panetta's speech in New York is that he had to invoke the specter of Shamoon--an exploit against management computers at Saudi and Qatari energy companies--to push the cyberfear button. The correct cyber-anxiety analogy for attacks against physical infrastructure is Stuxnet--but of course we can't invoke it because we unleashed it.
I agree; between the FUD and the copyright agencies, it seems we have the perfect storm of 1984 BS. I rented a movie the other day and the stupid "copyright infringement is a crime" screen had a friggin' DHS logo on it!
There will never be any real cyber-security as long as the Pentagon is spending billions to break other nations' infrastructure. Stuxnet is just the tip of the iceberg and only one example of something that went public. Their ultimate goal (as they have stated publicly) is to have the ability to get root access on any machine running any OS anywhere in the world.
And if you think this won't be used against Americans, you've got another think coming. Freedom from eavesdropping is impossible to achieve with crypto if they control the hardware.
This is why I think in the future many people will abandon the Internet for any serious work and will simply use it for Hello Kitty or watching YouTube. It just cannot be trusted for anything else, and this will become more and more apparent in the years and decades to come.
Bingo, that's actually what I do now (minus the Hello Kitty thing :). Plus you don't reveal your knowledge. Keyloggers give me nightmares. As cool and applicable as crypto is, I have decided to "start over" and get back to my roots, which is attempting to design my own PCBs, even though components may be poisoned and I can't carry out the entire process on my own. Then maybe branch over into software (though the code drives me nuts). It's something you have to learn on your own, though some guidance is really nice. Bruce got a physics degree first, then a comp-sci (probably more crypto-oriented) degree. It's hard, though, and in so doing I've really come to appreciate some of the tech today; mind-boggling...
Bruce, I'm surprised... With all your experience working near gov agencies, you trust THEM to make it better????!!!?!@#$#@
From all my experience working in cyber security for and with the gov, in the .mil and civilian side, I have come to find this statement to be a NATURAL LAW: "There is nothing so bad that the government won't make it worse." They will, they're experts at it.
To make cyber Armageddon a reality, all we need is to do is get the government, or the NSA (HA! cough, sputter!) involved.
Many problems arise from conflating cyber crime and acts of war. For an insightful discussion of why this distinction matters, see the excellent set of papers in: Information Strategy and Warfare: A Guide to Theory and Practice edited by John Arquilla and Douglas Borer.
I admire the way you assail the alleged hyperbole associated with the government's position on cyber security. Then, without pause, you attribute Stuxnet, without any basis in fact, to the US and Israeli governments. While I'm all for a good debate, you can't argue both sides in good faith. The very existence of Stuxnet shows how serious the situation is.
The cyber threat is real, unconstrained, and potentially catastrophic to the citizens of the United States. It's not Armageddon or Pearl Harbor; what it is... is Vietnam: a battle in the trenches against an invisible, motivated, and often unknown enemy that will drag on in perpetuity.
The United States government has one responsibility (IMO), and that's safeguarding its citizens. We are no longer subject to only traditional threats to our way of life. The threat picture has changed and our attack surface has expanded exponentially. To ignore these facts would be folly.
Has there been any sort of development of a language to describe a secure system? I'm thinking of something like the effort to prove seL4 to be a mathematically secure kernel, but in a format that doesn't require a team of PhDs to work with it.
If there was such a language, I'd love to see the language evolution that follows as we strive for more capability.
When A annoys or injures B on the pretense of saving or improving X, A is a scoundrel.
There have been various projects to develop a secure programming language. One of the public projects that I am aware of was the BitC project headed by Dr. Jonathan Shapiro.
It was developed as part of the Coyotos microkernel project. Formal verification was a big part of it.
Shapiro got hired away by Microsoft, so he abandoned the project for a while. Then he left Microsoft, so I don't know the status of the project now.
When you say,
Has there been any sort of development of a language to describe a secure system?
are you talking about a formal verification language or a programming language?
On a slightly different note, one of the current problems is that those developing verifiable programming languages are not playing the same game as the secure-OS guys. It's a bit like American football, rugby, and Australian rules: from a distance they appear to be very similar; close up, however...
One problem is that those designing programming languages tend to view data in abstract ways and likewise abstract the way data are dealt with. However, when you are designing an OS, the last thing you need or want is abstraction of data in any way, shape, or form.
Whilst this is a resolvable issue, it throws up other problems, one of which is the real killer: code reuse. Abstraction methodologies have subtle nuances that affect the way a programmer goes about coding up the way data is handled. If you want to take a body of code written in, say, C++, and your language does not fully support those nuances, then transcoding the code either automatically or manually is not going to happen, so no practical code reuse...
The problem is that most object-oriented languages have nuances that have become the staple of programs, and these nuances do not lend themselves to the current methods of providing "provable security" via formal or other methods.
Whilst it may be possible to develop a secure programming language that allows many of the abstract benefits of objects, they won't be the same flavour of objects programmers are addicted to, because of the nuances. Oh, and the extra restrictions required to ensure a good fit with a secure OS are going to cause extra pain, which will severely clip your workaday code cutters' wings...
Thus the benefits will be seen as "too small to justify the change", not just by the programmers but by their managers; and the managers' managers who interface with the business will regard the lack of off-the-peg code reuse as a cost not just too far, but so far over the horizon as to be dropping off their personal flat world.
Yeah, and the threat keeps changing: first the Chinese, then Anons, now Iran. I expect this from any administration, though; I don't think anyone who wants power will be able to keep their hands off the Internet.