July 15, 2002
by Bruce Schneier
A free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography.
Back issues are available at <http://www.schneier.com/crypto-gram.html>. To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to firstname.lastname@example.org.
Copyright (c) 2002 by Counterpane Internet Security, Inc.
There's a whole lot of embedded control systems in our society, controlling things as diverse as vending machines and automobiles and power plants, and they've been designed with not a whole lot of security.
Actually, they've mostly been designed with no security. And that's not a good thing.
These are distributed control systems (DCS), or supervisory control and data acquisition (SCADA) systems. The simplest ones just carry measurement data. More complicated ones throw railway switches, open and close circuit breakers, and adjust valve flow in lots of different pipelines. The most complicated ones control devices and systems at an even higher level.
For the most part, these systems have been obscure and isolated -- this is why their designers never bothered with security -- but more and more they're being connected to the Internet. And the fear is that now they can be taken over by hackers, criminals, or (gasp!) terrorists.
This has been true for decades now, but the War on (Some) Terrorism has brought this into the news. Many are worried that some terrorist with a laptop in Peshawar can open the floodgates of a dam in the United States, or shut down the American power grid. It's a frightening prospect.
And certainly the threats are real. These systems can be successfully attacked. And given the sheer complexity of some of the systems being controlled, catastrophic failures are certainly possible.
But I think they're unlikely. First, as insecure as the systems are, it's hard to hack in and do maximum damage. It's probably easy to hack in and stumble around until something breaks, but that's not nearly as spectacular. For once, obscurity is working in our favor; the simple facts that the commands are arcane, the effects of individual changes are not obvious, and there are no readily available manuals make the system more secure.
Second, low-tech terrorism is much more reliable, and much more effective, than high-tech. While these threats are real, I rate them as lower than explosives or lunatics with automatic weapons. Sure, opening sewage floodgates into the river will make headlines, but bombing one of the three water tunnels into Manhattan will do much more damage.
The real threat here is the remote attacker. I think the likely scenario is that some terrorist-wannabe -- not a real terrorist but someone who reads about terrorism in the press and is sympathetic -- in some random country will try to attack infrastructures this way. They'll break in, and they'll do some random damage. It won't be spectacular, but it will be successful.
The solution is threefold. One, keep critical DCS and SCADA systems off the Internet. Two, fix the protocols to add security. And three, don't panic about the threats; the risk isn't that great.
Counterpoint: No, we're not.
An actual attack:
Crypto-Gram is currently in its fifth year of publication. Back issues cover a variety of security-related topics, and can all be found on <http://www.schneier.com/crypto-gram.html>. These are a selection of articles that appeared in this calendar month in other years.
Phone Hacking: The Next Generation:
Full Disclosure and the CIA:
Security Risks of Unicode:
The Future of Crypto-Hacking:
I only need to quote from the press release: "Combining chaos mathematics and computer science, the Danish company Cryptico has developed a new breakthrough encryption algorithm, which is superior to all existing solutions on the market. The company's CryptiCore (tm) product is able to encrypt at a speed of 1Gbit/second, which is between 5 and 10 times faster than other algorithms. The company has filed extensive patent applications on the technology."
And, by the way, "The technology is being backed up by internationally recognized experts." No names were provided, of course.
I am continually surprised that people still fall for this stuff.
The big news is Microsoft's Palladium system. I know I need to write about it, but I just didn't have time this month. I'll work on it for next time. For now, I leave you with my three main questions. One: Security for whom? It looks like this system is more about security for Microsoft and Disney than security for the owner of the computer. Two: Does Microsoft realize that fancy crypto hardware doesn't automatically fix software bugs? Do they remember the bugs that plagued their last attempt at code signing, ActiveX? Three: What are the antitrust issues surrounding taking public protocols and replacing them with Microsoft-owned ones? Still, there are a lot of really good ideas in Palladium, if we can ensure that they're used in the right ways.
The US builds and launches expensive spy planes, and then shares the results with anyone who cares to watch.
Thoughts on terrorism:
Interesting Kazaa security issues. Guess what? Many people don't install Kazaa correctly, and inadvertently share many personal files.
"Technologies to Secure Federal Buildings." A long and interesting GAO report.
Hackers breached the California government's network's security, and, for a month, had unfettered access to the personal information of 265,000 state workers. Most disturbing is a statement from the California governor's office: "our security is not that bad and besides, this kind of thing happens all the time." Geez, people. Take responsibility for your own network.
U.S. Army Web sites are no better, it seems.
Cyber security is third on the FBI's priority list, after terrorism and espionage.
Article on why PKI has failed:
Security risks of wireless networks and credit card numbers:
Terrorism might be the impetus for pervasive cyber insurance:
Excellent rebuttal to the paper by the Alexis de Tocqueville Institution, discussed last month, that claims open source software is less secure.
Another press release hoax:
Good essay on why software is so bad, and some solutions:
Scalpers hack World Cup reservation system:
Flaws in the FBI's process to deal with computer vulnerabilities:
More on software liability:
Insider attacks: employee saboteurs.
Pro-Islamic hacking groups: reality or PR hype?
A student claims to have revolutionized cryptography after watching a cartoon. (How come we never see articles about random people revolutionizing brain surgery?) At least the major press ignored this story.
The Motion Picture Association of America (MPAA) is trying to convince Congress to mandate a "Broadcast Flag," which computers and other hardware are supposed to recognize and refuse to copy. Read both the MPAA FAQ on the flag, and EFF's rebuttal to their FAQ:
Distributed denial-of-service attack taxonomy:
SIGINT, data mining, and traffic analysis in the drug war:
A German campaign against data retention:
The second quarter of 2002 was our best ever. More and more companies -- and more and more big-name companies -- are letting us monitor their networks. And more and more VARs are reselling Counterpane. We're by far the largest security monitoring company in the world, and we continue to grow.
A whole bunch more Counterpane resellers:
On June 13, McAfee issued a press release describing a new virus that affects JPEG files such as digital photographs. In typical ominous terms, the press release tried to scare us all into buying antivirus software and update subscriptions.
There are three important points to this story. The first is one that I've said before: there is no separation between data and program files. We've seen viruses that affect Microsoft Office data files, and viruses that affect Postscript files. A virus that can affect a JPEG file should be no surprise. Neither should a virus that infects XML, PDF, and a whole lot of other data formats. Just expect it. If it can't happen in the current incarnation of the format, it will happen in some future incarnation.
The second point is that this isn't an example of the above phenomenon. There is no executable virus hidden in the JPEG. Near as I can tell, Perrun isn't even a virus. It's a program file -- an EXE file -- that inserts code into JPEG files. Without the EXE file, nothing happens. It only works if you're already infected with an extractor that reads the code out of the images. This is worse than lame; it's stupid. I am stunned that any competent virus researcher considered this worth a second glance.
The final point is to notice how McAfee uses this to sow maximal fear. This is not a virus that is currently infecting computers. This is not a virus from the wild. This is nothing that is an immediate threat right now. According to the AP story: "McAfee researchers received the virus from its creator."
I have long suspected a cozy little link between virus writers and antivirus software makers. The latter certainly needs the former, both to keep viruses in the news and to provide a steady revenue stream from updates. And here's an example of them sharing information.
I don't think McAfee paid the virus writer for this new type of virus, just so they could scare everyone with it. But it wouldn't surprise me if there's some quid pro quo going on.
Beware viruses in data files. Buy antivirus software and keep it up to date. But beware FUD from antivirus manufacturers as well. In fact, buy your antivirus products from companies that don't issue these sorts of press releases. This annoying hype hurts the industry.
McAfee info (the press release is gone from their site):
From: GSCole <gscole@ark.ship.edu>
While the first purpose of intelligence activities might be prevention, the second purpose would be reaction. Whether or not the collection of intelligence information allows for the prevention of certain events, it should allow for the preparation of reactions to events, whether or not those events are foreseen. The arguments that many have made, relative to the so-called failure to connect the dots, appear to be specious, at best. Taken to their extreme, such arguments would have us believe that it is a waste of resources to engage in intelligence activities, because all possibilities cannot be predicted with a high level of accuracy.
There are numerous possibilities for unpreventable activities, including activities that might be driven by acts of nature; e.g., a storm that destroys important communication links. The possibilities of reaction are fewer in number, and highlight the importance of intelligence. Merely as an example, there are many ways that a communication link might be degraded, but a fewer number of sufficient ways to react. Redundancy is one means of reacting to a number of communication link failures, with redundancy being in the form of either multiple communication linkages of the same sort or the maintenance of a variety of different linkages. If I can't send e-mail, there are other means of transporting messages from me to someone else. If I can't predict a given failure, at least I have an opportunity to prepare a number of suitable reaction scenarios.
Importantly, to the extent that your reactions are predictable, you've eased the task of those who would attempt to exploit the limitations of your intelligence gathering activities. The development of multiple reaction scenarios would seem to be of a high order of importance, if one thinks that there are limitations in the intelligence gathering process. If there are limitations to a system's ability to prevent certain harmful activities, then it would seem that an emphasis on the need to react would be evident.
If there is a valid criticism of the play of events in recent months, it would appear to be on the preparation to react to tragedy. On-site, at the local level, it appears that the citizenry were prepared to do their best in reacting to events, as compared to the reaction scenarios that were demonstrated by those who were further up the decision-making chain, those who supposedly had greater intelligence resources at their command.
From: Chel van Gennip <chel@vangennip.nl>
I think you missed one point. Security is all about controlling damage at a reasonable cost. For terrorism, this control will be limited: the cost of 100% security is too high. You mentioned the effect on civil liberties and the money spent. At 100% security, this security will be the only thing we have.
One of the problems is that the enemy has no home base, so conventional approaches are of limited use. Another problem is the enemy's lack of respect for many values, including their own lives. These two factors make it difficult to fight this threat. I think a strategic approach should address these two problems: give the opponent something to lose, to make him vulnerable, and try to bring respect into their ideology by supporting specific parts of that ideology or by creating diversions.
So an approach, similar to the old Roman approach is needed: bread and games.
From: Mike <John.Michael.Williams@Computer.org>
Your remarks remind me of what I've been arguing for years, for those of my circle who had some reason to care: the fundamental flaw in US intelligence is entirely the legacy of J. Edgar, who consolidated all counterintelligence and counterterrorism in a law enforcement agency, partly as a power play, mostly as a patriot (nobody else cared in those days). Others could have their sideshows (e.g., Angleton), but the FBI had police powers; i.e., were armed, dangerous, and voraciously prosecutorial; scalps on the belt meant more to careers than national security, especially where very long-term strategies were needed.
It amazes me that no one has, in the accessible press as far as I can tell, compared the STRUCTURE of American intel to those of other, relatively successful, countries, including both the UK and Israel that you refer to.
The Israelis have Mossad, its high-profile "institute" for foreign intelligence, catching the flak while a zillion other entities, such as those that ran Pollard, can be called rogues if need be. It has Shin Bet for internal security, apparently and effectively separate, if usually well-coordinated. They probably have an equivalent of Special Branch, given their UK heritage.
The Brits have a tripartite arrangement: SIS (aka MI6, the Secret Intelligence Service, chartered for foreign intel only); the Security Service (aka MI5, chartered for domestic -- where Commonwealth intel falls, I don't know); and the Special Branch of Scotland Yard, the national law enforcement agency. Special Branch are specially indoctrinated and cleared law enforcement officers who may be brought into national security cases, foreign or domestic, when it is time to begin arrests and prosecution.
Neither MI5 nor MI6 has police powers. Wiretaps run on Brits by "UK government security personnel" without warrant, I have it from a public address by a senior UK bureaucrat, may not be used in court. I believe there are other equally significant evidentiary restrictions on material/evidence gathered by MI5.
Where are the scholars, the theorists? Where are the practitioners? Where are the operators? We need scholarly, comparative input to the current mess, before we let Homeland Security follow in the footsteps of old J. Edgar.
From: Mike Robinson <miker@sundialservices.com>
No one has articulated opinion-polling as a security issue, and I doubt they ever would or could ... but history (and even office politics) consistently tells us that some of the worst decisions ever made have come as a swift reaction by a politician or official who did not first surround himself with the best and most objective information available ... "and damn the pollsters." Particularly in times of war and crisis, decision making must come first and inevitably public opinion will follow. The most courageous and important decisions were not always popular in their own time.
From: Abdul Rehman Gani <abdulg@eastcoast.co.za>
I am sad to see the discussion has moved to how to improve intelligence gathering and analysis to prevent future terror attacks, almost as if they are inevitable. This, coupled with the major incursions being made into liberties in that most free of countries, leaves one contemplating a very bleak future. Have we here in South Africa been chasing fantasy when we dreamt of a democratic future that guaranteed human rights? Is peace and prosperity only attainable by less liberties, less freedom of expression, creating special provisions for certain races, and more intrusion in our lives? Isn't that something we just cast aside when Nelson Mandela became president in 1994?
Fortunately that is not what is needed. It just seems that way because the U.S. is surging forth with plans to treat the symptoms, not the problems. How many more despots and dictators will your government fund, arm and support before the American people realise that all this gnashing of teeth, continual terrorist warnings and billions spent on trying to monitor everything is not the way it needs to be, or indeed should be? How many more special interests will influence trade policy and so destroy young economies in the name of free trade?
Why not spend billions (if that much is required) in exporting democracy, implementing real fair trade, and rewarding real progress amongst the world's impoverished nations? America does not have to own everything.
There is a common view outside of America that holds that because American policy has a major influence in our lives, we too should have a say in who is to be the US president. That is unlikely to happen. All we can do is hope that you citizens of America will keep your eye on the ball. Don't let your government distract you from the real issues -- US foreign policy.
That means encouraging your government to play fair, so you can reap the rewards of the peace. So that you can say that yours is not just a mighty country, but a great one.
After 9/11 America should work to reduce the number of people who wish to do her harm, rather than continuing to sow hatred and then trying to watch them all. That is impossible, and in that scenario another successful attack is, sadly, inevitable.
From: "Lucky Green" <shamrock@cypherpunks.to>
Setting off a nuclear bomb in a shallow underground cavern or tunnel is a poor effect multiplier. Such a blast is just another variant of the generally dirty ground blast. As any first semester nuclear terrorist should know, the canonical way to increase the long-term impact of a nuke is by wrapping it in cobalt.
Physics sidebar: natural cobalt 59 will eagerly absorb the copious amounts of neutrons generated by the gadget. Once a neutron has been captured, the readily available cobalt 59 turns into cobalt 60, a highly radioactive substance commonly used to power medical irradiation devices. Cobalt 60 beta-decays with a half-life of a few years to nickel 60, which in turn will pretty much immediately release excess energy in the form of two high-energy gamma particles before turning into regular nickel. As a result of said gamma radiation, the area covered by the fallout would remain uninhabitable for as much as several decades.
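The "several decades" estimate follows from simple half-life arithmetic. As a sketch (assuming the commonly cited cobalt-60 half-life of about 5.27 years; the letter doesn't give a figure), activity drops to roughly a thousandth of its initial value only after ten half-lives, a bit over fifty years:

```python
# Fraction of cobalt-60 activity remaining after a given time,
# using the standard exponential-decay law N(t) = N0 * 0.5^(t / t_half).
# Assumption: Co-60 half-life of about 5.27 years.

CO60_HALF_LIFE_YEARS = 5.27

def fraction_remaining(years: float) -> float:
    """Fraction of the original Co-60 activity left after `years`."""
    return 0.5 ** (years / CO60_HALF_LIFE_YEARS)

for t in (5.27, 10, 30, 52.7):
    print(f"after {t:5.2f} years: {fraction_remaining(t):.6f} of initial activity")
```

Whether "uninhabitable" ends after two, five, or ten half-lives depends on the initial contamination level, but the exponential shape is why the letter's decades-long estimate is plausible.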
I first read about this method of dirtying up a nuke in a Superman comic book when I was six or seven years old. I will turn 40 this year.
I'd say the jig is pretty much up on how to extend the impact of a nuke over extended periods of time. There is no rational reason for a reporter today to hold back on the publication of such concepts. Unless America were to resort to banning libraries, undergrad textbooks on nuclear physics, oh, and yes, 30+-year-old children's comic books.
BTW, if you are at all interested in learning more about nukes, I highly recommend reading "The Curve of Binding Energy" by John McPhee. The book is very accessible and quite a page-turner. No prior knowledge required. The book even discusses taking down the WTC with a home-built nuke. Of course we now know that a nuke wasn't even required.
See Sir Arthur Conan Doyle "The Adventure of the Norwood Builder": "When those packets were sealed up, Jonas Oldacre got McFarlane to secure one of the seals by putting his thumb upon the soft wax. ... It was the simplest thing in the world for him to take a wax impression from the seal, to moisten it in as much blood as he could get from a pin-prick, and to put the mark upon the wall during the night...."
Granted that the gummy prints are a little more sophisticated, but what does one expect after 100 years!
The "real risks" you allude to are the merchants' (and possibly Citibank's), not mine.
I looked into Citibank's virtual credit card numbers a while ago and didn't find any incentive for me to use them. Yeah, the scheme is nifty. But it helps to protect Citibank and the merchants, not me. If my credit card number is stolen and misused, the fraudulent charges are not my loss to bear.
From: "Benjamin J. Tilly" <ben_tilly@operamail.com>
Gunnar Peterson [the author of a letter in the previous issue] correctly points out the development benefits of using SOAP. SOAP makes development easier. A good semantic model makes it easier to integrate intent through the design of the system, which makes it easier to design systems which are secure if the underlying layers work as you expect. (Which they may not -- see his note on Unicode.)
However, this completely misunderstands the nature of the threat he is worried about.
Security is not primarily a problem arising from the inability of a single project to be properly designed and implemented. Security threats arise because out of the many projects you have to put some trust in, some or many will be flawed and provide avenues in for potential attackers. Security threats need to be defended against by, among other things, some sort of automated monitoring which is largely independent of the (at least somewhat untrusted) application code.
Given that, consider the following observations:
- Applications built using SOAP will be designed as software today is already designed. Which means little understanding and awareness of security with plenty of opportunities for relatively inexperienced people to make well-known and basic mistakes.
- Even worse. When you simplify development, software companies come under pressure to cut costs by replacing competent developers with cheap ones. This leaves you with programmers who are less equipped to analyze security risks working with tools that are harder to analyze and understand.
- The entire SOAP/ASP model means that there is pressure to network-enable more programs of more kinds than was historically the case. Even if those programs were individually safer (a hypothesis that I am not inclined to accept), the larger number of programs with possible security holes increases the overall risk.
- SOAP implementations may have basic security mistakes that you do not realize when you commit to them. For a random example, Perl programs built with SOAP::Lite had (at least until recently) serious security vulnerabilities that you simply could not turn off. (It exported every possible function in the name of ease -- people could use this to make arbitrary system calls.) Such mistakes are easy to make, and most people don't see the consequences of such decisions.
- You have failed to address Bruce Schneier's basic point. It is important to have various kinds of monitoring and intrusion detection in place. However, the same semantics which assist a human developer hinder automated tools which simply cannot be equipped to understand that.
- There is a widespread misconception that encryption and security are the same thing. Adding "security" to SOAP through tunnelling over https does nothing to solve the security problems just described, but does render it mathematically impossible to develop auditing tools to catch intrusion problems.
For these reasons and more, I believe that Bruce is absolutely correct. SOAP will be bad for the security of our computer networks.
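The SOAP::Lite problem mentioned above -- exporting every reachable function by default -- generalizes to any RPC dispatcher. A minimal Python sketch (illustrative only; this is not SOAP::Lite's actual API, and the service and method names are hypothetical) contrasts open dispatch with an explicit allowlist:

```python
# Illustrative sketch of the open-dispatch mistake: a naive RPC server
# resolves whatever method name the caller supplies, while a safer one
# consults an explicit allowlist of exported methods first.

class Service:
    def get_time(self) -> str:
        return "12:00"

    def _internal_helper(self) -> str:
        return "internal -- should never be remotely callable"

EXPORTED = {"get_time"}  # the only methods we intend to expose

def dispatch_open(obj, name, *args):
    # Unsafe: any attribute the caller names is resolved and called.
    return getattr(obj, name)(*args)

def dispatch_allowlisted(obj, name, *args):
    # Safer: only explicitly exported methods are callable.
    if name not in EXPORTED:
        raise PermissionError(f"method {name!r} is not exported")
    return getattr(obj, name)(*args)

svc = Service()
print(dispatch_open(svc, "_internal_helper"))   # internal method leaks out
print(dispatch_allowlisted(svc, "get_time"))    # intended call works
try:
    dispatch_allowlisted(svc, "_internal_helper")
except PermissionError as err:
    print("blocked:", err)
```

The Perl vulnerability had the same shape: ease of development meant everything was exported by default, and safety was something you had to opt into -- exactly the backwards default the letter is complaining about.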
CRYPTO-GRAM is a free monthly newsletter providing summaries, analyses, insights, and commentaries on computer security and cryptography. Back issues are available on <http://www.schneier.com/crypto-gram.html>.
To subscribe, visit <http://www.schneier.com/crypto-gram.html> or send a blank message to email@example.com. To unsubscribe, visit <http://www.schneier.com/crypto-gram-faq.html>.
Please feel free to forward CRYPTO-GRAM to colleagues and friends who will find it valuable. Permission is granted to reprint CRYPTO-GRAM, as long as it is reprinted in its entirety.
CRYPTO-GRAM is written by Bruce Schneier. Schneier is founder and CTO of Counterpane Internet Security Inc., the author of "Secrets and Lies" and "Applied Cryptography," and an inventor of the Blowfish, Twofish, and Yarrow algorithms. He is a member of the Advisory Board of the Electronic Privacy Information Center (EPIC). He is a frequent writer and lecturer on computer security and cryptography.
Counterpane Internet Security, Inc. is the world leader in Managed Security Monitoring. Counterpane's expert security analysts protect networks for Fortune 1000 companies world-wide.