Entries Tagged "debates"


John Mueller on Nuclear Disarmament

The New York Times website has a blog called “Room for Debate,” where a bunch of people — experts in their areas — write short essays commenting on a news item. (I participated a few weeks ago.) Earlier this month, there was a post on nuclear disarmament, following President Obama’s speech in Cairo that mentioned the subject. One of the commentators was John Mueller, Ohio State University political science professor and longtime critic of the terrorism hype. (I recommend his book, Overblown.) His commentary was very good; I especially liked the first sentence. An excerpt:

The notion that the world should rid itself of nuclear weapons has been around for over six decades — during which time they have been just about the only instrument of destruction that hasn’t killed anybody. The abolition idea has been dismissed by most analysts because, since inspection of any arms reduction cannot be perfect, the measure could potentially put wily cheaters in a commanding position.

There may be another approach to the same end, one that, while also imperfect, would require far less effort while greatly reducing the amount of sanctimonious huffing and puffing we would have to endure.

Just let it happen.

While it may not be entirely fair to characterize disarmament as an effort to cure a fever by destroying the thermometer, the analogy is instructive when it is reversed: when fever subsides, the instrument designed to measure it loses its usefulness and is often soon misplaced.

Indeed, a fair amount of nuclear arms reduction, requiring little in the way of formal agreement, has already taken place between the former cold war contestants.

Posted on June 22, 2009 at 1:46 PM

Obama's Cybersecurity Speech

I am optimistic about President Obama’s new cybersecurity policy and the appointment of a new “cybersecurity coordinator,” though much depends on the details. What we do know is that the threats are real, from identity theft to Chinese hacking to cyberwar.

His principles were all welcome — securing government networks, coordinating responses, working to secure the infrastructure in private hands (the power grid, the communications networks, and so on), although I think he’s overly optimistic that legislation won’t be required. I was especially heartened to hear his commitment to funding research. Much of the technology we currently use to secure cyberspace was developed from university research, and the more of it we finance today the more secure we’ll be in a decade.

Education is also vital, although sometimes I think my parents need more cybersecurity education than my grandchildren do. I also appreciate the president’s commitment to transparency and privacy, both of which are vital for security.

But the details matter. Centralizing security responsibilities has the downside of making security more brittle by instituting a single approach and a uniformity of thinking. Unless the new coordinator distributes responsibility, cybersecurity won’t improve.

As the administration moves forward on the plan, two principles should apply. One, security decisions need to be made as close to the problem as possible. Protecting networks should be done by people who understand those networks, and threats need to be assessed by people close to the threats. But distributed responsibility has more risk, so oversight is vital.

Two, security coordination needs to happen at the highest level possible, whether that’s evaluating information about different threats, responding to an Internet worm or establishing guidelines for protecting personal information. The whole picture is larger than any single agency.

This essay originally appeared on The New York Times website, along with several others commenting on Obama’s speech. All the essays are worth reading, although I want to specifically quote James Bamford making an important point I’ve repeatedly made:

The history of White House czars is not a glorious one as anyone who has followed the rise and fall of the drug czars can tell. There is a lot of hype, a White House speech, and then things go back to normal. Power, the ability to cause change, depends primarily on who controls the money and who is closest to the president’s ear.

Because the new cyber czar will have neither a checkbook nor direct access to President Obama, the role will be more analogous to a traffic cop than a czar.

Gus Hosein wrote a good essay on the need for privacy:

Of course raising barriers around computer systems is certainly a good start. But when these systems are breached, our personal information is left vulnerable. Yet governments and companies are collecting more and more of our information.

The presumption should be that all data collected is vulnerable to abuse or theft. We should therefore collect only what is absolutely required.

As I said, they’re all worth reading. And here are some more links.

I wrote something similar in 2002 about the creation of the Department of Homeland Security:

The human body defends itself through overlapping security systems. It has a complex immune system specifically to fight disease, but disease fighting is also distributed throughout every organ and every cell. The body has all sorts of security systems, ranging from your skin to keep harmful things out of your body, to your liver filtering harmful things from your bloodstream, to the defenses in your digestive system. These systems all do their own thing in their own way. They overlap each other, and to a certain extent one can compensate when another fails. It might seem redundant and inefficient, but it’s more robust, reliable, and secure. You’re alive and reading this because of it.

EDITED TO ADD (6/2): Gene Spafford’s opinion.

EDITED TO ADD (6/4): Good commentary from Bob Blakley.

Posted on May 29, 2009 at 3:01 PM

An Expectation of Online Privacy

If your data is online, it is not private. Oh, maybe it seems private. Certainly, only you have access to your e-mail. Well, you and your ISP. And the sender’s ISP. And any backbone provider who happens to route that mail from the sender to you. And, if you read your personal mail from work, your company. And, if they have taps at the correct points, the NSA and any other sufficiently well-funded government intelligence organization — domestic and international.

You could encrypt your mail, of course, but few of us do that. Most of us now use webmail. The general problem is that, for the most part, your online data is not under your control. Cloud computing and software as a service only exacerbate this problem.
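As a rough sketch of what “encrypt your mail” means in practice, here is a minimal example in C using OpenSSL’s EVP interface: the message body is encrypted on your own machine before any provider sees it. This is not how PGP or S/MIME actually package mail; real mail encryption wraps the message key for the recipient using public-key cryptography and adds signatures and formatting, all omitted here, and the message text and variable names are invented for the illustration.

    #include <openssl/evp.h>
    #include <openssl/rand.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        /* Hypothetical key and IV, generated locally. Real mail encryption
           (PGP/GPG or S/MIME) would deliver a key like this to the recipient
           via public-key cryptography; that step is omitted in this sketch. */
        unsigned char key[32], iv[16];
        if (RAND_bytes(key, sizeof key) != 1 || RAND_bytes(iv, sizeof iv) != 1)
            return 1;

        const char *body = "Dinner at 7? Keep it between us.";
        unsigned char ciphertext[256];
        int len = 0, total = 0;

        /* Encrypt the message body with AES-256-CBC before it ever
           leaves this machine. */
        EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
        EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv);
        EVP_EncryptUpdate(ctx, ciphertext, &len,
                          (const unsigned char *)body, (int)strlen(body));
        total = len;
        EVP_EncryptFinal_ex(ctx, ciphertext + total, &len);
        total += len;
        EVP_CIPHER_CTX_free(ctx);

        /* Only the ciphertext would transit the sender's ISP, the backbone
           providers, and the recipient's ISP. */
        printf("plaintext: %zu bytes, ciphertext: %d bytes\n",
               strlen(body), total);
        return 0;
    }

Compile with -lcrypto. The point is simply that everyone between your machine and the recipient’s would see only ciphertext.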

Your webmail is less under your control than it would be if you downloaded your mail to your computer. If you use Salesforce.com, you’re relying on that company to keep your data private. If you use Google Docs, you’re relying on Google. This is why the Electronic Privacy Information Center recently filed a complaint with the Federal Trade Commission: many of us are relying on Google’s security, but we don’t know what it is.

This is new. Twenty years ago, if someone wanted to look through your correspondence, he had to break into your house. Now, he can just break into your ISP. Ten years ago, your voicemail was on an answering machine in your office; now it’s on a computer owned by a telephone company. Your financial accounts are on remote websites protected only by passwords; your credit history is collected, stored, and sold by companies you don’t even know exist.

And more data is being generated. Lists of books you buy, as well as the books you look at, are stored in the computers of online booksellers. Your affinity card tells your supermarket what foods you like. What were cash transactions are now credit card transactions. What used to be an anonymous coin tossed into a toll booth is now an EZ Pass record of which highway you were on, and when. What used to be a face-to-face chat is now an e-mail, IM, or SMS conversation — or maybe a conversation inside Facebook.

Remember when Facebook recently changed its terms of service to take further control over your data? They can do that whenever they want, you know.

We have no choice but to trust these companies with our security and privacy, even though they have little incentive to protect them. Neither ChoicePoint, LexisNexis, Bank of America, nor T-Mobile bears the costs of privacy violations or any resultant identity theft.

This loss of control over our data has other effects, too. Our protections against police abuse have been severely watered down. The courts have ruled that the police can search your data without a warrant, as long as others hold that data. If the police want to read the e-mail on your computer, they need a warrant; but they don’t need one to read it from the backup tapes at your ISP.

This isn’t a technological problem; it’s a legal problem. The courts need to recognize that in the information age, virtual privacy and physical privacy don’t have the same boundaries. We should be able to control our own data, regardless of where it is stored. We should be able to make decisions about the security and privacy of that data, and have legal recourse should companies fail to honor those decisions. And just as the Supreme Court eventually ruled that tapping a telephone was a Fourth Amendment search, requiring a warrant — even though it occurred at the phone company switching office and not in the target’s home or office — the Supreme Court must recognize that reading personal e-mail at an ISP is no different.

This essay was originally published on the SearchSecurity.com website, as the second half of a point/counterpoint with Marcus Ranum.

Posted on May 5, 2009 at 6:06 AM

Breach Notification Laws

There are three reasons for breach notification laws. One, it’s common politeness that when you lose something of someone else’s, you tell him. The prevailing corporate attitude before the law—”They won’t notice, and if they do notice they won’t know it’s us, so we are better off keeping quiet about the whole thing”—is just wrong. Two, it provides statistics to security researchers as to how pervasive the problem really is. And three, it forces companies to improve their security.

That last point needs a bit of explanation. The problem with companies protecting your data is that it isn’t in their financial best interest to do so. That is, the companies are responsible for protecting your data, but bear none of the costs if your data is compromised. You suffer the harm, but you have no control—or even knowledge—of the company’s security practices. The idea behind such laws, and how they were sold to legislators, is that they would increase the cost—both in bad publicity and the actual notification—of security breaches, motivating companies to spend more to prevent them. In economic terms, the law reduces the externalities and forces companies to deal with the true costs of these data breaches.

So how has it worked?

Earlier this year, three researchers at the Heinz School of Public Policy and Management at Carnegie Mellon University—Sasha Romanosky, Rahul Telang and Alessandro Acquisti—tried to answer that question. They looked at reported data breaches and rates of identity theft from 2002 to 2007, comparing states with a law to states without one. If these laws had their desired effects, people in states with notification laws should experience fewer incidences of identity theft. The result: not so much. The researchers found data breach notification laws reduced identity theft by just 2 percent on average.

I think there’s a combination of things going on. Identity theft is being reported far more today than five years ago, so it’s difficult to compare identity theft rates before and after the state laws were enacted. Most identity theft occurs when someone’s home or work computer is compromised, not when a large corporate database is breached, so the effect of these laws is small. And even where the laws did push companies to improve security, those improvements didn’t do much to reduce identity theft.

The laws rely on public shaming. It’s embarrassing to have to admit to a data breach, and companies should be willing to spend to avoid this PR expense. The problem is, in order for this to work well, public shaming needs the cooperation of the press. And there’s an attenuation effect going on. The first major breach after the first state disclosure law was in February 2005 in California, when ChoicePoint sold personal data on 145,000 people to criminals. The event was big news, ChoicePoint’s stock tanked, and it was shamed into improving its security.

Next, LexisNexis exposed personal data on 300,000 individuals, and then Citigroup lost data on 3.9 million. The law worked; the only reason we knew about these security breaches was because of the law. But the breaches came in increasing numbers, and in larger quantities. Data breach stories felt more like “crying wolf” and soon, data breaches were no longer news.

Today, the remaining cost is that of the direct mail campaign to notify customers, which often turns into a marketing opportunity.

I’m still a fan of these laws, if only for the first two reasons I listed. Disclosure is important, but it’s not going to solve identity theft. As I’ve written previously, the reason theft of personal information is common is that the data is valuable once stolen. The way to mitigate the risk of fraud due to impersonation is not to make personal information difficult to steal, it’s to make it difficult to use.

Disclosure laws only deal with the economic externality of data owners protecting your personal information. What we really need are laws prohibiting financial institutions from granting credit to someone using your name with only a minimum of authentication.

This is the second half of a point/counterpoint with Marcus Ranum. Marcus’s essay is here.

Posted on January 21, 2009 at 6:59 AM

Does Risk Management Make Sense?

We engage in risk management all the time, but it only makes sense if we do it right.

“Risk management” is just a fancy term for the cost-benefit tradeoff associated with any security decision. It’s what we do when we react to fear, or try to make ourselves feel secure. It’s the fight-or-flight reflex that evolved in primitive fish and remains in all vertebrates. It’s instinctual, intuitive and fundamental to life, and one of the brain’s primary functions.

Some have hypothesized that humans have a “risk thermostat” that tries to maintain some optimal risk level. It explains why we drive our motorcycles faster when we wear a helmet, or are more likely to take up smoking during wartime. It’s our natural risk management in action.

The problem is our brains are intuitively suited to the sorts of risk management decisions endemic to living in small family groups in the East African highlands in 100,000 BC, and not to living in the New York City of 2008. We make systematic risk management mistakes — miscalculating the probability of rare events, reacting more to stories than data, responding to the feeling of security rather than reality, and making decisions based on irrelevant context. And that risk thermostat of ours? It’s not nearly as finely tuned as we might like it to be.

Like a rabbit that responds to an oncoming car with its default predator avoidance behavior — dart left, dart right, dart left, and at the last moment jump — instead of just getting out of the way, our Stone Age intuition doesn’t serve us well in a modern technological society. So when we in the security industry use the term “risk management,” we don’t want you to do it by trusting your gut. We want you to do risk management consciously and intelligently, to analyze the tradeoff and make the best decision.

This means balancing the costs and benefits of any security decision — buying and installing a new technology, implementing a new procedure or forgoing a common precaution. It means allocating a security budget to mitigate different risks by different amounts. It means buying insurance to transfer some risks to others. It’s what businesses do, all the time, about everything. IT security has its own risk management decisions, based on the threats and the technologies.

There’s never just one risk, of course, and bad risk management decisions often carry an underlying tradeoff. Terrorism policy in the U.S. is based more on politics than actual security risk, but the politicians who make these decisions are concerned about the risks of not being re-elected.

Many corporate security decisions are made to mitigate the risk of lawsuits rather than address the risk of any actual security breach. And individuals make risk management decisions that consider not only the risks to the corporation, but the risks to their departments’ budgets, and to their careers.

You can’t completely remove emotion from risk management decisions, but the best way to keep risk management focused on the data is to formalize the methodology. That’s what companies that manage risk for a living — insurance companies, financial trading firms and arbitrageurs — try to do. They try to replace intuition with models, and hunches with mathematics.

The problem in the security world is we often lack the data to do risk management well. Technological risks are complicated and subtle. We don’t know how well our network security will keep the bad guys out, and we don’t know the cost to the company if we don’t keep them out. And the risks change all the time, making the calculations even harder. But this doesn’t mean we shouldn’t try.

You can’t avoid risk management; it’s fundamental to business just as to life. The question is whether you’re going to try to use data or whether you’re going to just react based on emotions, hunches and anecdotes.

This essay appeared as the first half of a point-counterpoint with Marcus Ranum in Information Security magazine.

Posted on October 14, 2008 at 1:25 PM

Chinese Cyber Attacks

The popular media conception is that there is a coordinated attempt by the Chinese government to hack into U.S. computers — military, government, corporate — and steal secrets. The truth is a lot more complicated.

There certainly is a lot of hacking coming out of China. Any company that does security monitoring sees it all the time.

These hacker groups seem not to be working for the Chinese government. They don’t seem to be coordinated by the Chinese military. They’re basically young, male, patriotic Chinese citizens, trying to demonstrate that they’re just as good as everyone else. As well as the American networks the media likes to talk about, their targets also include pro-Tibet, pro-Taiwan, Falun Gong and pro-Uyghur sites.

The hackers are in this for two reasons: fame and glory, and an attempt to make a living. The fame and glory comes from their nationalistic goals. Some of these hackers are heroes in China. They’re upholding the country’s honor against both anti-Chinese forces like the pro-Tibet movement and larger forces like the United States.

And the money comes from several sources. The groups sell access to compromised (“owned”) computers, malware services, and data they steal on the black market. They sell hacker tools and videos to others wanting to play. They even sell T-shirts, hats and other merchandise on their Web sites.

This is not to say that the Chinese military ignores the hacker groups within the country. Certainly the Chinese government knows the leaders of the hacker movement and chooses to look the other way. They probably buy stolen intelligence from these hackers. They probably recruit for their own organizations from this self-selecting pool of experienced hacking experts. They certainly learn from the hackers.

And some of the hackers are good. Over the years, they have become more sophisticated in both tools and techniques. They’re stealthy. They do good network reconnaissance. My guess is what the Pentagon thinks is the problem is only a small percentage of the actual problem.

And they discover their own vulnerabilities. Earlier this year, one security company noticed a unique attack against a pro-Tibet organization. That same attack was also used two weeks earlier against a large multinational defense contractor.

They also hoard vulnerabilities. During the 1999 conflict over the two-states theory, in a heated exchange with a group of Taiwanese hackers, one Chinese group threatened to unleash multiple stockpiled worms at once. There was no reason to disbelieve this threat.

If anything, the fact that these groups aren’t being run by the Chinese government makes the problem worse. Without central political coordination, they’re likely to take more risks, do more stupid things and generally ignore the political fallout of their actions.

In this regard, they’re more like non-state actors.

So while I’m perfectly happy that the U.S. government is using the threat of Chinese hacking as an impetus to get their own cybersecurity in order, and I hope they succeed, I also hope that the U.S. government recognizes that these groups are not acting under the direction of the Chinese military and doesn’t treat their actions as officially approved by the Chinese government.

This essay originally appeared on the Discovery Channel website.

EDITED TO ADD (7/18): A slightly longer version of this essay appeared in Information Security magazine as part of a point/counterpoint with Marcus Ranum. His half is here.

Posted on July 14, 2008 at 7:08 AM

The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes–mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.
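To make “mistakes in programming” concrete, here is a deliberately contrived sketch in C of the classic bug behind a buffer overflow: copying input into a fixed-size buffer without checking its length. The function and buffer names are invented for the illustration; it is not drawn from any real product.

    #include <stdio.h>
    #include <string.h>

    /* Vulnerable version: if 'input' is longer than 63 characters,
       strcpy() writes past the end of 'name' and corrupts adjacent
       memory, which is the raw material of a buffer-overflow exploit. */
    static void greet(const char *input) {
        char name[64];
        strcpy(name, input);   /* no bounds check: the bug */
        printf("hello, %s\n", name);
    }

    /* Fixed version: truncate (or reject) oversized input. */
    static void greet_safely(const char *input) {
        char name[64];
        snprintf(name, sizeof name, "%s", input);   /* bounded copy */
        printf("hello, %s\n", name);
    }

    int main(int argc, char **argv) {
        if (argc > 1) {
            greet_safely(argv[1]);
            (void)greet;   /* the unsafe version, kept only for comparison */
        }
        return 0;
    }

The vulnerable and fixed versions differ by a single line, which is part of why such mistakes are so common, so easy to miss in a large code base, and so valuable to the people who find them.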

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent–or protect against–those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society–in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But to train people who have the mindset, you need to have them search for and find security vulnerabilities, again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible–and more and less legal–ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM

Security Products: Suites vs. Best-of-Breed

We know what we don’t like about buying consolidated product suites: one great product and a bunch of mediocre ones. And we know what we don’t like about buying best-of-breed: multiple vendors, multiple interfaces, and multiple products that don’t work well together. The security industry has gone back and forth between the two, as a new generation of IT security professionals rediscovers the downsides of each solution.

The real problem is that neither solution really works, and we continually fool ourselves into believing whatever we don’t have is better than what we have at the time. And the real solution is to buy results, not products.

Honestly, no one wants to buy IT security. People want to buy whatever they want — connectivity, a Web presence, email, networked applications, whatever — and they want it to be secure. That they’re forced to spend money on IT security is an artifact of the youth of the computer industry. And sooner or later the need to buy security will disappear.

It will disappear because IT vendors are starting to realize they have to provide security as part of whatever they’re selling. It will disappear because organizations are starting to buy services instead of products, and demanding security as part of those services. It will disappear because the security industry will disappear as a consumer category, and will instead market to the IT industry.

The critical driver here is outsourcing. Outsourcing is the ultimate consolidator, because the customer no longer cares about the details. If I buy my network services from a large IT infrastructure company, I don’t care if it secures things by installing the hot new intrusion prevention systems, by configuring the routers and servers so as to obviate the need for network-based security, or if it uses magic security dust given to it by elven kings. I just want a contract that specifies a level and quality of service, and my vendor can figure it out.

IT is infrastructure. Infrastructure is always outsourced. And the details of how the infrastructure works are left to the companies that provide it.

This is the future of IT, and when that happens we’re going to start to see a type of consolidation we haven’t seen before. Instead of large security companies gobbling up small security companies, both large and small security companies will be gobbled up by non-security companies. It’s already starting to happen. In 2006, IBM bought ISS. The same year BT bought my company, Counterpane, and last year it bought INS. These aren’t large security companies buying small security companies; these are non-security companies buying large and small security companies.

If I were Symantec or McAfee, I would be preparing myself for a buyer.

This is good consolidation. Instead of having to choose between a single product suite that isn’t very good or a best-of-breed set of products that don’t work well together, we can ignore the issue completely. We can just find an infrastructure provider that will figure it out and make it work — who cares how?

This essay originally appeared as the second half of a point/counterpoint with Marcus Ranum in Information Security. Here’s Marcus’s half.

Posted on March 10, 2008 at 6:33 AM

Security in Ten Years

This is a conversation between Marcus Ranum and me. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007, and we still seem unable to accept that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace — and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada — 50 million people — was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet — and the computers and processes connected to it — is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure — banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective — and low-risk — for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security — but neither of us is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services — it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan is going to perform illicit transactions after you authenticate.

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure, and the threat is in the communications system. But we know the real risks are the endpoints.

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for that platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system — they may even have convinced themselves it will improve security — but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure — and there’ll probably have been at least one real disaster by then — that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree regarding your gloomy view of the future. It’s ironic the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security. Because real security is not something you build — it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.

The future will be captive data running on purpose-built back-end systems — and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing — or other forms of making security someone else’s problem — will continue to seem attractive.

That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM
