Security in Ten Years

This is a conversation between myself and Marcus Ranum. It will appear in Information Security Magazine this month.


Bruce Schneier: Predictions are easy and difficult. Roy Amara of the Institute for the Future once said: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

Moore’s Law is easy: In 10 years, computers will be 100 times more powerful. My desktop will fit into my cell phone, we’ll have gigabit wireless connectivity everywhere, and personal networks will connect our computing devices and the remote services we subscribe to. Other aspects of the future are much more difficult to predict. I don’t think anyone can predict what the emergent properties of 100x computing power will bring: new uses for computing, new paradigms of communication. A 100x world will be different, in ways that will be surprising.
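The 100x figure is just compound doubling; a quick sketch of the arithmetic, assuming the common 18-month doubling period (the period itself is an assumption, not something stated above):

```python
# Compound growth: one common statement of Moore's law is a doubling
# of computing power every 18 months.
months = 10 * 12                      # a decade, in months
doubling_period = 18                  # months per doubling (an assumption)
growth = 2 ** (months / doubling_period)
print(f"about {growth:.0f}x")         # roughly a hundredfold
```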

But throughout history and into the future, the one constant is human nature. There hasn’t been a new crime invented in millennia. Fraud, theft, impersonation and counterfeiting are perennial problems that have been around since the beginning of society. During the last 10 years, these crimes have migrated into cyberspace, and over the next 10, they will migrate into whatever computing, communications and commerce platforms we’re using.

The nature of the attacks will be different: the targets, tactics and results. Security is both a trade-off and an arms race, a balance between attacker and defender, and changes in technology upset that balance. Technology might make one particular tactic more effective, or one particular security technology cheaper and more ubiquitous. Or a new emergent application might become a favored target.

I don’t see anything by 2017 that will fundamentally alter this. Do you?


Marcus Ranum: I think you’re right; at a meta-level, the problems are going to stay the same. What’s shocking and disappointing to me is that our responses to those problems also remain the same, in spite of the obvious fact that they aren’t effective. It’s 2007 and we haven’t seemed to accept that:

  • You can’t turn shovelware into reliable software by patching it a whole lot.
  • You shouldn’t mix production systems with non-production systems.
  • You actually have to know what’s going on in your networks.
  • If you run your computers with an open execution runtime model you’ll always get viruses, spyware and Trojan horses.
  • You can pass laws about locking barn doors after horses have left, but it won’t put the horses back in the barn.
  • Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.

The list could go on for several pages, but it would be too depressing. It would be “Marcus’ list of obvious stuff that everybody knows but nobody accepts.”

You missed one important aspect of the problem: By 2017, computers will be even more important to our lives, economies and infrastructure.

If you’re right that crime remains a constant, and I’m right that our responses to computer security remain ineffective, 2017 is going to be a lot less fun than 2007 was.

I’ve been pretty dismissive of the concepts of cyberwar and cyberterror. That dismissal was mostly motivated by my observation that the patchworked and kludgy nature of most computer systems acts as a form of defense in its own right, and that real-world attacks remain more cost-effective and practical for terror purposes.

I’d like to officially modify my position somewhat: I believe it’s increasingly likely that we’ll suffer catastrophic failures in critical infrastructure systems by 2017. It probably won’t be terrorists that do it, though. More likely, we’ll suffer some kind of horrible outage because a critical system was connected to a non-critical system that was connected to the Internet so someone could get to MySpace—and that ancillary system gets a piece of malware. Or it’ll be some incomprehensibly complex software, layered with Band-Aids and patches, that topples over when some “merely curious” hacker pushes the wrong e-button. We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

You’re worried criminals will continue to penetrate into cyberspace, and I’m worried complexity, poor design and mismanagement will be there to meet them.


Bruce Schneier: I think we’ve already suffered that kind of critical systems failure. The August 2003 blackout that covered much of the northeastern United States and Canada—50 million people—was caused by a software bug.

I don’t disagree that things will continue to get worse. Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.

Yes, IT systems will continue to become more critical to our infrastructure—banking, communications, utilities, defense, everything.

By 2017, the interconnections will be so critical that it will probably be cost-effective—and low-risk—for a terrorist organization to attack over the Internet. I also deride talk of cyberterror today, but I don’t think I will in another 10 years.

While the trends of increased complexity and poor management don’t look good, there is another trend that points to more security—but neither of us is going to like it. That trend is IT as a service.

By 2017, people and organizations won’t be buying computers and connectivity the way they are today. The world will be dominated by telcos, large ISPs and systems integration companies, and computing will look a lot like a utility. Companies will be selling services, not products: email services, application services, entertainment services. We’re starting to see this trend today, and it’s going to take off in the next 10 years. Where this affects security is that by 2017, people and organizations won’t have a lot of control over their security. Everything will be handled at the ISPs and in the backbone. The free-wheeling days of general-use PCs will be largely over. Think of the iPhone model: You get what Apple decides to give you, and if you try to hack your phone, they can disable it remotely. We techie geeks won’t like it, but it’s the future. The Internet is all about commerce, and commerce won’t survive any other way.


Marcus Ranum: You’re right about the shift toward services—it’s the ultimate way to lock in customers.

If you can make it difficult for the customer to get his data back after you’ve held it for a while, you can effectively prevent the customer from ever leaving. And of course, customers will be told “trust us, your data is secure,” and they’ll take that for an answer. The back-end systems that will power the future of utility computing are going to be just as full of flaws as our current systems. Utility computing will also completely fail to address the problem of transitive trust unless people start shifting to a more reliable endpoint computing platform.

That’s the problem with where we’re heading: the endpoints are not going to get any better. People are attracted to appliances because they get around the headache of system administration (which, in today’s security environment, equates to “endless patching hell”), but underneath the slick surface of the appliance we’ll have the same insecure nonsense we’ve got with general-purpose desktops. In fact, the development of appliances running general-purpose operating systems really does raise the possibility of a software monoculture. By 2017, do you think system engineering will progress to the point where we won’t see a vendor release a new product and instantly create an installed base of 1 million-plus users with root privileges? I don’t, and that scares me.

So if you’re saying the trend is to continue putting all our eggs in one basket and blithely trusting that basket, I agree.

Another trend I see getting worse is government IT know-how. At the rate outsourcing has been brain-draining the federal workforce, by 2017 there won’t be a single government employee who knows how to do anything with a computer except run PowerPoint and Web surf. Joking aside, the result is that the government’s critical infrastructure will be almost entirely managed from the outside. The strategic implications of such a shift have scared me for a long time; it amounts to a loss of control over data, resources and communications.


Bruce Schneier: You’re right about the endpoints not getting any better. I’ve written again and again about how measures like two-factor authentication aren’t going to make electronic banking any more secure. The problem is that if someone has stuck a Trojan on your computer, it doesn’t matter how many ways you authenticate to the banking server; the Trojan will perform illicit transactions after you authenticate.

It’s the same with a lot of our secure protocols. SSL, SSH, PGP and so on all assume the endpoints are secure and that the threat is in the communications system. But we know the real risks are at the endpoints.
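The endpoint argument can be put in toy form: authenticating the session does nothing once malware shares that session, whereas authenticating each transaction with a key the PC never holds at least narrows the damage. Everything below is a sketch under invented names (`BankServer`, `transfer_v1`, `transfer_v2`), not any real banking protocol:

```python
# Toy model: once the endpoint is compromised, session authentication
# is useless, because the malware simply acts inside the authenticated
# session. All names here are hypothetical.
import hashlib
import hmac

class BankServer:
    def __init__(self, password: str, txn_key: bytes):
        self._password = password
        self._txn_key = txn_key   # lives on a separate token, not the PC
        self.ledger = []

    def login(self, password: str):
        return "session-token" if password == self._password else None

    def transfer_v1(self, session, txn: str) -> bool:
        # Session auth only: anything sent on a logged-in session goes through.
        if session == "session-token":
            self.ledger.append(txn)
            return True
        return False

    def transfer_v2(self, session, txn: str, mac: str) -> bool:
        # Transaction auth: each transfer must carry a MAC computed with a
        # key the (possibly Trojaned) PC never holds.
        expected = hmac.new(self._txn_key, txn.encode(), hashlib.sha256).hexdigest()
        if session == "session-token" and hmac.compare_digest(mac, expected):
            self.ledger.append(txn)
            return True
        return False

bank = BankServer("hunter2", b"token-device-key")
session = bank.login("hunter2")      # the user authenticates legitimately

# A Trojan riding the authenticated session succeeds against v1...
assert bank.transfer_v1(session, "pay attacker $1000")
# ...but fails against v2: it cannot produce the transaction MAC.
assert not bank.transfer_v2(session, "pay attacker $1000", "")
```

Even the second scheme only helps if the signing key truly lives off the endpoint, e.g., on a dedicated token with its own display showing what is being signed.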

And a misguided attempt to solve this is going to dominate computing by 2017. I mentioned software-as-a-service, which you point out is really a trick that allows businesses to lock up their customers for the long haul. I pointed to the iPhone, whose draconian rules about who can write software for the platform accomplish much the same thing. We could also point to Microsoft’s Trusted Computing, which is being sold as a security measure but is really another lock-in mechanism designed to keep users from switching to “unauthorized” software or OSes.

I’m reminded of the post-9/11 anti-terrorist hysteria—we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.

Computing is heading in the same direction, although this time it is industry that wants control over its users. They’re going to sell it to us as a security system—they may even have convinced themselves it will improve security—but it’s fundamentally a control system. And in the long run, it’s going to hurt security.

Imagine we’re living in a world of Trustworthy Computing, where no software can run on your Windows box unless Microsoft approves it. That brain drain you talk about won’t be a problem, because security won’t be in the hands of the user. Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. That’s the problem with any system that relies on control: Once you figure out how to hack the control system, you’re pretty much golden. So instead of a zillion pesky worms, by 2017 we’re going to see fewer but worse super worms that sail past our defenses.

By then, though, we’ll be ready to start building real security. As you pointed out, networks will be so embedded into our critical infrastructure—and there’ll probably have been at least one real disaster by then—that we’ll have no choice. The question is how much we’ll have to dismantle and build over to get it right.


Marcus Ranum: I agree with your gloomy view of the future. It’s ironic that the counterculture “hackers” have enabled (by providing an excuse) today’s run-patch-run-patch-reboot software environment and tomorrow’s software Stalinism.

I don’t think we’re going to start building real security. Because real security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process. Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain. The prevailing wisdom about software return on investment doesn’t factor in patching and patch-related downtime, because if it did, the numbers would stink. Meanwhile, I’ve seen purpose-built Internet systems run for years without patching because they didn’t rely on bloated components. I doubt industry will catch on.

The future will be captive data running on purpose-built back-end systems—and it won’t be a secure future, because turning your data over always decreases your security. Few possess the understanding of complexity and good design principles necessary to build reliable or secure systems. So, effectively, outsourcing—or other forms of making security someone else’s problem—will continue to seem attractive.
That doesn’t look like a very rosy future to me. It’s a shame, too, because getting this stuff correct is important. You’re right that there are going to be disasters in our future.

I think they’re more likely to be accidents where the system crumbles under the weight of its own complexity, rather than hostile action. Will we even be able to figure out what happened, when it happens?

Folks, the captains have illuminated the “Fasten your seat belts” sign. We predict bumpy conditions ahead.

EDITED TO ADD (12/4): Commentary on the point/counterpoint.

Posted on December 3, 2007 at 12:14 PM

Comments

Veronica December 3, 2007 1:00 PM

Fortunately, there’ll be a one-hundred-fold increase in computing power; unfortunately, it’ll be used to exploit and control us?

-ac- December 3, 2007 1:22 PM

We’ve got some bad-looking trend lines; all the indicators point toward a system that is more complex, less well-understood and more interdependent. With infrastructure like that, who needs enemies?

Spot on.

Carlo Graziani December 3, 2007 1:31 PM

In Today’s Episode Of “My Future Outsucks Your Future”…

For what it’s worth, I don’t think ISPs/Microsoft/NeoDotCommers taking control of the application layer away from users is really that plausible a scenario. It would require coordinated international legislative coercion. I’m sure China would be enthusiastic to support this, but I expect to see flocks of migratory pigs soaring by my window before this sort of thing is translated into a technical action program that can be effectively imposed on the Internet.

aeschylus December 3, 2007 1:36 PM

“This is a conversation between myself and Marcus Ranum.”

Should be: “This is a conversation between me and Marcus Ranum.”

I often wonder why people feel compelled to use “myself” where “me” or “I” is called for, particularly in such obvious cases as this one. It’s all too common.

Alan December 3, 2007 1:39 PM

You can’t turn shovelware into reliable software by
patching it a whole lot.

Not necessarily true. If you quit enhancing the software, you might eventually reach a point of reliability by patching. And if you were willing to turn off unsafe features, you’d stand a better chance.

Albatross December 3, 2007 1:46 PM

Does Moore’s Law still hold if the melting Greenland ice sheet stalls the Gulf Stream, plunging Europe into an ice age and the world into a global economic crisis? What ARE the emergent properties of 100X network connectivity when most of Florida and Manhattan are submerged? And how does captive data run on purpose built back-end systems under a totalitarian U.S. corporatist regime?

Actually, that last one probably runs pretty well.

Straight Shooter December 3, 2007 1:50 PM

Is there anything relatively knowledgeable but non-expert users can do about any of this?

10) Just concentrate on more important issues, like http://omg.yahoo.com/britney-spears-celebrates-her-26th-birthday-with-paris-hilton/news/4500
9) Write a well-thought-out letter to the editor
8) Drown your troubles by drinking
7) Group Hug
6) Accuse The Internet of unAmerican Activity
5) Nuke Iran to start WWIII and trigger the Second Coming of Our Lord, who’ll quickly fix everything.
4) Become an expert user
3) Invent the next killer web-based service
2) Schneier for President
1) Outsource the problem to me (www.fixmyinternets.com) for only $89 per month

Thinker December 3, 2007 2:00 PM

Clearly we need two orders of happy pills coming along. For millennia, we haven’t seen new types of crime, just new methods. Yet every generation manages to maintain a positive view of the future. Otherwise, they’d all just stop procreating and let the human population die off.

Think back 10 years and see what wasn’t around or what wasn’t used in the manner it is today. The explosion of information availability is a good thing. It’s created less asymmetric markets for consumers to make better decisions. It’s cut into profit levels of businesses screwing over customers.

Sure, every technology can be used for good or bad. Look at knives, cellphones, and cars.

Fred X. Quimby December 3, 2007 2:21 PM

I tuned in for blood; instead, all I got was you two agreeing on everything! Well, here’s my prediction: By 2017, security researchers will have evolved into a new species. Derided and shunned by normal humans, they will be forced to live a solitary nomadic existence, and will only be allowed to publish one paper for each of their own kind which they consume.

wkwillis December 3, 2007 2:38 PM

I predict that your cell phone will have voice recognition, a high-resolution OLED video screen, Linux on gigabyte ROM, a mesh connection to other cell phones, and a connection to a record of your life on a remote terabyte hard disk.
No viruses on your ROM.
A virus or other security hazard on some service provider’s hard disk is their problem. Your data is not at risk of loss, just of loss of privacy, and only if you give it out in the first place instead of using a virtual identity like all responsible people.

Thaddeus P. Kennedy December 3, 2007 2:45 PM

@aeschylus

For direct objects in exposative prepositions, “myself” is a valid case-wise conjugation of a personal pronoun.

Check your Strunk & White.

arlene December 3, 2007 2:51 PM

I predict that it’s not the technology companies that will own our souls in the future, it’s the marketers and demographics conglomerates. I predict that physical and electronic surveillance will converge to form complete database stories of our lives, likes, dislikes, habits, finances, and foibles. Will security be an issue in 10 years? Of course, because data repositories about us will be so pervasive and valuable.

Given up December 3, 2007 5:05 PM

@Milan

Is there anything relatively knowledgeable but non-expert users can do about any of this?

Yes. Recognize that it is too late – we can’t go back and do what it will take to make it all work right. Then, give it up and move your assets to gold, buy a lot of guns and ammo, get a self-sustaining living space in the country, and have zero dependence on what will then be the global infrastructure that runs everything in a way that is against your interests.

ConanTheGrammarian December 3, 2007 5:10 PM

“This is a conversation between myself and Marcus Ranum.”

Should be: “This is a conversation between me and Marcus Ranum.”

Ah-hem… aeschylus, did you mean to say “This is a conversation between Marcus Ranum and me” ?

DrYak December 3, 2007 5:22 PM

And I think in the future a bigger fracture will grow. On one side: consumers who entrust all their data to the latest “myfacejournal” du jour and run completely virus-bloated computers on some monopolistic OS (probably still produced by a dying Microsoft lab), and who won’t even notice the viruses anymore (thanks to the 16x cores available everywhere by then). On the other side: a small grass-roots movement that will build more and more on small but tested, proven-secure components, used by people who pick up basic computing and security knowledge as naturally as someone learns to read today. Their data will be stored at home, or on their very own web pages, either using LAMP blog/gallery/album/whatever kits deployed on rented server space, or even running on their home computers (thanks to Moore’s law and the even faster increase in bandwidth). That data will be Web “5.4beta” aware, thanks to standards for exchanging data across distributed web applications (similar to today’s RSS and open formats). And access-your-work-data-from-everywhere would be guaranteed for them using basic tools such as SSH instead of relying on a central server.

It will be the users of daisy-shaped services, with a big corporation in the center telling them “let me make a lot of profit by locking your data in with me, with no possibility of ever moving elsewhere,” versus users of real peer-to-peer distributed services (and thus more resistant, thanks to the absence of monoculture).

These two groups will be evolving in such different realms that they will hardly contact each other, and may even start to forget about each other a little.

The second group won’t know it, but they’ll be violating at least a dozen patents, several copyright-anti-circumvention acts, a couple of “security” laws, and using material supposedly “forbidden” (because not approved by the Microsoft/Telco/**AA/ISP/Government Consortium)…

…but that won’t matter, because 99% of that second group will be located in countries that aren’t even on the Internet map yet; specifically, in the fraction of those countries which, by then, will have chosen the way of open source to start their information age: using tools they can own themselves, unlike the countries that listen to the Microsoft siren songs. (Maybe Negroponte’s OLPC will play some role; more probably it will be local grass-roots projects emulating the OLPC with a local workforce.)

Richard Steven Hack December 3, 2007 5:31 PM

Ah. but you see, this is really good news!

Because it means those of us who think things suck will be able to demonstrate it by taking down the government, big corporations, and other morons who are pushing this “control” nonsense.

In other words, be thankful for bad security – it’s the only thing protecting you from total control over your lives. Because if you can’t figure out a way around control, you are controlled.

The history of the world consists of primates trying to control other primates – and failing.

It’s the failure that has kept our species going.

You don’t want the future outlined here? Then learn to evade and destroy it, like your ancestors did. Remember, like Razor and Blade said, “Hacking – it’s not just a crime – it’s a survival trait!”

chromeronin December 3, 2007 5:31 PM

Is there anything relatively knowledgeable but non-expert users can do about any of this?

Yes: Beware the easy to use cool looking shiny thing. That way lies vendor tie-in and poor security design:
iTunes and the iPod – hacked in days
iPhone: Oooh, so shiny, but every process runs as root because it’s faster that way.
Vista: Video card driver hacked, MS kills your driver. You have to download the update. That then gets hacked, MS kills the driver… see a trend? Oh, and CPU cycles are constantly consumed making sure your driver is the legit one. If only MS took as much care of my bank account details and browser history as they do of DRM-protected HD video content, it might make it worthwhile.

lucither December 3, 2007 5:36 PM

“I’m reminded of the post-9/11 anti-terrorist hysteria — we’ve confused security with control, and instead of building systems for real security, we’re building systems of control. Think of ID checks everywhere, the no-fly list, warrantless eavesdropping, broad surveillance, data mining, and all the systems to check up on scuba divers, private pilots, peace activists and other groups of people. These give us negligible security, but put a whole lot of control in the government’s hands.”

Why speak in the past tense? “They” are still trying [and succeeding] to build such systems.

Anonymous December 3, 2007 6:14 PM

In 10 years everything will be connected to the Internet. Who controls the Internet? Google.

Dunno about you, but I’ll try to get a job there ASAP.

Bill December 3, 2007 6:25 PM

Security is not a trade-off. Poor security is a trade-off, because when you belatedly shove it in, some other feature gets pushed out.

Like many things, security is hard, but in no way is it necessarily a trade-off. This isn’t a zero-sum game.

Security isn’t even something you mark off a list. If the “feature” is that I can access my bank account remotely, but it has a buggy design or implementation that allows somebody else to do that without my knowledge, then it’s a broken feature, if it’s a feature at all.

It’s nonsensical to separate the concept of a feature from its design and implementation. They’re one and the same. It’s not reducible.

Instead of running around saying things are broken, we should spend at least as much time talking about actual, concrete alternatives and proper design elements. Will people listen? Of course not! But the only way to make a difference is to set expectations. Expectations of consumers, and expectations of vendors.

Thomas December 3, 2007 6:26 PM

I predict that by 2017 grammer-pedants will have be correcting 13375p33|< (or is the proper possessive diurnal impassive form of that “133t-5p3@K” when used as an indirect conjunctive preposition?)

Cosgrach December 3, 2007 6:30 PM

Is there anything relatively knowledgeable but non-expert users can do about any of this?

Yes, don’t be a lemming. Learn and fight tooth and nail for what is right. Just sitting there doing nothing will ensure the bleak future…

Thaddeus P. Kennedy December 3, 2007 6:50 PM

@Thomas

For indirect conjunctive prepositions, I’d go with non-aggressive impassive possessives, or reflexive colloidal intestinal expulsifications.

I’m sure you’d agree. Everything has changed since 9/11.

moo December 3, 2007 6:51 PM

This is a fantastic quote from Marcus:

[R]eal security is not something you build—it’s something you get when you leave out all the other garbage as part of your design process.

Bryan Campbell December 3, 2007 7:20 PM

Bruce Schneier and Marcus Ranum have officially stated that the sky is falling.

I guess “fear mongering” is the latest and most popular thing to do. Hey, the President of the United States does it. It must be O.K.

Instead of talking all gloom and doom, how about proposing solutions to the problem? If endpoint security is the problem, then let’s fix it.

Wait! Bruce and Marcus believe it can’t be fixed. So, we are all doomed.

Anyway . . . enough drama!!! If you are really interested in fixing the end-point security problem, quit whining and fix it.

Mitchell Ashley December 3, 2007 7:22 PM

You know, I think the gloom of the future is now and we’re already nearing the era where security is out of vogue. I was GM and CTO for the company StillSecure and I was recently laid off due to industry performance. It was kind of shocking to me but as I look back, we’ve seen a lot of companies come and go. Some economists are predicting a recession in the next year or two and I think the Security world might just be an early indicator for that. These days, I think the future is in blogging and virtualization. Really trying to educate readers about technology and then with virtualization I think we can totally reinvent the security world.

Devil's Advocate December 3, 2007 7:35 PM

I was very disappointed in this conversation. More and more, I am believing that when it comes to security experts, the emperor is lacking his clothes.

The through-line of this conversation is the tired and misguided notion that we have security flaws because people (be they users, engineers, or executives) are just too stupid to do what Marcus and his kin tell them to.

Perhaps “Marcus’ list of obvious stuff that everybody knows but nobody accepts” is really a list of things “companies understand are often incompatible with the transacting of business, which is rather the whole point, and therefore aren’t often practiced.”

There’s a cost associated with all computing. The resource cost of providing the security cannot be greater than the utility of the application. Security that inhibits a company’s ability to nimbly respond to market conditions, or instead places burdens on customers that drive them away, is not effective security. It’s simply the business equivalent of unplugging the box. Sure, it’s secure, but it is cutting off the nose to spite the face, isn’t it? Marcus’ insight on this issue is downright lazy.

Nice as it might be to build nothing but “purpose-driven applications” (and judging from the tenor, I perceive that the one acceptable purpose is to provide security as opposed to user functionality), few companies find themselves with that luxury.

Why is it that security experts have stopped proposing solutions and instead continue to re-iterate the problems and complain the public hasn’t seen the light? In the next ten years, for the field of security to remain relevant, security experts must embrace the daunting challenge of making security serve the business. Step one: stop looking down on the people who are paying you.

Nonplussed December 3, 2007 7:37 PM

My god! End of the world in 10 years!

Chaos by Internet… so boring. Even a nuclear war with the Russians seems more entertaining these days…

I predict we will be fine and dandy in 10 years. I’ll be playing with a Wii-3 on my 70-inch OLED TV like the rest of the lemmings out there.

james yue gee December 3, 2007 7:44 PM

“Complexity is the worst enemy of security, and the Internet—and the computers and processes connected to it—is getting more complex all the time. So things are getting worse, even though security technology is improving. One could say those critical insecurities are another emergent property of the 100x world of 2017.”
So my questions are: “Is it possible for us to reduce the complexity of the whole architecture of the Internet?” and “How can we do this?” I am very optimistic about this direction.

anonymous December 3, 2007 7:49 PM

The web is a prime example of too much complexity. HTTP was never designed to deliver the dynamic content of Web 2.0, and now we have megalithic web apps full of XSS bugs, JavaScript malware, et al., born directly from bolting more functionality onto a system that should be reinvented as a simpler model, one that gives end users what they want and scales better for future generations. Might I also add that the web is particularly important, as it has become (and will become even more so) the ubiquitous way of doing any and all tasks on a computer (software as a service). I hope we can restructure web design before its fundamental shortcomings transform the World Wide Web into a hostile war zone where personal data is never safe.

JamesG December 3, 2007 8:44 PM

Security professionals predicting more security problems, you don’t say? I used to program antivirus and encryption programs in the ’90s and have always stayed current on security. Though there is more exploitation going on, there are more options if you want to be secure and are willing to learn how. When people get sick of being exploited, they will move to more secure options, which will be more widely available in the future. Vendors will notice this and start making their products as secure as their competitors’ when it starts to hurt them, leading me to believe security will be mostly solved for those who care in the near future, if not now; as for the others, well, they’ll never learn anyway.

David Hopwood December 3, 2007 8:57 PM

Repeat after me: Applications should not run with the full permissions of their invoking user.

That’s the basic problem (there are other problems, but the solutions tend to follow from fixing this one). Find the projects that are making concrete attempts to fix it (CapDesk, Polaris, OLPC, Plash, …) in ways that are also usable (i.e. don’t just involve adding security dialogs that users are trained to click through like so many Pavlovian dogs), and support them. Don’t just whinge about what will happen if they’re not supported.
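The least-privilege idea behind those projects can be shown with a toy contrast. The function names below are invented, and this is only a sketch of the capability style (pass the object itself, not the ambient authority to name objects), not any project’s actual API:

```python
# Two styles of granting file access to an application. Hypothetical names;
# io.StringIO stands in for real file handles.
import io

def ambient_editor(open_fn, path):
    # Ambient authority: the app can name any path, so compromised code
    # can read files the user never intended to share with it.
    return open_fn(path).read()

def capability_editor(doc):
    # Capability discipline: the app receives only the one object it may
    # use, and has no way to reach anything else.
    return doc.read()

files = {"letter.txt": "Dear Bob, ...", "secrets.txt": "hunter2"}
lookup = lambda path: io.StringIO(files[path])

# The user meant to edit letter.txt, but the ambient app can grab anything:
stolen = ambient_editor(lookup, "secrets.txt")

# The capability app is handed exactly one handle (say, by a file picker):
text = capability_editor(io.StringIO(files["letter.txt"]))
```

In this style the file-picker dialog itself becomes the act of delegation, which is roughly how designs like Polaris avoid the click-through security dialogs mentioned above.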

newtothegame December 3, 2007 9:16 PM

As I see it, letting security practices slip for the sake of the bottom line is the thoughtless practice of corporate moguls who will soon see their day of desperation, only to lay the blame on the same people who tried to sway them in the correct direction in the first place.
As to the controls, that is more and more becoming the standard practice of big business, government and politics, so I dare say it will be a similar and growing trend in 10 and 100 years, regardless of the industry or the desires of the people.

aeschylus December 3, 2007 9:25 PM

Thaddeus P. Kennedy: “For direct objects in exposative [sic] prepositions, ‘myself’ is a valid case-wise conjugation [sic] of a personal pronoun.”

Uh, no it isn’t.

Clearly wrong: “She sang a song to myself.”

Clearly wrong: “This is a conversation about myself.”

Clearly wrong: “This is a conversation between myself and Marcus Ranum.”

Verbs have conjugations; nouns have declensions. In any case, this is not a problem of case. The case of “myself” is correct; it is its reflexivity that makes it wrong.

Thaddeus P. Kennedy: “Check your Strunk & White.”

Kindly provide a specific citation.

ConanTheGrammarian: “Ah-hem… aeschylus, did you mean to say ‘This is a conversation between Marcus Ranum and me’ ?”

No, I didn’t. Word order in the objective case is not prescribed, and, as often as not, the first person is placed first. You’re thinking of the subjective case, where the first person is customarily placed last. Note what sounds wrong in the following:

“[She and I/I and she] went to the movies together.”

“She called out to [me and my brother/my brother and me].”

Thomas: “I predict that by 2017 grammer-pedants will have be correcting 13375p33|< (or is the proper possessive diurnal impassive form of that “133t-5p3@K” when used as an indirect conjunctive preposition?)”

LOL!

Dan Razzell December 3, 2007 9:31 PM

Reading the discussion between these guys, I’m reminded both that security is a profound subject, and that nevertheless many readers aren’t standing where they can get a good look into its depths. Small wonder that they come away vaguely dissatisfied.

Part of the challenge is the amount of tacit security knowledge required to follow even a trendspotting discussion like the one presented here. It would be hopelessly pedantic to be constantly diving into sidebars which provide the rationale for concepts such as endpoint security or principles such as design simplicity, yet for readers to make sense of the discussion, they have to be able to tell what parts of it are speculative and what parts are foundational.

So what we really have, beneath the surface, is a literacy issue. If readers have no recourse to a generally accepted body of security principles, then a discussion like this one — if it makes any sense at all — is going to come across merely as an exchange of personal opinions.

But where exactly can that body of security principles be found? An even deeper issue to be addressed is that information security is still searching for rigorous terms of reference. It does not yet qualify as a science, much as I believe it ultimately will emerge as one.

Here’s the thing. Once we have the science, its essential features can be taught with the same confidence that math or chemistry is taught. And with that basis, I think, readers would find the present discussion much more engaging. We don’t all need to become security experts, but we at least need to understand enough to interpret what the experts have to say.

antibozo December 3, 2007 9:41 PM

Am I the only person here who thinks that Moore’s law predicts that computers will be 32 times as powerful in 10 years (doubling once per 2 years)? Am I on crack? Didn’t Schneier just say that was the easy part? I’m confused…

JackG't December 3, 2007 10:11 PM

Grammar, syntax, and usage will still be battlefields in ten years—if our species is still around.

Will evolutionary AI set all things right?

posix4e December 3, 2007 10:12 PM

I was under the impression it said something like transistor size will shrink by half every 18 months. Bell’s law is probably more useful over a 10-year period, imo.

Tuomo Stauffer December 3, 2007 10:13 PM

Someone already commented, “Security has to be designed in, as part of a system plan for reliability, rather than bolted on afterward.” I would add to that: you can’t buy security, and security is more than your system; it is the whole infrastructure. It is a vital part of your business! And whoever said that securing the endpoints is most important was absolutely correct. Someone disturbing your infrastructure may cause problems, but those can be fixed or prevented by robust, reliable design and recovered from very fast, while any endpoint breach can cause damage that is either very expensive or, in the worst case, not recoverable.
A real-life comparison: the CEO disappears, an expensive problem even with good insurance; take out all (or most of) the skilled workers, a catastrophe for any productive corporation; fire a couple of middle managers, and they’re replaced in hours, or maybe we can live without them for a while and nothing stops! So, don’t let your workers go postal.
Anyway, I see that level of thinking coming back. It used to be an important part of any corporation’s security and risk assessment when I was part of that, and it applied to the whole infrastructure, including IT. Unfortunately, cheap politics and infighting, which always seem more important to some, often destroy or water down good ideas if management is not strong enough. So, technology is not the real issue; the missing planning and management can be.

JRT December 3, 2007 10:54 PM

While at first glance this article may seem to be an episode of “Bruce & Marcus Fly Through Hoops of Fire While Simultaneously Patting Each Other On The Back”…it isn’t.
While neither of them is entirely correct, they are a long, long way from being entirely wrong.
Three quick points, all to Marcus’ side of the discussion. (1) The government brain-drain to outsourcing is market-driven. It will stop when GS- and SES-level pay reaches market-equivalent levels and the jobs themselves become less driven by hidebound cultures that reward conformance instead of innovation. (2) “Purpose-designed and purpose-built software is more expensive to build, but cheaper to maintain” is exactly right, and is part of an ongoing battle I am engaged in on a current project. (3) I’m an outsourcer, so there! Seriously, a lot of us in the contractor world care a lot more about getting it right than the press (and these commentators) are aware of. The sky may indeed fall, but it won’t be because folks like me weren’t working at its vaults, trying their damnedest both to repair the stress fractures and to design better trusses.

Hey! If Marcus can engage in hyperbole, so can I! It doesn’t make either of us wrong.

Chris December 3, 2007 10:55 PM

10 years from now will be the same as today. Just like today is pretty much no different from 1997.

xbox and playstation both have “must be approved” software paradigms – hackers exploit this by using buffer overflows in authorized software to bootstrap their warez (or hardware hacks).

Perhaps the main thing that will change is that fewer cyber crimes will be prosecuted. If you want a good laugh, go to your police station and tell them someone took $10K out of your bank account without your permission. This happened to me (moral: cancel your credit cards after use in Thailand). I did. They scratched their heads and had no idea whom I should talk to. If you can eventually reach someone who cares, you might get a call back about the incident, maybe two years later.

ice weasel December 3, 2007 11:05 PM

I realize this is a bit off topic, but I’m curious how the glacially developing net infrastructure in the United States is going to deliver all this data remotely. The U.S. lags behind a lot of countries in the type of service the average individual can buy, and the cost we pay is higher than in many of those same countries. So how is all this data going to get past the bottleneck, and at what point will consumers just stop paying more for service that doesn’t get better?

I just wonder if the bottleneck that is the bandwidth most consumers use will limit the spread of some of the items (good and bad) you’re discussing.

Ralph December 3, 2007 11:14 PM

I agree that simplicity is the right approach for security. But what kind of simplicity? I vote for an architecture consisting of many processors, some in separate physical boxes, instead of one processor doing everything. Many of the processors will run their code directly from ROM. Physical security then becomes the centerpiece of the whole security apparatus — as it should.

Trying to cram a lot of functionality into single-processor architectures is silly. Such systems are practically always broken.

JackG't December 3, 2007 11:28 PM

If services and utility-like providers are going to be dominant, it will make a great deal of difference which players control the pipes and wireless spectrum. (Obviously AT&T and Google have given that issue much thought.) How good will the security of the major players, and that of their contractors, be? With the degree of secrecy that interacting with certain government agencies now requires, or permits, I’m not sure how much lesser players and mere individuals will be able to learn.

I hope that legislative bodies and regulatory agencies that will be making important decisions related to Internet and telecommunications control will, for every decision they make, look at all the security implications, and not merely national or “homeland” security implications. I hope that their members won’t be selling themselves to interested players as has been historically commonplace.

Hal December 4, 2007 12:58 AM

I have a different and more optimistic view. There are a number of security technologies which can improve the situation and which are being actively developed today. David Hopwood cites work on least privilege and object capabilities as an example.

The big problem is backward compatibility. I foresee widespread use of microkernels and virtual machine monitors running beneath legacy monolithic OSs like Windows and Unix variants. These lowest layers will be compact and heavily analyzed and vetted. Their whole point will be security. They will allow new and more secure OS concepts to coexist with legacy software.

The legacy OS’s will still be prone to infection, which will be managed by extensive support for checkpointing and rollback a la Apple’s Time Machine. Add net-based alerts when your computer is acting up (via improved monitoring of traffic patterns at the ISP or upstream) and you can easily and routinely (even automatically) roll back your machine to a pre-infection state.

Trusted computing can play a role but not through this crazy idea that only Microsoft approved software will run. That is plainly a non-starter in a world with probably hundreds of thousands if not millions of widely used pieces of software. Rather, TC can help with auditing system state by providing a hardware root of trust to make sure the secure layer is not subverted or virtualized.

I do suspect that things will get worse before they get better, in fact they will probably have to, in order to motivate people to make changes. Hopefully we will suffer only minor rather than major catastrophes on the road to a more secure computing environment.

TJ December 4, 2007 1:02 AM

antibozo:

1) Processing power doubles every 18 months.
2) 10 years = 120 months.

The math is such:

2^(120 / 18) ≈ 101.6

So, yes... about 100x more powerful.
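TJ’s derivation checks out numerically (it assumes the 18-month-doubling formulation; Moore’s original observation concerned transistor counts):

```python
# Growth factor after 10 years, assuming one doubling every 18 months.
months = 10 * 12          # 120 months
doubling_period = 18      # months per doubling
growth = 2 ** (months / doubling_period)
print(round(growth, 1))   # 101.6
```

With the two-year doubling period antibozo used instead, the same formula gives 2^(120/24) = 32x, which is exactly where the two readings of Moore’s law diverge.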

Colin Tree December 4, 2007 1:22 AM

The basis for the internet is a 30-year-old hotchpotch, and we can only expect problems.

A complete re-design from the ground up, incorporating all the lessons we have learned from Internet1 is required.

TCP/IP has been wonderful for establishing the rules for running a good network.

A complete rebuild is cheaper than chasing our tails forever, and the sooner we start, the better.

RonK December 4, 2007 1:58 AM

“Complexity is the worst enemy of security, and the Internet — and the computers and processes connected to it — is getting more complex all the time. So things are getting worse...”

Well, if you believe Heinlein’s “The Moon Is a Harsh Mistress,” this mess also has a chance of begetting the first true AI. Which would be cool, but really, really insecure!

perpetual December 4, 2007 2:13 AM

In 10 years we will see organisations go out of business due to electronic attacks.

Whilst the personal model for security may become service-based, the real threat to economic well-being will be to organisations, so I am not sure how much of their security departments will be outsourced, especially because I predict that security will be one of the topics dominating the boardroom.

Jonathan Mootre December 4, 2007 2:22 AM

I think the counterargument to complexity creating instability is that it is being managed by people who run large clusters. I work with a “small” cluster of 30 or so computers that executes a distributed workload. Developing and maintaining software for a distributed environment has caused me to rethink a lot of things, but it has not made me reduce complexity; it has instead forced me to consider where I put the complexity. This is related to D. J. Bernstein’s thoughts about dividing code into trusted and untrusted components.

In 2017 we will not be approaching problems with thinking from ’97 like we are now; we will be using thinking from ’07, and this will make a difference in favor of security.

antibozo December 4, 2007 2:37 AM

TJ> processing power doubles every 18 mths,

I’ve never seen that particular formulation of Moore’s law before, but thanks for the derivation.

KingInk December 4, 2007 2:41 AM

Unlike Marcus and Bruce, I actually think the trend towards software-as-a-service gives us an opportunity to increase security. We have all been saying for years that the endpoints are insecure. These days, endpoints are fat PCs running bloated OSes and complex applications. And each of these PCs is different, sometimes running ancient versions of certain applications.

In the software-as-a-service paradigm, the endpoints could be thin clients running a much simpler OS. Heck, it could be one of those purpose-built Internet systems Marcus seems to like.
Applications would be installed, configured, and maintained by the service provider. I think we can safely assume they will know how to do that better than the average PC owner.
In other words, I see many opportunities there to actually increase security.

Mind you, I am talking from a security perspective only here. The lack of control over your own data, and the threat of monopoly and monoculture, make me feel quite uneasy otherwise about such a future….

Nostromo December 4, 2007 2:45 AM

@Thinker:
“It’s cut into profit levels of businesses screwing over customers.”

Which alternate reality are you living in? No company in history has become as profitable as Microsoft, or screwed over its customers as blatantly. About 90% of the price of Microsoft Vista can be attributed to monopoly rent.

t December 4, 2007 4:00 AM

we should all start teaching our children how to hack systems. That way, in 2017, at least they will be safe

Goober December 4, 2007 4:26 AM

Come on guys, network upheavals are fun. The Blaster worm was a blast. Nobody cares what some stupid box or handheld trinket does as long as it doesn’t sting or bite. Dunderheads who rely on these types of technologies to live deserve what they get. Now, what was the IP for that reactor cooling pump? I forget.

miw December 4, 2007 5:01 AM

There is more to consider besides raw computing power. Storage and communication are the two other essential aspects that drive applications. Historically, network speed has seen the slowest growth rate. Storage actually has a faster growth rate than processing power. So, this argues against computing as a service. Another interesting element is the cost of a computational gadget. In the timeframe considered in the discussion, a complete single purpose gadget can be built for extremely low cost and can easily use existing communication fabrics.

Smuggs December 4, 2007 6:18 AM

I thought that Moore’s Law stated that transistor count was the metric that would double every 18 months. That in and of itself is not a guarantee of processing power, as some architectures seem to be more scalable than others.

Morne December 4, 2007 6:42 AM

Re the comments on purpose-built systems: would this not be an area where virtualization could really help make the approach cost-effective?

I’m thinking of a purpose-built stack with all the non-required elements stripped out, running as an image on a hypervisor. This should reduce the complexity associated with patching, etc., as you would only need to focus on the elements that matter to your app.

Each application could take this approach, yet you would still get the cost benefit of sharing hardware resources.

bob December 4, 2007 7:36 AM

What scares me is not so much the “infrastructure” (dams, power plants, hot dog vendors) that is becoming network-centric, but the “docustructure” (police, courts, government records). As more and more of this stuff becomes electronic, there will be fewer and fewer “original” paper documents for the action to be based on.

So a hacker, terrorist [gotta throw that in there for funding] or disgruntled/careless employee can label you an armed, escaped felon, child molester and cop killer just by typing your name in place of the real one, and you won’t be able to prove it is an error, because there is no paper document anywhere showing that the felon is really supposed to be Sam Bundy Kaszynski Jr. and not you (a la Sandra Bullock in “The Net”).

A terrorist could jiu-jitsu us just by targeting selected individuals in critical positions. So now our police forces are neutralized by tracking down and arresting (if not actually shooting without warning, since they are such scum) our own law-abiding citizens, who are in turn unable to go to their jobs as TSA screeners, refinery guards or power plant operators. A perfect time, during the confusion, for a physical attack on the airport/refinery/etc. where they are missing. It’s like dropping paratroops behind enemy lines to disrupt communications and sow confusion, only without being detectable on radar, and best of all (for the attacker), WE are paying for the paratroopers!

Chris December 4, 2007 7:55 AM

Rather than fight a war that cannot be won, accept the risks. If the bad outweighs the good, don’t do it. Pay with cash: no chance of fraud and you’ll always know how much is in your wallet. Shop local. Etc, etc.

OK, so that’s not the future. Nothing is perfect.

A major problem with the end point is the user: computers are too complicated for 95% of the population (did I just contradict myself?). In many ways the service/appliance model would take this knowledge requirement away and keep people from being their own worst enemy.

Mike Acker December 4, 2007 9:00 AM

Remote Access Trojans, also known as “computer RATs,” need to be excluded if we are going to conduct commerce over the net.

It is simply unacceptable to have someone else controlling my computer by remote control while I am doing anything important.

One way to be rid of RATs would be to market computers with fixed, non-modifiable programming. Does anyone want to go that way? There is probably a market for that, and perhaps I should have two computers: one to play with and another, non-modifiable one to do business on.

Another way is to treat every program (and I mean everything that is executable in any way: JavaScript, Java, ActiveX, Flash, you name it, every code fragment however small) as a message, and require every message to be signed with an acceptable PGP signature.

NO SIGNATURE? NO EXECUTE.

and this would apply to eMail messages as well: no acceptable signature? message goes in the spam bucket.
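A toy sketch of that policy (mine, not Mike’s; HMAC with a shared key stands in for a real PGP/GPG asymmetric signature, purely to keep the example self-contained):

```python
import hmac
import hashlib

# Illustration-only shared key; a real deployment would verify a
# PGP/GPG signature against a trusted public key instead.
TRUSTED_KEY = b"illustration-only-key"

def sign(code: bytes) -> bytes:
    return hmac.new(TRUSTED_KEY, code, hashlib.sha256).digest()

def run_if_signed(code: bytes, signature: bytes) -> str:
    # NO SIGNATURE? NO EXECUTE: unsigned fragments go to the spam bucket.
    if not hmac.compare_digest(sign(code), signature):
        return "quarantined"
    exec(code.decode())  # only verified code is ever executed
    return "executed"

good = b"x = 1 + 1"
print(run_if_signed(good, sign(good)))    # executed
print(run_if_signed(b"evil()", b"bad"))   # quarantined
```

The policy itself is two lines: verify, then execute; everything else is deciding whose signatures you accept, which is the hard part Mike raises below about Certificate Authorities.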

Mike Acker December 4, 2007 9:34 AM

what are the real implications of a

NO SIGNATURE? NO EXECUTE.

security policy?

as my browser parses any input text, when it comes to anything that is executable it is going to quarantine that fragment and check for a signature

and then it is going to notify me with an appropriate dialog, e.g., “Rock Phish would like to run a program on your computer. OK?”

to which I shall respond: CANCEL.

the real implication, then, is that unknown parties can no longer push their updates onto my software.

all that is necessary is a change in our thinking: anything that is executable requires an acceptable signature.

ArgueTron December 4, 2007 9:45 AM

God, how civil you both are. Can’t you at least yell at each other to make things more interesting?

Mike Acker December 4, 2007 9:48 AM

==>”Microsoft will tout this as the end of malware, until some hacker figures out how to get his software approved. ”

Bruce that is a Most Excellent note.

In the US Army Signal Corps, on MSE systems, we had to appear IN PERSON and BE RECOGNIZED to pick up our security keys,

and I think the same sort of thing should apply to me. I have a PGP key, but if I want a Certificate Authority to recognize it, how shall I be required to identify myself? That, I think, is an important question.

My thinking is that my credit union could act as an agent for the Certificate Authority, and that I could appear there in person and present the usual credentials,

and so I could get the Certificate Authority to post my key that way.

But if you later receive a message from your computer, “Mike Acker would like to run a program on your computer: OK?”, would you run my program?

You must receive the notice and the option to cancel. Otherwise we may as well get out a shovel and start digging.

Martha Stewart December 4, 2007 9:48 AM

I can’t help but say something about the dorks who are arguing above about the use of “I,” “myself” and “me.” You people are part of the problem and are probably better suited to secure jobs working on government FISMA compliance documentation. We all know how successful that paper-shuffling initiative has been. So if you were an English major in college, please stay out of the security field... we have enough problems.

Thanks,
Martha

PS. Please be so kind as to review the above for grammar mistakes.

Alex Turner December 4, 2007 10:00 AM

I think that a lot of the issues these guys are discussing relate to the “I don’t want to hear that” culture of corporate management these days. I spent a lot of time torpedoing my career by pointing out stuff like ‘that will not work’ or ‘that will be terminally insecure’. These are negatives – so mentioning them just gets you taken off the project.

Until we grow up culturally and realise some things are bad and some things do go wrong, we will be incapable of preventing the type of future Bruce and Marcus are discussing.

Mike Acker December 4, 2007 10:08 AM

==>”Repeat after me: Applications should not run with the full permissions of their invoking user.”

right.

and that is where everyone misses the mark completely on the “Single Logon” issue

What is happening now, for many applications: I log on to my computer and get my authorities for everything I do from the Active Directory. Then I open a special application, and that special application asks me to log on again.

This is a PITA and does nothing to aid security: I already have complete access to my entire set of resources; I got that when I first started the computer.

Instead, when I first start the computer and log on as Mike.Acker, the ONLY thing I should get access to is the DESKTOP.

When I select an icon, that icon needs to be executable itself so that it can log me on again, but this time as Mike.Acker.Special. All of the programming under the thread launched by the Special start icon will then operate under the authority of Mike.Acker.Special.

And with this change, single sign-on is a piece of cake, plus security is greatly enhanced.

excellent post, Bill
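One way to model the scheme Mike describes (all names here are hypothetical): logging on grants only the desktop, and launching an icon mints a narrower per-application principal, so the program inherits that application's authority rather than the user's full resource set.

```python
# Hypothetical mapping from application icons to the resources each
# per-application principal is allowed to touch.
RESOURCES = {
    "Payroll": {"payroll-db"},
    "Mail": {"imap", "smtp"},
}

class Principal:
    def __init__(self, name, resources):
        self.name = name
        self.resources = frozenset(resources)

def launch(user, icon):
    # The program never runs as the bare user; it runs as a derived
    # principal holding only that one application's resources.
    return Principal(f"{user}.{icon}", RESOURCES[icon])

p = launch("Mike.Acker", "Payroll")
print(p.name, sorted(p.resources))  # Mike.Acker.Payroll ['payroll-db']
```

A compromised Payroll process in this model can reach the payroll database but not the mail servers, which is exactly the damage limitation the single-logon status quo fails to provide.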

Flashback December 4, 2007 10:41 AM

The show “Max Headroom” used to seem absurd enough to be great humor in its day. Now it looks more like a soothsayer peering into the future.

Bminus December 4, 2007 12:03 PM

Don’t want these predictions to come true? Stop typing right now, go buy a book or three, learn to get off the ‘Net for a few hours each day. Yeah I’m saying addiction and reliance on ‘the infrastructure’ doesn’t have to be all consuming. What is your back up/out plan when the cyber wars take down the power grid? Can you produce your own power/heat? Can you grow enough food to sustain yourself? I look forward to instant access to everything in 10 years, but I plan on being able to get along when it is unavailable. I don’t have all my assets in one bank, I don’t have all my information in electronic form.

Anonymous December 4, 2007 1:01 PM

Go read Spook Country, by William Gibson. Ten years from now the problem will be the same as it is today–human beings. No amount of security can stop a determined, maleficent actor; especially if that person is ex-CIA, or has otherwise gained access privileges above and beyond the norm. Even loyal agents will turncoat, without warning. Look at the subject of Breach.

A global respect for humanity is fundamental to our continued survival.

aeschylus December 4, 2007 1:45 PM

Martha: “You people are part of the problem…”

How so?

[Sorry, no free grammar lessons for abusive hotheads. They don’t listen, anyway.]

perpetual December 4, 2007 2:17 PM

Not sure about you lot, but as much as I am passionate about security, it’s not a bad thing that security is bad, because it keeps me employed!

I’d hate to see a security utopia where there was no need for security consultants... I’d probably become a plumber (and hope people’s heating systems fail).

SallyS. December 4, 2007 2:34 PM

Something else that really isn’t mentioned is that the people who kludged together all those networks and systems running a lot of the infrastructure are retiring.
Once they’re gone, there will be no one who really knows how this stuff works. I’ve seen many data centers that already contain one if not several machines where no one knows what the box does, there’s no owner, and everyone is afraid to reboot it because it might be the one machine that would bring the whole network down.

antibozo December 4, 2007 3:15 PM

Smuggs> I thought that Moore’s Law stated that “Transistor count” was the metric that would double every 18 months.

FWIW, Wikipedia agrees with that formulation, and lists as a corollary the one I mentioned: processing power doubles every two years. Perhaps if you factor in concurrent increases in storage capacity (hard disk storage per cost doubling annually, RAM per cost doubling every two years), you might end up with an 18-month figure, but I’ve never heard that one before.

Pat Cahalan December 4, 2007 5:25 PM

@ perpetual

In 10 years we will see organisations go out of business due to electronic attacks.

We see that now, remember what happened to BlueFrog?

Gordon Fecyk December 4, 2007 6:16 PM

Umbrella manufacturers predicting bad weather. What else is new?

At least Schneier got one thing right:

“There hasn’t been a new crime invented in millennia.”

Ewan Gunn December 4, 2007 6:53 PM

This is precisely why I have already decided to get out of the security industry, even though I just graduated this summer: there’s no future. Give it 20 or 30 years and security will be just another part of software engineering, without needing to be a “big thing,” and in the meantime all the hot-shot big names like Mr. Schneier himself will be the ones to implement the change. I love the problem from a logical perspective and focused all the undergraduate work I could in that direction, but now I’m looking further.

Pat Cahalan December 4, 2007 7:17 PM

even though I just graduated this summer – there’s no future. Give it 20, 30 years
and security will be just another part of software engineering,

30 years is no future? How long are you expecting to work?

Pasi December 5, 2007 12:49 AM

Complexity is the worst enemy of security. This is the main reason we’re facing so many information-related risks nowadays. Computing power has increased manyfold, and operating systems and software are a zillion times more complex than a decade ago.

Is the growth of complexity going to stop anytime soon? I don’t see it coming. The systems and software of the future are going to be even more complex and will fail in a zillion new ways that we can’t even begin to understand now.

We information security professionals are going to have a bright and well paid future. Believe me, this is a good business since the world is going to need us.

Suheil December 5, 2007 5:12 AM

Agree with you guys that our problems and the way we deal with them will more or less stay the same by 2017. However, by 2027, I think (and hope) we will see all this change.

As we all know, human nature is influenced more by nature and human factors than by anything else. By 2027, climate change, global supply+demand change (due to India and China) and other global human forces will require us to develop more “disruptive” ways of using technology to overcome these forces.

By 2027, the digitisation of humans (genes) will be in full swing (according to Craig Venter, a gene pioneer). This technology (if we can develop it to our advantage) will be more significant to us humans than the digitisation of our economy and other non-human information.

By 2027, digital security (required to protect digital biology systems) will then need to focus more on protecting human life rather than protecting systems from financial fraud, theft and the like. Digital attacks (by the bad guys) can be more life threatening!

So the future will be very different. Be prepared for a disruptive ride which will be necessary for human survival, where digital security will be a matter of life and death.

Digital Canary December 5, 2007 11:15 AM

Devil’s Advocate stated that:
“There’s a cost associated with all computing. The resource cost of providing the security cannot be greater than the utility of the application.”

Absolutely, but the biggest part of the problem as I see it is a lack of appreciation of the security-specific components of the resource value (e.g., what’s my privacy worth, how much is the continued confidentiality of this info worth, what does a 1% availability decrease cost me, etc.).

Others seem to see it in a similar light:
“It used to be an important part of any corporation security and risk assessment when I was part of that and was implied to the whole infrastructure including IT. … So, technology is not the real issue but the missing planning and management can be.” (Tuomo Stauffer, above)

Hal notes that:
“I do suspect that things will get worse before they get better, in fact they will probably have to, in order to motivate people to make changes. Hopefully we will suffer only minor rather than major catastrophes on the road to a more secure computing environment.”

Again, I think this underscores the “valuation” problem: today, while infosec events get more press than ever before (think TJX or URI handling), most individuals remain unable to make a reasoned cost/benefit risk analysis on the merits of better securing (or limiting) their use of digital technologies. I don’t blame the individuals, though; this is a tough process at the best of times, and not a single component of the risk equation is easily quantified (or, where quantifiable, agreed upon by all of us “experts”).

Until or unless a major occurrence disrupts the mindset so prevalent in today’s end users (as well as in management), the status quo will win out: human psychology tends to create a situation in which the tangible benefits of using a piece of technology will almost invariably trump a less tangible or remote risk.

I can’t extend the same psychology-based forgiveness to organizations and corporations, however: failing to adequately balance risk for my own information is my own problem, but failing to do so where I am a custodian or steward of others’ information is plain negligence.

I often note (only partially “tongue in cheek”) that we need more actuaries – that is, professionals who specialize in risk analysis. More aptly, I firmly believe that, counter to Ewan Gunn’s perspective, infosec will not cease to exist but rather will evolve into a more specialized risk analysis profession, since (IMHO) the balance will always be a dynamic one. I would agree, however, that the days of specialized “technical infosec” professionals are ultimately numbered, as I see those functions returning to being core SW and system engineering duties.

There’s a lot of discussion these days about “professionalizing” information security – like engineers, doctors, and, you guessed it: actuaries. I think that’s a good thing, since the more formalized the methods become, the more likely that executive management will be able to approach infosec risk assessments more objectively, and thus the more likely that a more appropriate balance of functionality and security (both adding “value” to the customer) will be struck.

The best news of all: according to many studies, actuaries are consistently rated as one of the best careers! Now I just have to dust off my minor in math, and go back to school for a while …

Sherwood Botsford December 5, 2007 9:16 PM

“Complexity is the worst enemy of security, and the Internet — and the computers and processes connected to it — is getting more complex all the time.”

Computers are state machines. With a writeable disk they have a very large number of states.

I have a lab full of ratty old peecees that run win2k. I have a school full of kids that download all sorts of stuff. I don’t try very hard to keep the junk off them anymore. I just re-install every night. In the morning they are in a known state.

Anything we can do to reduce the number of states I think would be an improvement.

Steps toward more secure systems:
A. Boot and load critical binaries from non-writable media.
B. Write-once memory.
C. Write-only logging.

A. I think designing a more secure system would be easier if critical parts of it ran read-only. E.g., part of the rootkit for many *nix systems is a modified version of ps that doesn’t show your trojan processes, and a modified version of ls that doesn’t show your hidden directories.

This is much harder to accomplish if /bin comes off a read only partition.

B. Write-once memory. Could we hardwire processors so that the processor cannot modify memory at any address below the value of a certain special register? After a power cycle, the register has a value of 0.
During boot, the kernel loads, system-critical modules load, and certain system libraries load. The special register now holds a value equal to the top of the loaded critical code. From this point until the machine is next shut down, there is no way to modify memory below that address. (The register can’t be decremented, only incremented, and overflowing it causes a system reset.)

Suppose that the kernel checks that the modules it loads come only off non-writable media. If certain modules are on writable media, it won’t change run levels; e.g., if the level-1 image is on writable media, it won’t configure the network, or it won’t go multi-user.

Development would be a pain.

I’ve met some SCSI disks that had a hardware jumper to turn on/off write access to the disk.

So this means if I want to change the root password on a Secure Operating System (SOS?) I have to go to the machine, shut it down, flip the mechanical switch that allows writing on the boot disk, boot to level 1, the system notes that it is booting from writable media, and won’t go further than this. NOW I can change the password. Halt the machine. Flip the switch, reboot normally. Needless to say, I want most of my accounts’ security information to be stored on a different disk.

System installation would work similarly.

Since the boot media is read-only, managing large server farms becomes a problem of setting up some form of net boot. (Can I trust the boot server… I know I’m paranoid, but am I paranoid enough?)

C. This one we in essence already have. Syslog to another machine is, in effect, write-only memory. It’s very hard for an intruder to cover his tracks if he can’t see the tracks on the machine he’s broken into. OpenBSD has append-only file flags that can’t be removed without dropping the securelevel, and if the right sysctl is set, even that can’t happen without a reboot.

These concepts have been around for a while. I remember talking about them with one of my students 10 years ago.

I conclude one of the following.
* My students and I were absolutely brilliant and these ideas have occurred to no one else. (Fat chance…)
* The idea is fundamentally flawed in a way that I don’t yet see. (To every problem there is an immediate solution that happens to be wrong…)
* Implementing it is a lot harder than it looks.

Rainer December 7, 2007 1:48 PM

This is a very nice and informative article, but it seems that one big aspect of the information age is not properly covered there: the user! With plans for on-demand production in the drawers of companies, and the new, active Web 2.0 user, a new branch of products could open up. Similar to open source, open hardware is slowly stepping onto the stage; just consider projects like OpenMoko or BugLabs, and think of them together with things like OpenBIOS/LinuxBIOS, or even software platforms like Google Android. And if even Microsoft tries to register its file formats as open standards, something really is changing. I don’t want to sound blinded by idealism; I just don’t want the ambition of the many smart people using their knowledge to create free and open products to be ignored in your predictions. Greetings

Peter Mellor December 7, 2007 10:02 PM

So “no new crimes have been invented for millennia.” Really? I’m sure that cavemen got angry and murdered one another, or got greedy and stole one another’s stone axes, but show me one who was guilty of breach of copyright, benefit fraud, or speeding. Every time a new technology is invented and a law is passed to control it, a new crime is invented. In fact, the concept of “crime” only came into existence with the growth of hierarchical societies, city-states and empires. (Hammurabi has a lot to answer for.)

Getting back to the topic, when I worked for ICL in the 1970s, converting a load of files and rewriting all of the software in an organisation (e.g., to move from ICL to IBM) was sufficiently painful to lock customers into a single supplier quite effectively. When ICL brought out the 2900 series to replace the 1900, both the hardware and software (the VME OS) were vastly different. Several large customers refused to migrate their workload. As a result, ICL produced an emulation of the 1900 on the 2900 (known as DME: Direct Machine Environment) so that customers could run their existing applications on the GEORGE 3 OS unchanged. Hardly sounds like the corporate fascists dictating to compliant customers!

The present compatibility of storage devices over hardware types was undreamed of even in 1997. (A 240 Gbyte hard disk that’s plug-and-play on Windows and Linux? You must be joking!)

As for what the “informed but non-expert user” can do, I have seen the “We’ll keep all your data; you just rent the service.” pitch for years. For the same length of time I have been determined not to trust them. When I had an employer, I could trust them to keep a back-up of my data. (I personally knew the guys who did it and I knew their methods and where they kept the tapes.) Now retired, I have all my data right here (on my 240 Gbyte disk, with extra copies of the really useful bits on memory sticks).

Of course, you need to watch out for hardware and software obsolescence. Revise and recopy all documents every so often. Use open source software and keep a copy of the software as well.

The usual remarks about storage in different locations, encryption, fireproof safes, etc., apply. (OK, I’m too lazy to have done all those yet!)

Organise politically and keep up the pressure against “Trustworthy Computing” if this looks like control and lock-in. (Palladium died the death, did it not?)

Don’t be afraid of storing printed hard copies of textual or graphical information.

datadefender December 19, 2007 11:01 PM

An interesting parallel from the world of Agriculture:
Traditionally, a farmer would use part of his harvest as seed for the next year. The latest generation of purchased seeds from Monsanto etc. grow into plants that cannot replicate – i.e., the harvest cannot be used as seed.
All of a sudden, you become dependent on your seed supplier.

An excellent book that discusses how we become dependent on “services” in our daily life (not just IT):
“The Age of Access: The New Culture of Hypercapitalism, Where All of Life Is a Paid-For Experience” by Jeremy Rifkin (Putnam Publishing Group, ISBN 1-58542-018-2; http://www.techsoc.com/access.htm)

My personal takeaways after that book and now your blog:
* Own your house/car/furniture/etc. – don’t rent/lease it
* Get familiar with Linux so you are ready to switch if Microsoft forces you to “authorize” all software you want to run
* Try to stay as independent as possible by owning the essentials of your life
* Stay educated

martinr December 21, 2007 12:53 PM

“You can’t turn shovelware into reliable software by patching it a whole lot.”

Although I wholly agree in principle, there are examples where “not” actually means “not easily” – i.e., it takes many years, lots of work and lots of vulnerability reports.

Examples:
Microsoft Internet Explorer
Microsoft Internet Information Server
Apache
OpenSSL

And I’m very tempted to include two OpenSource Kerberos implementations on that list…
