Comments

plastek September 9, 2005 2:13 PM

From TFA
“One of the best ways to get rid of cockroaches in your kitchen is to scatter bread-crumbs under the stove, right? Wrong! That’s a dumb idea. One of the best ways to discourage hacking on the Internet is to give the hackers stock options, buy the books they write about their exploits, take classes on “extreme hacking kung fu” and pay them tens of thousands of dollars to do “penetration tests” against your systems, right? Wrong! “Hacking is Cool” is a really dumb idea.”

Yeah, all “hackers” are “bad people”, and they are just in it to ruin your computer. Typical propaganda; didn’t expect to see it on Schneier’s blog, though.

mark September 9, 2005 3:00 PM

I don’t get the first post – unless you have missed the fact that the article is using the word “hacker” for the type of people the computing community calls crackers.

The point is good though – if we rewarded house breakers with TV shows, book contracts and celebrity status then there would be more house breakers. Surely the same applies to people who break into computers.

Interesting article BTW – plenty of food for thought.

Sys Admn September 9, 2005 3:02 PM

plastek: We lost the battle to have computer vandals called “crackers” years ago. You might not equate hackers with virus writers and social engineers, but the public does.

Justin September 9, 2005 3:07 PM

I’ll wholeheartedly agree that default-deny is nearly always preferable to default-permit. But there is always a practical limit.

For example, Mr. Ranum gives an example where he puts a list of all the valid URIs for a website into a hardware load-balancer, which then compares each request against the list. Would he suggest that amazon.com, with its millions of URIs, implement this feature on its load-balancers? What about a site like wikipedia.org, for example, where the number of URIs being created or changed per second is nontrivial? IMO, different risks should be mitigated by appropriate devices. Let firewalls and web servers handle tasks that they were each meant for.

I disagree with many of Ranum’s comments, but I think it’s best to stick to one as an example.

Pat Cahalan September 9, 2005 3:15 PM

The definition of “hacker” (and of “hacking”) is somewhat nebulous. If you just read “The New Hacker’s Dictionary”, you’d think that hackers are all brilliant engineers with odd work hours and perhaps strange personal habits. If you just watch network news, you might think that all hackers are people trying to break into things just because they can.

More people watch network news than read “The New Hacker’s Dictionary”.

You may bestow the title of “hacker” upon yourself because you like the idea of being a brilliant engineer, but the reality is that some large section of society is going to think you’re someone who is trying to steal their bank password. It doesn’t mean that they are correct, but when rap artists aspire to the label “gangsta” they can’t really complain when white-bread America calls them “thugs”, either 🙂

Linguistics isn’t an individual science -> if a large body of people suddenly decides that “tubular” means “really cool”, then that’s what “tubular” means to them. If you’ve been calling yourself “anti-tubular” because you’ve decided it means “really smart”, uh, they’re going to make fun of you. Sorry, there’s not much you can do about that, except complain, I suppose.

havvok September 9, 2005 3:18 PM

@Justin

If you work in an organization or organizational unit that has limited requirements, why not? You also misrepresented what Ranum wrote:

“not matching a complete list of correctly-structured URLs”

This makes sense; one could easily create a regex to match the legitimate URI structure of most well-organized websites. It makes more sense to at least ensure that the URI in question matches a permitted pattern than to let all traffic through.
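
To make that concrete, here is a rough sketch of a structural allowlist in Python (the path patterns are invented for illustration, not taken from Ranum’s setup):

    import re

    # Hypothetical allowlist of correctly-structured URIs for a simple site.
    # Anything that does not match one of these patterns is rejected outright.
    ALLOWED_URI_PATTERNS = [
        re.compile(r"^/$"),
        re.compile(r"^/articles/\d{4}/[a-z0-9-]+\.html$"),
        re.compile(r"^/static/[a-z0-9_-]+\.(css|js|gif|png)$"),
    ]

    def uri_is_permitted(uri):
        # Default deny: permit a request only if its URI matches a known pattern.
        return any(pattern.match(uri) for pattern in ALLOWED_URI_PATTERNS)

    print(uri_is_permitted("/articles/2005/default-deny.html"))  # True
    print(uri_is_permitted("/cgi-bin/../../etc/passwd"))         # False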

Realistically speaking, if my employees are at work, using my technical resources, they have no legitimate reason to be going to Amazon.com.

Just because you may not understand how something can be done, does not mean that it is impossible 😀

Kevin Davidson September 9, 2005 3:25 PM

My copy of Norton Internet Security does default deny on applications trying to access the Internet.

What is really a pain is Internet-enabled code that I write. Every time I recompile the program (and that’s every few minutes when I’m testing) I have to re-permit the program with Norton. This results in my eventually turning off the firewall (meaning the security system is not well designed).
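
My guess is that Norton keys its permit list on a checksum of the binary, which is why every rebuild looks like a brand-new program. Something roughly like this sketch (my assumption, not Norton’s actual mechanism):

    import hashlib

    def file_digest(path):
        # SHA-256 of the binary's contents; any recompile yields a new digest.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def may_access_network(path, permitted_digests):
        # Default deny: an unrecognized (e.g. freshly rebuilt) binary stays blocked
        # until the user re-permits it -- exactly the nuisance described above.
        return file_digest(path) in permitted_digests

    # Usage (hypothetical path): permit the build once, then every rebuild fails the check.
    # permitted = {file_digest("myapp.exe")}
    # may_access_network("myapp.exe", permitted)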

Norton has never found anything malicious trying to access the net, but it was effective in keeping my kid downstairs from getting at the printer upstairs.

Richard September 9, 2005 3:27 PM

Call me crazy, but I believe that in order to design a secure system of any sort, you need to know how to break it. And maybe Marcus Ranum could tell us if he thinks the “Penetrate and Patch” approach shouldn’t be used with cryptography.

Pat Cahalan September 9, 2005 3:33 PM

@ Kevin

Well, Norton Internet Security is essentially a consumer product, and is designed to protect end-users of desktop systems. Generally speaking, they aren’t writing Internet-enabled code that they have to recompile and test frequently.

So saying that the security system isn’t well designed isn’t quite accurate -> for their target market, it may (or may not) be well designed, but you as a consumer are not a good use-case for this particular chunk of off-the-shelf software.

David September 9, 2005 3:36 PM

Marcus is pretty much dead on.

The problem is that you have to do so much of this “grunt work” of patching, mitigating, testing, etc., that you don’t always have time to think.

The worst problem, which Marcus doesn’t address: what to do when you’re an administrator and your management doesn’t have a clue, your vendors are given carte blanche re: security, and your IT managers don’t even get the security basics.

I know: Get a new job 🙂

bi September 9, 2005 3:39 PM

Programmers are constantly being inundated with calls to practice Good Engineering. Yeah, yeah. So what exactly are these Good Engineering practices? Perhaps “Do Not Enumerate Badness” is one of them. But is that all? Ranum mentions Feynman’s essay, but as far as software is concerned, Feynman only talks about eyeballing the code and verifying it by brute force.

Justin September 9, 2005 3:39 PM

@havvok

I believe the misunderstanding is mutual. 😉

“The load-balancer was configured to re-vector any traffic not matching a complete list of correctly-structured URLs to a [different] server”

I took this to mean that, for example, if a web server had 3 valid URIs (a.html, b.css, and images/c.gif), then any http request not matching these patterns would be redirected to an error server.

My argument was that this could become a maintenance nightmare. Requiring network hardware to know the full list of valid URIs might be unnecessarily difficult to manage. The web server should know well enough which URLs are valid and which are not.

As for your amazon.com point, I was talking about Amazon as a company, and how they might have trouble implementing this on THEIR load-balancers, to filter THEIR customers’ http requests.

Pat Cahalan September 9, 2005 3:42 PM

I think the general tone of the editorial is a good one to take from a design perspective. If software engineers worked from those rules (and generally followed his philosophy of coding security), it would certainly make administration of deployed systems easier on my end!

But it’s essentially written by an engineer, about dumb engineering, and not as applicable to those supporting the “already deployed”.

As an aside, reading through Ranum’s website has led me to the conclusion that sitting in a bar with him and Bruce with nothing to do but kill several hours and have a few drinks would lead to some very entertaining conversation. The perspective difference is interesting.

Phil! Gregory September 9, 2005 3:45 PM

#1 & #2 (“Default Permit” and “Enumerating Badness”) are really the same thing, viewed from two perspectives. I do agree with the essay on them, though.

#3 (“Penetrate and Patch”) makes some valid points, in that software should be designed to be secure, but I’d say that penetration testing is still useful, if for no other reason than that all software contains bugs.

#4 (“Hacking is Cool”) paints with an overly-broad brush, I think. I would rather trust in individual hiring decisions that take into account things like, “does this person have a criminal record?” rather than saying “anyone who has ever attempted to break into a computer without authorization is unhireable.” Though I admit that I wouldn’t consider most unauthorized cracking to be a positive mark.

#5 (“Educating Users”) claims that the users should never get anything that could harm them, because of a default-deny policy. That seems unrealistic to me. Yes, there’s a lot that can be done to mitigate threats like email worms, but how can you programmatically block all phishing attempts? IMHO, educating people does far more good than harm.

Marcus J. Ranum September 9, 2005 3:50 PM

“What to do when you’re an administrator and your management doesn’t have a clue, your vendors are given carte blanche re: security, and your IT managers don’t even get the security basics.”

Wait for them to die??

OK, maybe that seems a bit extreme… But let me try to elaborate a little. Back in 1987 I worked for a hospital’s informatics group. At that time, an MD/PhD wouldn’t touch a keyboard to access a computer. “Keyboards? That’s what secretaries use – these hands have been touched by God!” etc. Now all of those doctors are gone. They’re all retired to the great golf-course in the sky or they’re headed that way. The new generation of doctors is totally switched-on to computers.

My guess is that a lot of computer-use-related social ills are going to have to sort themselves out as the old folks leave the technology arena and new folks come in who grew up with it. The current generation of CTOs and CIOs at big companies are still largely the old mainframe crowd, waiting for that Internet thing to blow over. What they don’t realize is that it’s they that will blow over.

In another 10 years CTOs/CIOs are going to understand this stuff. They’ll have to. Because their kids are building clustered supercomputers for science-fair projects and they’re the generation that grew up with this stuff.

We have to remember that the history of computer security is still ridiculously small, and we’re looking at the very early stages of what promises to be a long-term evolutionary process.

mjr.

Marcus J. Ranum September 9, 2005 3:55 PM

sac writes:

http://www.google.com/search?hl=en&lr=&q=%22Gauntlet+firewall%22+vulnerability&btnG=Search

Cute. Ad hominem arguments play well to the crowd but don’t really say much.

A lot of my original design decisions were reversed after control of Gauntlet was taken from me and given to marketing people in 1995. Lots of people had their hands in the product after that time.

mjr.

Marcus J. Ranum September 9, 2005 3:59 PM

“Would he suggest that amazon.com, with its millions of URIs, implement this feature on its load-balancers?”

I bet they wish they’d thought of it before it was too late.

I got hauled into a design review for a different E-banking system and discovered to my horror that whoever coded it appeared to have structured their queries with no thought whatsoever for making the correctness of the URL easy to determine. It got worse when I asked the “chief architect” for their documentation on URL structure, so we could see if we could derive a good set of filtering rules for positive testing. They didn’t have one… Arrghhh….

I’m not saying this is easy, people. Just that it’s easier than the alternative. Getting your head out of your ass once it’s firmly lodged up there is ALWAYS harder than not getting it up there in the first place.

mjr.

Davi Ottenheimer September 9, 2005 4:04 PM

@ Justin

“My argument was that this could become a maintenance nightmare.”

sigh

That is the all-too-common perspective of security. Have you ever heard anyone complain that it could become a maintenance nightmare to use passwords, keep a secret, rotate keys, manage identities, control access, patch software, etc.?

URI positive validation is something already widely in practice in anti-phishing and anti-spam engines, and it is not always complex. For example, you do not have to index directories and paths if you can trust the root.

Davi Ottenheimer September 9, 2005 4:11 PM

“Cute. Ad hominem arguments play well to the crowd but don’t really say much.”

Not really an ad hominem, especially since you pointed out in the subsequent statement that “Gauntlet was taken from me and given to marketing people”.

Seems to me Sac wasn’t saying you did anything wrong, so much as he was pointing out that no system (not even Gauntlet) is really ever “hack-proof” over time, or that we still have a lot to learn, but I could be reading into things…

Justin September 9, 2005 4:27 PM

@ Davi Ottenheimer

“URI positive validation is something already widely in practice”

Look, I’m not saying this kind of thing shouldn’t be done. I’m merely suggesting that the right tools be used for the job. ‘Maintenance nightmare’ in this case means making one’s life overly difficult when there exist alternatives that are equally effective but easier to implement.

As Mr. Ranum pointed out, a well-designed website, with a uniform and easy-to-regexp URI structure, is indeed much simpler to protect with URI validation. Unfortunately, most network/security engineers don’t get to dictate how the URIs for the websites should look. I would rather implement the URI validation code in software—as part of the build process—than have to modify the hardware config after each deployment. This keeps domain knowledge more compartmentalized, and often requires fewer bodies to maintain. I prefer this even more so in an environment with many different websites, such as the ones I secure.
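
As a rough sketch of what I mean (the route table below is hypothetical and not tied to any particular framework), a build step can derive the permitted-URI patterns from the routes the application itself declares, so the allowlist is regenerated with every deployment instead of being hand-maintained on the load-balancer:

    # Hypothetical build step: turn the application's declared routes into the
    # regex allowlist that ships with each deployment.
    ROUTES = {
        "/": "serve_home",
        "/search": "serve_search",
        "/item/<id>": "serve_item",   # <id> stands for a numeric identifier
    }

    def routes_to_patterns(routes):
        patterns = []
        for path in routes:
            patterns.append("^" + path.replace("<id>", r"\d+") + "$")
        return patterns

    if __name__ == "__main__":
        for pattern in routes_to_patterns(ROUTES):
            print(pattern)   # e.g. written to a config file the web tier loads at startup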

Davi Ottenheimer September 9, 2005 4:39 PM

@ Justin

“most network/security engineers don’t get to dictate how the URIs for the websites should look”

is just another problem to solve, unless it is true that you can

“implement the URI validation code in software—as part of the build process”

So you seem to agree that someone should get to dictate the URIs at some level, no? It really needs to happen at ALL layers of the System Development LifeCycle to be most effective, but I agree 100% that money is better spent earlier in the process.

@ Marcus

On a (slightly) unrelated topic, here’s something that should have been caught by some kind of filter:

“nude pictures of barely clothed females”

Richard Braakman September 9, 2005 4:49 PM

The thing I don’t like about this verifying load balancer is that it’s yet another moving part. Determining which URLs are valid is the web server’s job. If the web server can’t do it safely, then why would you trust the load balancer to do it safely? They’re both software, with all the evils thereof.

Brian Thomas September 9, 2005 5:26 PM

Richard:

Because attacks that rely on syntactically invalid URLs can be filtered quickly and relatively brainlessly (the two amount to the same thing), decreasing the load on the web servers that sometimes simply choke on them, so they don’t waste time checking out stuff that can’t be valid.

Think of trying to filter out bacteria and viruses from river water without first screening out the gravel, sand, and mud.

A whistle-blowing, clipboard-toting guard demanding ID from enemy tanks bearing down on him at full throttle with weapons blazing makes a good British comedy, but not good security.

It’s actually a time-proven, defense-in-depth strategy: first filter out the obvious threats, then concentrate on the finer stuff. In medical emergencies, they call it triage.

More than that, it’s really bad design to make one process try to do everything.

Jay Levitt September 9, 2005 5:35 PM

I’m not sure what Marcus’s proposed solution for phishing and trojans solves. Great, so we filter all the executables out of e-mail. So now spammers just send a link to an executable instead. Still just as easy to get the executable on the system.

Oh, but we’re default-deny now, so the executable can’t run. Great. Who, exactly, do you think, is the person who would give that permission? On any home system, it’s the same person who downloaded the file in the first place.

Social engineering will always be a viable penetration path. Using virus scanners is simply paying $30 to somebody who claims to be less likely to be socially-engineered than you. For 99% of the user base, that’s a good bet.

David D September 9, 2005 5:53 PM

In the “minor dumbs” section, Marcus says that it’s an error to think that you either don’t need a firewall or don’t need decent host security. This is the conventional wisdom and with today’s generally weak network protocols and poor host security it’s probably correct.

However it does not need to be so.

I know this is kind of heretical, but someday I think that network security/firewalls could be eliminated.

Good mid-stream communication security between apps/hosts is a relatively solved problem using cryptography. The weaknesses in most cryptographic systems lie either in errors in implementation (solved by diligent coding) or in poor host/app security that subverts the secured channel.

If all network protocols used host based, cryptographically encrypted and authenticated channels, then attacks could only come from hosts and applications I trust. By further limiting the allowed activities of my trusted hosts/apps, I could limit the damage caused by compromised trusted hosts. I would simply drop/ignore unencrypted, unauthenticated or untrusted requests.

Thus if host security were strong enough, we wouldn’t need firewalls. I believe that eventually we will move in this direction, as trusting anyone but yourself (such as a firewall) for your security leaves you more vulnerable.
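
To sketch the idea in modern terms (this assumes mutually-authenticated TLS with your own private CA; the certificate file names are placeholders):

    import socket
    import ssl

    # A service that only talks to hosts presenting a certificate signed by our
    # own CA. Unauthenticated or untrusted peers fail the handshake and are dropped.
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    context.load_cert_chain(certfile="this_host.pem", keyfile="this_host.key")
    context.load_verify_locations(cafile="our_private_ca.pem")
    context.verify_mode = ssl.CERT_REQUIRED   # mutual authentication: no anonymous clients

    listener = socket.create_server(("0.0.0.0", 8443))
    with context.wrap_socket(listener, server_side=True) as secure_listener:
        conn, addr = secure_listener.accept()   # untrusted hosts never get past the handshake
        peer = conn.getpeercert()                # identity of the already-verified peer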

The analogy is that if threats on the network were snipers, and we all drove tanks, we wouldn’t need to try to secure the roads. Tanks can be made invulnerable on the roads using cryptography. Having everyone in tanks doesn’t help us at the secured bases where we have to get out, but that’s why I’m advocating much stronger host security.

Plagiarism notice: I got many of these ideas from conversations with some folks at Sun; I just think they’re on the right track.

Shura September 9, 2005 6:18 PM

Interesting read, but I think he’s contradicting himself at one point. In the “Minor Dumbs” section, he writes:

“Operating systems have security problems because they are complex and system administration is not a solved problem in computing.”

That, to me, seems to be a direct contradiction of part of the “Penetrate and Patch” section, where he states:

“If you look at major internet applications you’ll find that there are a number that consistently have problems with security vulnerabilities. There are also a handful, like PostFix, Qmail, etc, that were engineered to be compartmented against themselves, with modularized permissions and processing, and – not surprisingly – they have histories of amazingly few bugs.”

What is it now? Either programs (including operating systems) of at least a certain size will always contain security holes due to their inherent complexity (i.e., you essentially can’t get them right, no matter how hard you try), or secure design is possible – so while you obviously still can make mistakes, your program will be more secure than one designed without security in mind.

I think the latter is true, myself. Nobody is perfect, but I do think that if 90% of the world wasn’t running windows but – for example – OpenBSD, the world would be a more secure place.

Davi Ottenheimer September 9, 2005 6:19 PM

@ David D

Just a quick note to mention that your comments echo Abe Singer’s “Security Without Firewalls” approach to distributed system security. Note that Abe is credited by Marcus at the end of his essay for contributing to the list.

Pat Cahalan September 9, 2005 6:20 PM

@ Marcus

“We have to remember that the history of computer security is still ridiculously small, and we’re looking at the very early stages of what promises to be a long-term evolutionary process.”

This is an excellent point. In particular, if you’re looking at it from a long-term 10,000 foot view, much of what you’re writing in your editorial makes sense -> in the long run, the current “release buggy software and then patch it to keep up with the Joneses” software distribution model will go away, I think. Sooner or later (probably after a barn burns down), someone will start demanding more secure software. Until that happens, though, we have the status quo…

The status quo, in my estimation, is suffering from the externalizing problem (where you and Bruce seem to differ, based upon your respective stances on the liability issue) -> the average consumer doesn’t want insecure software, but the average business keeps staffs of IT guys like me employed to fix the dikes when holes spring open. The top n% of the Fortune 500 companies buy buggy software because they’ve minimized their risk (at least in their view) by keeping a staff on hand to put out fires.

So the software company produces dreck, the corporations and the private citizens buy it (or are “given” it with their machine), but the security vulnerabilities can currently have a bigger impact (in a very real sense) on Joe Average’s computer than on a corporation’s -> corporations have robust backup strategies, etc., while Joe Average can lose everything in a worm attack.

Fiscally speaking, the large corporation is impacted more, but it’s been budgeted into an annoyance. The average user can lose everything irrecoverably, the corporation just has downtime (albeit expensive downtime).

So in the current market, the consumer (us) has no leverage to change things. I can’t force Microsoft to change their software development process by withholding my $100 any more than I can force Intel to stop developing faster processors I don’t need and start developing more reliable ones I don’t have to replace every 3 years. I simply don’t have the clout. Hell, I don’t even have the ability to withhold the $100 from Microsoft -> if I buy a computer from most vendors, it comes with Windows whether I want to pay for it or not. Somehow, in spite of the monopoly lawsuit, we’re still paying for Microsoft when we buy a new machine from most vendors.

However, if I could sue Microsoft for liability, suddenly I have much more clout. You have a valid point in that the trial lawyers will benefit far more than either I or Microsoft, but your alternative (hold out your money and wait) won’t work for me as a consumer…

Francois Kashy September 9, 2005 6:52 PM

@David D

Every system has unique vulnerabilities. If you harden the host but open the network, you’re not improving security. All you’re doing is shifting vulnerabilities from one target to another.

You’re right to say that limiting trust reduces your vulnerability. However, trust is vulnerable, a target subject to attack in any system. Moving trust away from the network and onto the host just changes the target from the network to the host – or from the firewall to the host application, as you suggested.

There is no such thing as 100% security. You need defense in depth. I’ll be happy to keep all of my hardened hosts, firewalls, and heterogeneous defense systems together.

Nick September 9, 2005 9:01 PM

What’s scarier is that Marcus’ list (and a couple of the minor dumbs) is applicable to my experiences in the broadcast media.

At one point, when working with graphic elements, we had one of those days where we kept doing an item over for minor style issues. My comment was, “Why do we have time to do things over, but never time to do things right?”

And the default permission boojum got us when a well-intentioned person took the ‘emergency only’ key (which was left in an unguarded key box) and issued a ‘reboot’ command instead of a ‘restart’ command. Brought the whole newsroom computer system to a grinding halt about fifteen minutes before the show. But the practice continued – because the company couldn’t equip people with laptops and modems, I was often required to troubleshoot over the phone. (“Go to the key box, get the key, go to the computer room, type these words …”)

And user education rarely works. It’s always easier to ask the resident geek/hacker/cracker/whatever we’re calling ourselves this week.

Marcus J. Ranum September 9, 2005 10:30 PM

“I think that network security/firewalls could be eliminated.”

I agree. But I think that to do it, we’d need what would amount to a complete re-write of all the networked apps that are being fielded today. There are too many protocols that interdepend, and those interdependencies create bewildering combinations of braindamage that we probably cannot comprehend.
– “It’s secure because it uses SSL!”
– “Where did it get its certificate?”
– “From the server!”
– “How did it get to the server?”
– “DNS, of course!”
– “What secures DNS?”
– “Isn’t the IETF working on that?”

Re-coding the network layer of all apps is actually an attractive and viable solution – but when I say this most people conclude I’ve lost my mind. 😉 Consider the insane cost we pay in terms of our current “security solutions” and maintenance practices and stack that up against a complete re-write and a re-write ain’t all that bad! Those of you who were around in ’98 may recall I seriously proposed this at my Black Hat keynote: re-code the Internet and blame it on Y2K. Maybe we can do it under the aegis of homeland security or to prevent flooding in New Orleans?

But, seriously, unless the underlying layers are secure, you’ll get whacked as the app traffic crosses the network, if there’s no firewall. Then you’re dealing with transitive-trust exploiting attacks over encrypted links. Ouch.

mjr.

Marcus J. Ranum September 9, 2005 10:41 PM

“What is it now? Either programs (including operating systems) of at least a certain size will always contain security holes due to their inherent complexity (i.e., you essentially can’t get them right, no matter how hard you try), or secure design is possible – so while you obviously still can make mistakes, your program will be more secure than one designed without security in mind.”

Here’s a great example: Wietse Venema’s PostFix MTA. A bunch of security guys and I used to bust all over sendmail for being bloated, buggy, and causing the “sendmail bug of the month club” (we were optimists back then). We all liked Venema’s PostFix a lot more because it was small.

Well, the other day I had a shocker: PostFix, in terms of lines of code, is quite a bit larger than sendmail is, now! But it’s still got a much better security history, even recently. Why? Because it’s designed around the notion of multiple cooperating processes with reduced privilege between them, minimization of mechanism, etc. All those good design principles that the Ancient Babylonian Programmers used to carve into the clay tablets they fed into their early granite mainframes.
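
To illustrate the principle with a toy sketch (this is my own minimal example of privilege separation, not Postfix’s actual code): the privileged parent only binds the port, and the hostile input is handled by a child that has already dropped its privileges.

    import os
    import socket

    # Toy privilege separation: bind the low port while privileged, then hand
    # the listener to a child that drops root before touching any network data.
    listener = socket.create_server(("0.0.0.0", 25))   # binding port 25 requires privilege

    if os.fork() == 0:
        # Child: give up privilege first, then handle untrusted input.
        os.setgid(65534)   # "nobody" gid (values are system-dependent)
        os.setuid(65534)   # irreversible: the child cannot regain root
        conn, _ = listener.accept()
        data = conn.recv(4096)   # a parsing bug here is confined to an unprivileged process
        conn.close()
    else:
        # Parent: does no network-facing work at all; just waits for the worker.
        os.wait()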

So – I think we can solve some of these problems but the user community is not (yet) ready to make the trade-offs that we’d likely need to make to solve system administration (for example). We can make systems that are more powerful but less complex. I absolutely believe that to be true. It is, however, easier to build systems that are more complex and more powerful.

Another example: remember when everyone was excited about microkernels in the late 80’s? Even NT was (supposedly) a microkernel. The concept was cool: have a “security executive” that was carefully coded, which made all the security decisions. Cool concept. Of course Windows’ interface was too slow when it was done as message-passing, so all that stuff got tossed. Oops.

Step back and think BIG. Think REALLY big, OK? I dare you! This industry is still a toddler! We are too young to accept mediocrity and that’s what we’re getting today. Why doesn’t someone do something paradigm-busting like make an entire company’s desktop infrastructure network boot from a single tamper-proof code image? Yeah, yeah, there are LOTS of implementation details but – why not?

mjr.

Marcus J. Ranum September 9, 2005 10:52 PM

“However, if I could sue Microsoft for liability, suddenly I have much more clout.”

Microsoft is such an easy opponent to clobber it’s not EVEN funny.

I wrote a column a while back about “inviting cockroaches to dinner” – my opinions about the end result of what happens to software if lawyers get their hands on it. My belief is that the results would be more horrible than we even want to contemplate.

What do you think of HIPAA, SARBOX, GLBA, FISMA, et al? Do you think they have helped security, or merely transferred billions of dollars to consultants, lawyers, and beltway bandits? If the lawyers get into the software liability game, HIPAA, SARBOX, GLBA, FISMA, rolled up together will be a mere f*rt in a hurricane in comparison!!

(Hint: the way to kick Microsoft’s a*s: defer further purchases from them until you get what you want in your products. Most businesses have plenty of copies of Microsoft-whatever licensed. Keep using that, defer purchases, and essentially put Microsoft on a little “hunger strike”. For example, I still happily use Office 95. It kicks boo-tay on my 2.2GHz machine. Microsoft is not getting any more $ from me for word processing. I am planning on stabilizing on Windows XP unless they conspire with Intel to force me to shift/upgrade somehow, in which case I will completely re-assess my O/S loadout. Now, if 10 of the FORTUNE 500 announced they were going to stabilize their software loadout of Microsoft products until Microsoft stopped shipping under a EULA – it’d be a total rout.)

mjr.

Marcus J. Ranum September 9, 2005 11:19 PM

Pat Cahalan:

“Sooner or later (probably after a barn burns down), someone will start demanding more secure software.”

Yeah… I keep waiting, too… But…

Let’s see, if Slammer didn’t get the high tech industry to pull its head out of the sand, maybe CodeRed will… No, wait, maybe YogSothoth2 will be the “wake up call”… Nope…

I’ve given up even believing that there IS going to be a wake-up call. Consider the case of New Orleans… Go google the congressional record for “New Orleans Levee” and you’ll see it’s been a lively topic of discussion (not action) for a while. Wake-up call?

We humans like to ignore wake-up calls that are not highly specific to us. I think that’s what’s happening with security. I keep asking senior IT managers “why do you tolerate mediocrity from vendors?” and not one of them has had the guts to say “because we’re mediocre, too.”

mjr.

Pat Cahalan September 10, 2005 12:55 AM

@ Marcus

I was actually referring to your “cockroaches at dinner” editorial in my last post.

To answer your question, “What do you think of HIPAA, SARBOX, GLBA, FISMA, et al? Do you think they have helped security, or merely transferred billions of dollars to consultants, lawyers, and beltway bandits?”, I think your implied conclusion is right -> they’ve transferred billions of dollars to consultants, lawyers, and beltway bandits, and they’ve done very little to improve anything in a practical sense, except make CTOs and CIOs paranoid about the future (which could be a good thing, I suppose).

I agree with you that in the short run, software litigation will be largely a nightmare, and will significantly increase the cost of software (costs that will be passed on to the consumer, of course).

However, taking your own point that “the industry is still in its infancy” into account, remember the state of the U.S. meat packing industry when Upton Sinclair wrote The Jungle? Oversight of any new industry (either by government through regulation or by the self-interested trial lawyers through liability litigation) is always a nightmare in the beginning. In a very real sense, the beginning of regulation almost always produces no net benefit, as everyone is scrambling to circumvent as many of the liabilities and regulations as they can to give themselves a strategic advantage over their competitors, and the regulatory agencies are unable to perform oversight.

I worked in a meat packing plant for a summer in college, though (Farmer John, for the record), and with a company-employed health inspector and not one but two on-site state health inspectors I can say honestly you could eat off the floor in most parts of the plant and you’d be perfectly safe. The horror stories of “Fast Food Nation” aside, I suspect most meat packing plants here in the U.S. are closer to the one I worked in than the ones in “The Jungle”.

That simply would not have come about without regulation of some sort. Companies, for the most part, act in their own self-interest, which in their case is the profit margin. If externalized costs aren’t transferred back to the corporate entity, it’s not going to bear those costs voluntarily… or if it is (because it’s run by an ethical board of directors), it will be unable to compete in a market where the competitors aren’t bearing those same costs, and can undercut prices.

I’m not an anti-corporate person by any means, and I do think that by and large corporations are run fairly ethically (maybe I’m deluding myself on this point). However, it only takes a few bad apples to ruin the balance of power for everybody, and an unregulated or unliable industry is going to attract the bad apples… and the software industry is both.

Take the power generating industry. Cut out regulation, and you get Enron’s price gouging in California (never mind the corporate malfeasance). Take the telecommunications industry. Cut out regulation, break up AT&T into the Baby Bells, and you get market competition driving down prices… but you also get Montana Power selling off assets to try and get into the unregulated data communications business and WorldCom and Global Crossing cooking their books. Is the gain that we as consumers saw in the drop in prices worth the hits we’ve taken to our 401ks and pensions? Maybe for me, I’m in my early thirties, but I doubt someone near retirement is overly thrilled.

Sure, the software industry is different from telecommunications and power generation and most other industries, for that matter – especially in the current infancy state of the industry. Even in the long run, you may very well be right -> imposing liability will have a net negative effect. I certainly agree that in the short run the effects will make me cringe.

However, the fact that you can use Office 95 or I can use Open Office (or vi or emacs or some other freeware, for that matter) doesn’t affect Microsoft in the pocketbook. Quite frankly, they don’t care if we’re educated customers, as long as the majority of the population is still willing to take their McDonald’s version of software. Even Fortune 500 companies that are careful customers probably aren’t going to stop buying Microsoft products (although they may demand lower prices) -> they’ve already dealt with the issue of the bad security by hiring people like us to work around it and make their systems recoverable. We may be telling them until we’re blue in the face how bad things are, but we all have anecdotes of having someone go over our collective heads to demand something insecure.

At least, until some worm or virus comes around that does significant real damage -> and I’m not talking “the US economy lost billions due to Melissa or Sobig” or what have you; that’s largely a magic number -> things were down for a while, revenue wasn’t being generated, but nothing was really lost. There’s a difference between “we weren’t getting revenue for a while” and “all of our data is gone and our on-site backups are encrypted with keys we don’t have” or “all of our hardware just ate itself” or “all of the main routers on the internet just burned themselves out”. Then you’re not just losing revenue, you’ve lost critical infrastructure and you can’t get it back in days or even necessarily weeks (or ever).

At that point, we’re all going to be affected, even the uber geeks who’ve been writing their own bug free web servers and mail servers for personal domains, because our banks will be down, or our credit cards won’t work, or some other consequence.

In short, regulation and/or liability laws are the only things I can see solving the problem of externality. Me or you withholding our pittance from Microsoft and Intel doesn’t do much.

PS -> regarding your last post, which you apparently wrote while I was penning the Oxford English Dictionary here…

If I were running a company with an IT budget in the hundreds of millions or billions of dollars (and I know they’re out there), I’d get the highest-up guy I could get at Intel and Microsoft and Sun and AMD together in one room and say, “Look, either you monkeys give me a corporate workstation hardware/software platform that will (a) last 10 years and (b) have a secure OS with a web browser, an email client, a spreadsheet program, a word processor, and an HTML authoring tool for presentations, or I’m taking my damn money and I’m starting my own damn corporate desktop company, even if it only serves me as a customer. You’ve got 8 months to give me a working proposal to keep my business”. Sure, the hit my stock would take on Wall Street may get me kicked off the board before I could follow through, but on this I agree with you wholeheartedly -> we pay for computing power that’s hundreds of times in excess of what we need, to run applications that are hundreds of times more complex than our requirements call for. But the point is moot, as I don’t control a pocketbook that large.

Unless you know somebody that’s looking to hire 😉

Andrew van der Stock September 10, 2005 3:31 AM

As the lead author of the OWASP Guide 2.0, I commend Marcus for stating his view that buggy software can be improved by performing code reviews. This has been my experience as well, but I am biased 🙂

However, I found Marcus’ comments at odds with my own experiences in the large corporate sector.

I currently work at a very large financial institution which for the last 25 years has made the commercial decision not to patch, not to upgrade, and not to be bleeding edge at any time. Most workstations are still NT 4.0. Most servers are still Solaris 7. We use an ancient Java.

This approach costs them enormous amounts of downtime and lost productivity, through:

a) known exploits that work against known configuration issues and poorly written old software … which the vendor has since fixed

b) known bugs which cause avoidable downtime, which are fixed by the vendor but are yet to be deployed

c) insecurable foundations – you simply cannot change, in any meaningful way, a large institution which has embedded such old software centrally into its daily operations.

d) avoidable downtime and data integrity lossage from worms and trojans which sweep the network on a regular basis

Sure, it saves them a bit in capex with their software vendors, as they seem to upgrade only once every 10 years, but it is demonstrably not secure and it costs them zillions in opex. It does not engender scalable data integrity.

At this time, the usual bogeyman (Microsoft), which is an easy target for many who do not know any better, has cleaned up its act. Its modern stuff requires less patching and less out-of-the-box twiddling to make secure, and is generally more robust in the face of bad software running on top of it.

For example, I have just finished a review of a system which contains both Windows 2000 and Windows 2003 hosts. The Windows 2003 “to be fixed” section is half a page long and has only low-risk findings. The Windows 2000 findings run to four pages, include many high-risk findings, and could be summed up by “Upgrade to Windows 2003 – it’s cheaper”. Microsoft has made this jump in every one of its recent products.

However, if a person followed Marcus’ recommendation to avoid buying buggy software, it’s my personal belief that they’d be shooting themselves in the foot, as the fixes are there to be had in the current versions. Ditto for Sun with Solaris 10 and its app domains, and RHEL 4 with SELinux enabled by default. This is good stuff, which will improve security even for those who set and forget.

The “don’t patch and don’t touch” approach simply does not wash with the on-the-ground experience of large corporates who do exactly that.

It’s about making reasonable choices to reduce the attack surface area. There are many ways to do that, and in my opinion, good-quality out-of-the-box configurations and newer software benefit from recent advances in secure coding. 10-year-old software has none of this.

Andrew

Victor Wagner September 10, 2005 6:49 AM

The whole idea of this essay is to turn the computer into the equivalent of a totalitarian state – anything is prohibited except what is explicitly permitted. This effectively turns a wonderful universal information-processing machine into the equivalent of a TV set.

Kevin Davidson has already mentioned here that the “default deny” approach is a nightmare for a programmer. And there is really no visible barrier between user and programmer. As soon as users understand that computers can do work by themselves, they begin to write macros, scripts, etc.

I have a mobile phone which uses such an approach to allow Java applets to connect to the Internet. Each time I start the LiveJournal or Jabber client it asks me whether a connection from this applet to the Internet should be allowed. And there is no way to tell the damn thing, “This application is designed to access the Internet, and I trust its author. If I started this thing, it means I intend it to connect to the Internet.” A perfect example of a “default deny” policy.

This essay is a manifesto for people who are afraid of computers and don’t trust them, or who don’t trust the people who are knowledgeable about computers.

Marcus J. Ranum September 10, 2005 8:12 AM

“remember the state of the U.S. meat packing industry when Upton Sinclair wrote The Jungle?”

Ummm… good point. Hm.

I can’t offhand think of an industry that has gotten more efficient as a result of government intervention or liability lawsuits, but there are some that have gotten better.

mjr.

Pat Cahalan September 10, 2005 11:46 AM

@ Victor

“As soon as users understand that computers can do work by themselves, they begin to write macros, scripts, etc.”

In my experience, this is not true. Generally speaking, I find that a very large majority of users (well over the 90% range) don’t code, and most of them have no desire to learn how.

Most people look at a computer as an appliance, not a platform for building tools. Of course, that’s looking at the current snapshot of computer usage in this moment in time -> it certainly was the case that 30 years ago people with computers were all coders, and it is probably true that sometime in the future the pendulum will swing the other way and a larger percentage of people will become coders.

I’m not afraid of computers, but I certainly don’t trust them. I really don’t trust networked computers – even my home machine that I’m writing this on doesn’t contain any sensitive information (not because I don’t think I could make it secure, just because I don’t want to expend the energy to make it secure “enough” for me to entrust it with my bank statements).

I have worked with enough users to know that only a percentage of them are “trustworthy”, regardless of their technical savviness. A non-trivial part of systems administration is cleaning up after people who knew just enough to get themselves in trouble 🙂

I’d say that if you were plotting trustworthiness as a Y axis and knowledge as an X axis, you’d have a curve with two relative maxima for trustworthiness. Complete neophytes aren’t very trustworthy, as they’ll fall prey to social engineering attacks that moderately savvy people will know to avoid. Moderately skilled people are somewhat trustworthy, as they know what they want the machine to do, but they’ve seen enough bad things happen to “stick to what they know” -> they don’t explore the boundaries of the machine, so they don’t often break things, but they know enough not to fall prey to simple social engineering attacks. Intermediate users (beginning coders) are generally less trustworthy than moderately skilled users, because they fall into the class of having enough knowledge to know how to break things, or they have enough knowledge to leverage things in a very insecure way. It’s only really advanced users who become trustworthy, because not only do they know enough to program with security in mind, but they usually don’t trust themselves when they start a project, so they take steps to make sure that if they do something wrong, it won’t cause catastrophic failure 🙂

Poke around on this website and you’ll find a common thread in quite a few of Bruce’s posts: security is hard, and even people who do security for a living often don’t do it correctly on the first N attempts. If you’re trying to build a non-trivially secure system, you must implement things to make the system harder to abuse (which has the unfortunate side effect of also making it harder to use) -> I’ll agree 110% with Marcus that a “default permit” policy is utter folly.

Pat Cahalan September 10, 2005 12:06 PM

@ Marcus

“I can’t offhand think of an industry that has gotten more efficient as a result of government intervention or liability lawsuits, but there are some that have gotten better.”

In uncomplicated terms, any regulation is going to result in less efficiency -> regulations almost always lead to more complex production requirements, which is going to increase cost.

What I don’t know (and suspect even Ph.D.s in social sciences and economics would have difficulty proving) is whether the loss in efficiency is made up by the benefits of regulation -> in other words, I don’t know if re-regulating the power industry in California (and the inevitable rise in “average” cost) would result in a long term economic benefit.

It’s just too damn complicated to compute -> the decreased likelihood of price gouging or corporate malfeasance is certainly a benefit, and one that could be estimated in economic terms, but when you come right down to it there are experts on all sides of the political spectrum who have done and will do studies that reach vastly different conclusions.

In a socio-political sense, it’s one of those points that comes down to faith more than proof – I happen to believe that the economic burden of regulation, spread across the consumer base, is less than the economic burden of unregulated business, because the failure mode of unregulated business (witness the savings and loan bailout) is so great. I can’t prove it, however – the system is too complex and the interdependencies are too involved.

It’s always a good “beer and peanuts” discussion, though.

Richard Braakman September 10, 2005 5:30 PM

One factor in the economic analysis is that regulations create a barrier to entry. Sometimes that seems to be their primary purpose… beware of “self-regulating” industries!

The software industry is famous for its low barriers to entry. Don’t destroy that lightly.

pdf23ds September 10, 2005 5:50 PM

Well, theoretically and simplistically, I think it’s pretty obvious that, at the very minimum, transferring all external costs to the corporation will result in the same net effect, since the costs would have to be paid eventually by someone anyway.

But, as you say, it’s nowhere near that simple. For one, environmental costs are difficult to quantify, since they cause long-term damage to common resources and conditions, and, in effect, change the very environment in which we operate. If certain of those changes can be adapted to, they might not need to be considered as costs, but that’s a hard question, too. And then there’s the possibility that some actions cause irreversible and terrible harm that ends up killing us all in twenty years, and what’s the cost of that?

Neigborcat September 11, 2005 7:42 AM

On the accuracy of Ranum’s article:

I’m not an IT expert, and I don’t play one on TV. I’m a process design and “quality” engineer who specializes in new product launches, and I almost certainly fall into the “just enough computer coding knowledge to be dangerous” category. But a large part of my job falls into the “failure analysis and prevention” category, and from this general perspective, Ranum’s essay is spot on the mark.

The industry I currently work in has historically taken the “penetrate and patch” approach to manufacturing process failures, but in the world of physical manufacturing, this means costly redesign of tooling and even entire plant lay-outs when bringing a product from concept to production.

Our foreign competitors have provided an example of “pause and study, and do it right the first time”, but the domestic companies I work for interpreted this only as “reduce re-design iterations” and have failed to provide the engineering resources and management savvy to create robust initial designs.

The result? Products are being launched with more flaws than ever before, but the tools traditionally used to fix them have been taken away because “we aren’t supposed to need them anymore”. In the industry jargon of our time, this is being called “Lean Manufacturing”.

I’m writing this not to bemoan the dismal state of US manufacturing competitiveness, but to point out that Ranum’s observations have the ring of truth from a basic principles point of view, and can be applied far beyond coding and IT security.

Davi Ottenheimer September 11, 2005 8:07 AM

@ Neighborcat

I think you’re right on target. I was starting to wonder about this in similar terms. Most products in the market lack quality in some way and we (consumers) almost always have to rely on popular data to determine adoption rather than individual ability to determine quality. When people ask me how it is possible that anyone can accept insecure software from vendors, I ask them in response “do you drive a Ford?” Then I ask, regardless of whether they say yes or no, “have you heard about the latest recall?”

I figure as long as Consumer Reports continues to get it wrong, so will we. I mean, they can only look into the past, not the future, and buying software from vendors is about the future as much as the past.

@ Richard Braakman

“The software industry is famous for its low barriers to entry.”

I like this phrase. Unfortunately, barriers to entry in the US software industry seem to come more from 900-pound-gorilla companies actively shutting out competition than from regulations. In fact, I cannot think of a single security regulation that stifles innovation. My experience has been that the regulations are actually opening doors for entry into areas that were closed or did not exist before.

Moreover, security regulations themselves are so new that they are largely based on “best practices” rather than some difficult bureaucratic hurdle. That means a new company would actually have a far easier time achieving compliance and gaining a competitive advantage (higher quality output), while retrofit costs for existing companies can be virtually prohibitive for compliance in areas such as development lifecycle controls, identity management and encryption.

Davi Ottenheimer September 11, 2005 8:29 AM

Incidentally, I’m not pro/con Fords. They just make for a good discussion point as they always seem to have recalls and few people pay attention, even if they own one. The latest was apparently September 7th, 2005 for about 4 million vehicles:

http://www.ford.com/en/vehicles/owners/recalls/default.htm

“Ford Motor Company has conducted an intense investigation to determine the cause of under-the-hood fires related to speed control deactivation switches.”

I think this is related to their recall in January of about 800,000 vehicles but different than the recall in June of 300,000 large pickup trucks. See what I mean?

A similar example of the problem is the oft-repeated story of how GM, trying to achieve the same level of quality as Toyota, initiated an automation program that ended up costing more than if they had just bought Toyota and used it to produce their cars, and they still did not achieve a higher level of quality. I think that eventually led to a partnership in the mid 1980s that actually helped.

What was the difference? Toyota understood from the beginning that the objective of automation was not to reduce labor cost. Human-hours per unit was still a benchmark, but the highest quality control was achieved by Toyota’s recognition of “proper” human judgment to navigate varying conditions.

http://www.eco.utexas.edu/Homepages/Faculty/Norman/long.extra/Projects.F97/GM/Reasons.html

In that sense, I think it fair to say Marcus’ list of the “dumbest ideas” is a help if it will lead to better judgement and therefore increase quality/security.

Davi Ottenheimer September 11, 2005 9:15 AM

@ Pat

“In uncomplicated terms, any regulation is going to result in less efficiency -> regulations almost always lead to more complex production requirements, which is going to increase cost.”

Not necessarily. If you look at the larger “cost” picture the numbers usually say that practicing security early and often in development cycles can reduce the overall cost of developing and maintaining a product by ten times or more. Back in the day, this used to be called just regular computing best practices (http://www.stevemcconnell.com/ieeesoftware/bp05.htm), but today it’s increasingly becoming the realm of security regulations.

It seems to me security regulations are kind of like those pesky regulations that help ensure your house is less likely to catch fire. They might add nominal costs to the construction phase, but they significantly lower the real cost of the house. And they probably started out as simple electrical or extinguisher best practices, but now they’re part of the fire code.

Steve September 11, 2005 11:41 AM

“We can make systems that are more powerful but less complex. I absolutely believe that to be true. It is, however, easier to build systems that are more complex and more powerful.”

I think everyone in IT should read Joseph Tainter’s Complexity, Problem Solving, and Sustainable Societies. The word “computer” does not appear in the paper, but the central theme of complexity fits perfectly. Ever increasing complexity eventually leads to decreasing utility. Sometimes I think we’re getting there fast.

Various places to read it. Here’s one: http://dieoff.org/page134.htm

Pat Cahalan September 11, 2005 2:10 PM

@ Davi

What I meant to say was, in the short view, any regulatory imposition increases overall production cost. This is true if the regulation is about security or occupational safety or whatever -> even if you’re already complying with the spirit of the regulation, complying with the letter usually means some sort of red tape, and that means increased cost, even if it’s just a temp guy filling out forms of some sort.

“If you look at the larger ‘cost’ picture the numbers usually say that practicing security early and often in development cycles can reduce the overall cost of developing and maintaining a product by ten times or more.”

Only if you’re compelled to produce a secure product 🙂 If you don’t give a hang about security, you don’t have to re-engineer your product to make it more secure.

It is true, though, that if you are compelled to produce a product with some sort of regulated constraints (security, safety, whatever), it’s much cheaper to fix bad design early. A Ph.D. in IS management pointed out to me recently that a design flaw that costs $n to fix at the concept stage costs $10n to fix at the design stage, $100n to fix at the manufacturing stage, and $1000n to fix at the recall stage…
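
Just to put toy numbers on that rule of thumb (purely illustrative figures, not data from that conversation):

    # 1x / 10x / 100x / 1000x rule of thumb for the cost of fixing a design flaw.
    base_cost = 500   # dollars to fix one flaw at the concept stage (made-up figure)
    flaws = 20
    for stage, multiplier in [("concept", 1), ("design", 10),
                              ("manufacturing", 100), ("recall", 1000)]:
        print(stage, flaws * base_cost * multiplier)
    # Same 20 flaws: $10,000 caught at concept, $10,000,000 caught at recall.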

Pat Cahalan September 11, 2005 4:50 PM

@ Richard

One factor in the economic analysis is that regulations create a barrier to entry.

This isn’t necessarily always true, but is certainly a danger to protect against in many ways. You don’t want regulation (or liability) to halt innovation. Slowing it down in the name of quality is probably not a bad idea, though.

Sometimes that seems to be their primary purpose… beware of “self-regulating” industries!

Even governmentally-regulated industries have to be watch-dogged for this. It’s very true that companies can and do contribute to politicians to amend regulations in a way that benefits a single corporate entity, and this has happened in the past and will continue to happen, as long as we don’t hold our politicians accountable. It doesn’t mean that regulation isn’t the best tool we have for “de-externalizing” costs, it just means we need to keep our eye on the tool. In some instances it might be a better idea not to have formal regulations, but I stand by my earlier statement -> an unregulated and unliable industry is a bad thing. You’ve got to have some method of making external costs internal.

The software industry is famous for its low barriers to entry. Don’t destroy that lightly.

Not to disparage designers and coders as a group, but I’ve seen so much crappy software that a higher barrier to entry seems like a good idea to me -> I can’t count the number of man-hours I’ve seen wasted trying to uninstall/fix bits of freeware or shareware (or commercial software) installed by the unwary that broke a working computer, not to mention trying to secure garbage.

True, some major responsibility for this rests on the user who installed the software (and the sysadmin who gave them the rights to do so), and I’m not trying to pass the buck entirely to the coders of the bad software. I also realize, as Marcus pointed out, that non-coders (marketing guys or executives or customers) can demand changes in a product that cause these problems, too. I’m not entirely blaming the people that write the code.

But wherever the ultimate responsibility lies, it’s blindingly obvious that a large majority of software out there is seriously lacking in overall quality, of which security is only a part. Enforcing a minimal amount of quality seems to be a good idea to me.

Aside from all this is of course the problem of establishing a framework for regulation, which is admittedly a giant nightmare in itself. Bruce has posted quite a few notes on things like the “Trusted Computing Platform” here himself… I admit I don’t have an easy answer to this one.

I hope you guess my name... September 13, 2005 4:05 AM

@ Pat

@ Richard

One factor in the economic analysis is that regulations create a barrier to entry.

This isn’t necessarily always true, but is certainly a danger to protect against in many ways. You don’t want regulation (or liability) to halt innovation.

Why the hell not? What sense does it make to race ahead when practically the entire collection of all-authoritative experts is waving big neon flags at you all over the place with warnings like: “Slow Down.” “Warning: This road is paved with tacks.” “Please stop trying to cross this bridge. We’re trying to rebuild it from the other side because it’s been a known danger for a long time and we already know where you’re trying to go anyway so please trust us! PLUS, if you had just made that left-hand turn at Albuquerque you would have hit the big posted Detour sign – it’s ORANGE – Every time you try to cross this bridge in your race around the track we have to stop OUR more experienced “Innovation”, which you will soon enough realize is actually the same thing as Back-Pedaling the hard way… which is all you’ll be doing for the rest of YOUR life if you don’t stop flying by all the people waving flags because you were so sure you were getting somewhere we’ve all never gone before. Sorry to disappoint you if you’re still reading the sign. But all is not lost! Cheer up, bang a U-turn, and follow the signs towards Albuquerque. At your speed, you might still be first in line on the way back, and we’ll finally be DONE with the new bridge so you can SAFELY race back and forth and Innovate even FASTER without EVER having to COMPLETELY STOP AND READ A DAMN REGULATED WARNING SIGN LIKE THIS AGAIN! Sorry if stopping cost you more money somehow, but it’s not our fault you bought that damn car that goes so damn fast on tracks that aren’t built to support it. Good thing it has a seatbelt though!”

Honestly, I really think it stinks that you specifically said “but is certainly a danger to protect against in many ways”

Good Lord. There isn’t even ONE way. Where do you get off with “many”? All you just did was frighten a portion of folks from trusting regulations and mandates all over again (which just stifled innovation), raise the blood pressure of another portion of folks who are tired of not being able to trust new regulations and mandates, discourage those of us who actually know what a proper mandate is and how to write one that is only done for your benefit, and encourage the rest of them to believe that innovation must stop at nothing! When there isn’t even anything left to innovate! (Never fear, everyone else: by the time we fix everything, (…) we’ll be able to start innovating all multi-threaded, cross-referencing, massively distributed multi-dimensional versions of everything that already exists in various forms one day real soon now.)

Sometimes that seems to be their primary purpose… beware of “self-regulating” industries!

Even governmentally-regulated industries have to be watch-dogged for this. It’s very true that companies can and do contribute to politicians to amend regulations in a way that benefits a single corporate entity, and this has happened in the past and will continue to happen, as long as we don’t hold our politicians accountable.

Exactly. The beautiful thing about this point is that the year is 2005. As some have noted and Marcus specifically tried to spell out: the generations are moving right along. The current pulse on planet Earth and the political (etc.) landscape the last time I finished checking (about :26 ago) is still nicely gliding along toward the next plateau that has already been prepared for an intelligent politician.

Don’t scare away the almost intelligent politicians or the almost politically inclined technologist!

Please, PLEASE, know a heck of a lot of facts about a heck of a lot more things and learn the art of embracing the situation as it IS before making such enormous waves.

All you just did was slow things down again. And it takes a lot longer for us old hats to go triage the entire holistic view (obviously) than it does for you to type that short-sighted and less experienced “comment” that has the potential to do more damage than you can currently grasp at your stage in the race.

/.

I hope you guess my name... September 13, 2005 4:23 AM

@ Pat Cahalan (I’m not picking on you, I’m just responding to this blog backwards).

@ Davi

What I meant to say was, in the short view, any regulatory imposition increases overall production cost. This is true if the regulation is about security or occupational safety or whatever -> even if you’re already complying with the spirit of the regulation, complying with the letter usually means some sort of red tape, and that means increased cost, even if it’s just a temp guy filling out forms of some sort.

That’s right! Let’s get used to it and move forward anyway. Eventually, as we move forward, the glaring injustices and responsible parties will start showing up in such undeniable patterns that someone somewhere will pay people back before they actually DO all get sued.

I hope Pat’s perfectly accurate articulation of the situation above doesn’t prevent anyone from doing what needs to be done to get the problem corrected.

With comments like this that can actually be propagated and understood at all levels of any organization, it won’t take half as long to get to the end of this one.

I hope you guess my name... September 13, 2005 4:26 AM

@ Marcus

The next time some C-Level suit says to you:

“That sounds great, but our enterprise network is really complicated. Knowing about all the different apps that we rely on would be impossible! What you’re saying sounds reasonable until you think about it and realize how absurd it is!” To which I respond, “What about the title ‘Chief Technology Officer’ are you earning if you don’t know what your systems are running and/or being used for?”

Throw this URL right back at them: http://www.datascientific.com.

I hope you guess my name... September 13, 2005 4:28 AM

@ Pat

Only if you’re compelled to produce a secure product 🙂 If you don’t give a hang about security, you don’t have to re-engineer your product to make it more secure.

You soon will!

I hope you guess my name... September 13, 2005 4:59 AM

@ Richard

Call me crazy, but I believe that in order to design a secure system of any sort, you need to know how to break it.

Not crazy.

But you don’t need to HAVE broken it, either. (hm. bad grammatics but this isn’t english and that word choice was specifically aimed to suit yours). I think that’s the finer line folks are usually trying to draw. Which is true.

To keep it simple, as an example, you don’t have to learn machine-level code (though it’s fun and straight to the point). But if you have a binary and hex math starting point, actually listen to your instructors, read the little C book from cover to cover, and do what your instructor suggested (don’t go straight to the keyboard – get out your pencil and write it all out before you code), you certainly develop the capacity to very quickly understand how things could break and do predictably(!) and potentially damaging things. I mean geez, look at buffer overflows (see the short sketch below). Not rocket science there. I never popped a network stack either, but I definitely knew how it could happen if someone did not anticipate every possible scenario within their own code.

Which, in my school, hardly anyone ever did. Usually, just hitting a key or two that they never thought anyone would have a reason to hit would set them straight (after their tantrum: “Why did you push that button! It said only push A, B, C or D!!!!!”).

They actually were always the same ones that ran straight to the keyboard. Their stuff ALWAYS broke and threw errors that they would spend what seemed like hours agonizing over, until someone would feel sorry for them, or felt up to the challenge of being the first to point out the line of code that started the whole problem in the first place.
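Since buffer overflows keep coming up, here is a minimal made-up sketch of the kind of bug being described: a fixed-size buffer and a copy that never checks the input length, so any argument longer than the buffer writes past its end. (The program and its eight-byte buffer are invented for illustration; they are not from the article.)

```c
/* Minimal illustration of the classic unchecked-input bug discussed above.
 * The author assumed a one-character answer, but strcpy() copies whatever
 * it is given into an eight-byte buffer with no length check, so a long
 * argument overflows the stack. A careful version would check strlen()
 * first, or use snprintf() with the buffer size. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    char answer[8];                 /* only ever expected "A", "B", "C" or "D" */

    if (argc < 2) {
        fprintf(stderr, "usage: %s <answer>\n", argv[0]);
        return 1;
    }
    strcpy(answer, argv[1]);        /* no bounds check: the whole problem */
    printf("you answered: %s\n", answer);
    return 0;
}
```

The boring fix is to refuse anything you didn’t explicitly plan for, which is the same default-deny idea in miniature.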

It’s not like the industry started out with teams of people writing code together, and blindly trusting and calling “canned” libraries or object oriented code or whatever you use today that someone else wrote.

There was a time where each pre-written “library” call was trusted merely because you actually READ it and made sure it did exactly what you wanted it to do the way you would have done it (or better) before CALLING it.

It absolutely DOES help to have broken things, too, though. Had I continued coding I most certainly would have started BREAKING THINGS the instant Object Oriented Technology was introduced.
In fact, that’s the whole reason I stopped!! It sucked!

I hope you guess my name... September 13, 2005 6:49 AM

@ Victor

Whole idea of this essay is to turn computer into equivalent of totalitarian state – anything is prohibited except what is explicitely permitted. This effectively turns wonderful universal information processing machine into equivalent of TV set.

What are you talking about? No one’s talking about message content protocols or analysis or restriction or even reshaping of message content.

Slow down. Soon we’ll seriously and specifically categorize that stuff to make it easier to navigate, sort and sift through any way we want, but no one will ever get away with stifling or limiting or reshaping our actual individual freedom of speech.

We’re all talking about architecture and infrastructure problems in hardware and software.

Which kind of requires a kind of totalitarian state... right?

The Internet is still everyone’s playground. Even though it’s all whacked out of balance.

Pat Cahalan September 13, 2005 9:53 AM

@ I hope…

Why the hell not? What sense does it make to race ahead when practically the entire collection…

Because you don’t want a regulation to inspire “security for the sake of security”. If you agree with Bruce’s overall stance of “security is measuring trade-offs”, there are times when insecurity can be (relatively) a good thing.

If you’re writing a video game that is going to be played on a game console, having a buffer overflow that can be accessed by pushing all of the buttons on the controller 100 times in 3 seconds probably isn’t that big of a deal. Sure, someone can rig up a modified controller that will force the junk input into the machine and give you root on a Playstation4, but really, do we need to force game designers to write their code against that sort of an attack? It might be in the best interests of Sony to do so (and it might be good coding practice), but I don’t see how someone could leverage such a bug into a worm that is going to affect the general public, which is what a regulation is supposed to be for, right?

That’s precisely the sort of regulation that can be used to leverage a single corporation’s monopolistic position. Blizzard sends a letter to their congressman, “this upstart game company produces a game that has four buffer overflows in it”, and suddenly state inspectors are crawling over said upstart company.

Sure, it’s a silly example, but it illustrates the point (and the analogy holds -> these sorts of things are done today). Simply passing a law that says “thou shalt not write buffer overflows” would be horrible regulation.

There are other reasons to be careful with your regulations. IMO, it’s perfectly fine to hold Big Tobacco liable for smoking related health problems (because we don’t want to pay for them with our tax dollars), but should we pass laws that make it illegal to smoke?

GaryO September 13, 2005 4:04 PM

Marcus – great article. Very intelligently written and very easy for a mid-level user such as myself to understand. My question is this: How can I as an XP Home user rule out items 1-3? I get tired of constantly having to update virus protection and spyware protection. It shouldn’t be a requirement to using the PC. And I can’t tell you how many complaints I’ve heard from my family because I do try to use a Default Deny philosophy when it comes to firewalls and virus protection. It slows them down when daddy has to come in and make changes to the firewall so that they can continue surfing. But, again, it shouldn’t be necessary. So, my question once more, what can I as a home user do about this?

Ari Heikkinen September 13, 2005 5:11 PM

I think the worst thing that’s ever happened to computer security is actually firewalls and all kinds of anti-whatever programs. Those must be numbers one and two among the best excuses for security through obscurity ever invented.

Ari Heikkinen September 13, 2005 5:14 PM

Just to add, how many times, when asking someone about computer security, have you gotten “I’m running firewalls and antivirus software so I’m safe” replies?

Andy B September 13, 2005 5:16 PM

@ Marcus

Why doesn’t someone do something paradigm-busting like make an entire company’s desktop infrastructure network boot from a single tamper-proof code image?

That’s funny… I actually proposed a similar concept at the Fortune 500 company where I work. It was received remarkably well (it wasn’t accepted, but it isn’t completely off the table for the future). We get so caught up in how to manage patching, signature updating, and other “necessary” tasks that we don’t stop to think about ways to design systems so we don’t need them.
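For what it’s worth, the “tamper-proof image” half of that idea reduces to something like the sketch below: before handing control to the image, compare a freshly computed digest against a known-good value kept where the clients cannot write to it. Everything here is hypothetical: the file name, the expected value, and the toy checksum, which merely stands in for a real cryptographic hash (and, in practice, a signature) over the image.

```c
/* Toy sketch of verifying a boot image against a known-good digest.
 * The rolling checksum below is NOT a real hash; a production system
 * would use something like SHA-256 plus a signature check. The image
 * file name and the expected value are hypothetical placeholders. */
#include <stdio.h>
#include <stdint.h>

static uint32_t checksum_file(const char *path) {
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    uint32_t sum = 0;
    int c;
    while ((c = fgetc(f)) != EOF)
        sum = sum * 31u + (uint32_t)c;   /* toy rolling checksum only */
    fclose(f);
    return sum;
}

int main(void) {
    const uint32_t known_good = 0xDEADBEEF;               /* placeholder value */
    uint32_t actual = checksum_file("workstation.img");   /* hypothetical image */

    if (actual != known_good) {
        fprintf(stderr, "image digest mismatch: refusing to boot\n");
        return 1;
    }
    puts("image verified; handing off to the loader");
    return 0;
}
```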

Javier September 14, 2005 3:27 AM

Even though Marcus’s essay is quite good in some aspects, I really don’t understand why somebody who works at a company that develops a vulnerability assessment product (i.e. Tenable Security) and even does training on their pen-testing products is so against “penetrate and patch”. I do agree that it shouldn’t be an end in and of itself, but crashing (trying to find security issues in) software should IMHO be part of standard QA procedures.

Would you drive in a car model that has never been smashed against a wall to make sure that the driver survives the crash? That’s what the European New Car Assessment Programme, aka Euroncap, does. I don’t know if car manufacturers in the US do this kind of test, but similar (software-wise) tests are also described in the Common Criteria and in NIST guidelines.

Jose September 14, 2005 11:11 PM

“If you’re writing a video game that is going to be played on a game console, having a buffer overflow that can be accessed by pushing all of the buttons on the controller 100 times in 3 seconds probably isn’t that big of a deal. Sure, someone can rig up a modified controller that will force input the junk into the machine and give you root on an Playstation4, but really, do we need to force game designers to write their code against that sort of an attack?”

Pat,
Any law generally shouldn’t apply to everyone in all cases. In case you haven’t noticed it yet, the example you gave is probably not a good one because Playstation4 is likely to be a very network-ready beast.

I suppose that an application that runs on someone’s watch and can go no further should not be regulated the same way as if it ran on a watch that was capable of participating in DDOS.

It’s like almost any law. We don’t care if you run naked around your farm shooting off the shotgun at every vase in sight while cussing and making outrageously false statements about people in public office. You can even pull out the muffler and catalytic converter from your truck and run the truck around the barn haystack 1000 times at 200 miles per hour (if all the gas stays inside — you must inhale the toxins to cause a drop to sufficiently low levels before unsealing the barn)… But once you are in a position of interacting with the public commons, things change a little.

Can the application use the network or not? — that could be a differentiating criteria within a typical regulation, for example.

I hope you guess my name... September 15, 2005 4:18 PM

@ Jose,

THANKS. I didn’t have the energy or mental capacity left in me to even try to respond to what Pat Cahalan posted.


wumpus September 15, 2005 10:13 PM

Jose,
“Can the application use the network or not? — that could be a differentiating criteria within a typical regulation, for example.”

There are probably some interesting internal battles about security and buffer overruns inside Microsoft. Hacking an Xbox requires certain known buggy titles to run “saved games” which proceed to hack the box. I strongly suspect that these are buffer overflows.

I will admit that the only time I could possibly care about whether someone is using a hacked Xbox is if I am playing them over a network and wondering whether they are using a bot for assistance.

Wumpus

Pat Cahalan September 22, 2005 10:05 AM

@ Jose

Any law generally shouldn’t apply to everyone in all cases.

Agreed. However, technical details are historically misunderstood by the non-technical. Remember the infamous FBI raid on Steve Jackson Games? Passing laws that regulate software writing can lead to bad interpretation of the law (note: I’m not saying regulation is therefore a bad idea, I was just agreeing with another poster that you need to be careful when you regulate things).

In case you haven’t noticed it yet, the example you gave is probably not a good one because Playstation4 is likely to be a very network-ready beast.

Sure. Many game consoles are already network-ready. But the example is still a good one -> you can’t leverage a controller-input buffer overflow over the network. At least until they have networkable controllers. Did you not read the whole example?

I suppose that an application that runs on someone’s watch and can go no further should not be regulated the same way as if it ran on a watch that was capable of participating in DDOS.

Right, that was my point. However, making distinctions in regulation isn’t easy. That’s all I was saying.

It’s like almost any law. We don’t care if you run naked around your farm…

And like almost any law, it can and probably will be misused occasionally. Your example of the naked guy isn’t accurate -> if he lives next door to (or even in the same county as) someone who objects enough to his behavior and who has a connection to the local legal enforcement community, he’s going to be harassed. Given his behavior, he probably deserves a little harassment, but you get my point.

When you pass a law you’re giving power to the legal enforcement community. You have to be aware of the trade-offs there. To assume that power will always be used the way you originally intended is naive.

Can the application use the network or not? — that could be a differentiating criteria within a typical regulation, for example.

This is overly simplistic, and is an example of why regulation wouldn’t be easy -> You can have a local application that has a local overflow that enables someone to take over the machine remotely, if they can get remote access to the machine through other means.

As an example, they steal someone’s login, execute a local buffer overflow in some non-networkable application, gain control over the machine, and can now attack the network.

Any reasonably effective regulatory ruleset will be fairly complicated, and that much more difficult for a non-technical legal system to interpret.

Tethered Rose September 24, 2005 1:50 PM

“difficult for a non-technical legal system to interpret”

Is the only thing I can agree with in that sentence.

Tethered Rose September 24, 2005 1:54 PM

“(Hint: the way to kick Microsoft’s a*s: defer further purchases from them until you get what you want in your products. Most businesses have plenty of copies of Microsoft-whatever licensed. Keep using that, defer purchases, and essentially put Microsoft on a little “hunger strike”. For example, I still happily use Office 95. It kicks boo-tay on my 2.2Ghz machine. Microsoft is not getting any more $ from me for word processing. I am planning on stabilizing on Windows XP unless they conspire with Intel to force me to shift/upgrade somehow, in which case I will completely re-assess my O/S loadout. Now, if 10 of the FORTUNE 500 announced they were going to stabilize their software loadout of Microsoft products until Microsoft stopped shipping under a EULA – it’d be a total rout.)”

Start assessing your O/S loadout. They’re not conspiring with Intel.

Tethered Rose September 24, 2005 2:02 PM

“If I was running a company with an IT budget in the hundreds of millions or billions of dollars (and I know they’re out there), I’d get the highest up guy I could get at Intel and Microsoft and Sun and AMD together in one room and say, “Look, either you monkies give me a corporate workstation hardware/software platform that will (a) last 10 years and (b) that has a secure OS with a web browser, an email client, a spreadsheet program, a word processor, and an HTML authoring tool for presentations, or I’m taking my damn money and I’m starting my own damn corporate desktop company, even if it only serves me as a customer. You’ve got 8 months to give me a working proposal to keep my business”. Sure, the hit my stock would take on Wall Street may get me kicked off the board before I could follow through, but on this I agree with you wholeheartedly -> we pay for computing power that’s hundreds of times in excess of what we need, to run applications that are hundreds of times more complex for our requirements. But the point is moot, as I don’t control a pocketbook that large.”

Um, I hear Mac is where it’s at.

Craig Hubley January 25, 2009 5:37 PM

Without addressing the cultural questions, the implications of having a glorified hacker culture, or the commercial or military reasons why this might be encouraged in the US, all of which are out of scope for organizations:

This secure-enough OS, one that has already lasted and will certainly last another ten years and has these core features, exists: it’s called FreeBSD. Linux has too many variants and configurations to secure all of them, but there are secured versions of it too that work. Is this not more of a question of allowing incompetent or cowardly or conflicted or unmotivated managers to make technical decisions than one of any lack in tools or the profession? Corporations that just fired a top executive at random (I mean really at random) after each major security incident or IT loss might well do better, as self-interest forced those executives to pick CTOs based on performance, not loyalty to a particular vendor or architecture they inherited from the last CTO. Pretending that the tools of management are somehow not wholly dependent on what’s on desks, laps and strapped onto our heads, and on the data schemes and ontologies that structure their view of the world, is dangerous. There’s an argument to put CTOs in charge of the accountants just to be sure that corporate structures and controls do not become dependent on numbers that cannot be audited in real time or reports with no traceable logic – to just refuse to build non-transparent corporate structures full of silos. (See Don Tapscott’s “The Naked Corporation” for a good-enough summary of these issues)

CTOs that don’t implement at least one disruptive major architectural change in their organizations during their tenure are probably not doing their job. It’s the lack of best practice exchange and the silo’ed structure of most companies that cause obvious things to be neglected. Seriously, do we believe that corporate security problems are generally caused by failure to implement the very latest OS patch the second it’s released, or by failure to follow up on more obvious simple training and habit problems regarding hardware, passwords and user habits?

And if diligence, followup, courage and a sense of long term proportionality are what we need most, then labelling the “hacker” as an inherently bad guy would only turn them into allies of corporate transparency, stakeholder activists, NGO and government probity. That is, everyone competent at finding out the facts no matter what would be locked out of corporate life at least in the US. What would happen to the people with these skills? They’d be allied to all forces (criminally acquisitive, commercially competitive, militarily threatening, socially progressive, journalistically probitive, politically opportunistic) outside the corporate arena. So the seduction of crackers (“hackers” in popular parlance, and if you want this to change pick a word that isn’t racist as the alternative) is more a function of corporate players wanting to retain a recruiting pool that will otherwise fall to their many foes.

I’d debate this issue with Ranum any time, though I can see his point. I also admire his commitment to transparency in female clothing (in his photography). 😉 We all do what we can to expose a bit more of the truth.
