Threat Modeling at Microsoft

This is an excellent series of blog posts by Microsoft's Larry Osterman about threat modeling, using the PlaySound API as an example. Long, detailed, and complicated, but well worth reading. The last post is particularly good.

Posted on October 1, 2007 at 5:48 AM • 46 Comments

Comments

Roxanne • October 1, 2007 7:06 AM

"Law #1: If a bad guy can persuade you to run his program on your computer, it's not your computer anymore."

What about all of those "Automatic updates" that MS, anti-virus companies, ad infinitum, run? Does that mean that they own my computer? Yikes...

Terry Cloth • October 1, 2007 7:26 AM

@Roxanne:
@Gustavo Bittencourt:

Your points can be generalized as

"Law #1: If a guy can persuade you to run his program on your computer, it's not your computer anymore."

The nature of the guy is orthogonal to the meaning of the law.

Gustavo Bittencourt • October 1, 2007 7:44 AM

@ Terry Cloth

"The nature of the guy is orthogonal to the meaning of the law."

No, it isn't. I run some guy's program every time I use my computer and I am sure that you do the same.

Thomas • October 1, 2007 8:34 AM

I'm underwhelmed.

"""if the ACL on the data store prevents anyone but an admin from writing to the store, it's probably safe to trust it, for example"""

Given that most Windows users run as admin, this is hilarious.

In a comment reply, Larry writes:
"""In general, you treat code that run in the same process as you as being fully trusted - after all, there's nothing that this code could do that would compromise the machine/process."""

Wow... that sure explains a lot about Microsoft security.

And then there's this gem:
"""Heck, the audio engine threat model diagram doesn't include DRM either, because the DRM system doesn't functionally change the data flow for the audio system."""

DRM is not included in the threat model? You've just added a whole new class of attacker, the legal owner of (this copy of) the content, and you're not changing the threat model?

"""Igor, I care about this stuff [security] passionately. And I blog about what I care about.

I have no idea what the deal is with autopatcher, and have no opinion one way or another."""

How can you claim to be passionate about security, but have no opinion on a tool that is (or at least, used to be) widely used to apply security patches?


"""Elevation of Privilege threat doesn't apply to data stores (since a data store simply holds data, it operates at no privilege level)"""

Wrong. _where_ and _how_ the data is stored is frequently used to make decisions about trust. If I can 'elevate' my data to seem trustworthy, you might trust it when you shouldn't.
Can I set an ACL that implies it was never changed by untrusted users?
Can I convince you to somehow copy it to disk (security zone "c:\") from the 'net (security zone "http://evil.com")?
Can I move it to another location on disk which implies more trust?

The icing on the cake? A childish exposé of the recent Firefox vulnerability. Honestly, does someone working at Microsoft really need to go anywhere else to find a web-browser vulnerability to dissect?
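
The ACL rule of thumb being argued about here, and Thomas's "elevate my data to seem trustworthy" questions, come down to one decision: trusting a store based on who could have written it. A minimal sketch of that decision, assuming a POSIX system rather than Windows ACLs (my analogy, not Larry's): trust the store only if nobody below the administrator could have written to it.

```python
import os
import stat

def store_is_admin_only(path):
    """Trust a data store only if nobody below the administrator
    could have written to it: root owns the file, and neither group
    nor world holds write permission."""
    st = os.stat(path)
    owned_by_root = (st.st_uid == 0)
    # Reject if group or others hold write permission.
    writable_by_others = bool(st.st_mode & (stat.S_IWGRP | stat.S_IWOTH))
    return owned_by_root and not writable_by_others
```

Thomas's objection still applies: on a box where the user runs as admin, this check proves nothing, because the attacker already controls the metadata the check relies on.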

Gurk • October 1, 2007 8:58 AM

@Terry Cloth,
Can I have the source code for the web browser you wrote to view this site?

tim • October 1, 2007 9:01 AM

Microsoft has come a long way toward writing better, more secure code and talking openly about security issues. It has been a welcome change of pace and I no longer cringe when I talk to them about an issue.

Saying that - they still can't write software that is usable. Sure they may "get it" now - but their implementation continues to suck. Prime example: vista - the very example the author uses as "raising the security bar." Note to author: People have to actually use your software first before it "raises the bar" on anything.

Larry Osterman • October 1, 2007 9:03 AM

Thomas, if you'd like an answer to your questions, try posting them on my blog, it's much more likely I'll answer them.

Of course, if you just want to take pot shots at Microsoft, that's your call.

Let's see...
"Given that most windows users run as admin, this is hilarious."

If a user runs as an admin, they don't need to tamper with your data store - they can do anything that they want with the computer.

"Wow... that sure explains a lot about Microsoft security."

If the attacker is running code in your process, they can already do anything that your code can do.

"DRM is not included in the threat model? You've just added a whole new class of attacker, the legal owner of (this copy of) the content, and you're not changing the threat model?"

That was one of the pieces of the rules of thumb that I edited out before publishing to the web (because it's not relevant to the vast majority of people). DRM has its own set of threat models with its own set of criteria; the threat modeling process I described helps to analyze security defects, not DRM breaches.

"The icing on the cake? A childish expose on the recent FireFox vulnerability. Honestly, does someone working at Microsoft really need to go anywhere else to find a web-browser vulnerability to dissect?"
Nah, I could find lots of web browser vulnerabilities. But this particular example is (IMHO) an example of how careful threat modeling can help to show problems. I could have picked on the Apple .DMG file woes just as easily (threat modeling would have shown that mounting a filesystem in the kernel that was downloaded from the internet is a prime target for fuzzing), but I thought of the firefoxurl: issue first.
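The "prime target for fuzzing" remark can be made concrete. A minimal byte-flip mutator (my illustration, not Larry's; real fuzzers are far smarter) shows the shape of the attack: copy a valid input, corrupt a few random bytes, and feed it to a parser that trusts the format - which is exactly what a kernel-mode filesystem mounter does to a downloaded image.

```python
import random

def mutate(sample, flips=8, seed=None):
    """Byte-flip mutation: copy a valid input and XOR a handful of
    random bytes, hoping a parser that trusts the format mishandles
    the result. The threat-model point is only that parsing
    internet-supplied data in the kernel invites exactly this."""
    rng = random.Random(seed)
    data = bytearray(sample)
    for _ in range(flips):
        # Flip one randomly chosen byte with a nonzero XOR mask.
        data[rng.randrange(len(data))] ^= rng.randrange(1, 256)
    return bytes(data)
```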

"Wrong. _where_ and _how_ the data is stored is frequently used to make decisions about trust. If I can 'elevate' my data to seem trustworthy, you might trust it when you shouldn't.
Can I set an ACL that implies it was never changed by untrusted users?
Can I convince you to somehow copy it to disk (security zone "c:\") from the 'net (security zone "http://evil.com")?
Can I move it to another location on disk which implies more trust?"

You're right, sort of. My point was that a data store itself cannot elevate its privilege, because privileges are associated with running code - the contents of a data store are static data, typically living on magnetic media somewhere, and not code (it takes code to read the data). But you're right: code may make trust decisions about data stores, and those trust decisions may be relevant.

Brandioch Conner • October 1, 2007 9:52 AM

@Larry Osterman

Larry, you may not agree with all of Thomas' points, but the fact is that almost all of the zombies out there are running Windows.

This is not theoretical.

"Vista raised the security bar for attackers significantly. As Vista adoption spreads, attackers will be forced to find new ways to exploit our code."

Larry, take some time and dig back through the press releases about WinXP, Win2K and even Win98. You'll see that same claim made over and over and over.

You can release patches every single day for a broken security model. That's what the anti-virus vendors do. But it will not make the model you've chosen any more secure. There will be another patch tomorrow.

You work for Microsoft. You make some good points in your articles.

But the theoretical discussion does not matter because your company's product is still the #1 zombie platform out there.

And do NOT tell me it is because it is the most widely used.

What happens AFTER an attack has succeeded?

Larry Osterman • October 1, 2007 10:25 AM

Folks: if you have questions or comments about my posts, please go to my blog at http://blogs.msdn.com/larryosterman and post them there. I don't read Bruce's comments regularly, but I DO respond to just about every blog comment made on my blog.

Brandioch: The claim is valid. Every OS release has raised the bar. Vista takes the bar to an entirely new level.

And you're missing the point about threat modeling. Threat modeling isn't about understanding breach responses, it's not about avoiding common coding mistakes.

Threat modeling is an analysis process that helps you better understand the attack surface of your component so you can understand what you need to do to ensure that your code is more secure.

Carlo Graziani • October 1, 2007 10:28 AM

Overall I would say that the modeling and the distillation of security wisdom has some value, in that it obviously reflects some serious thinking and experience with building security into software development on large systems.

However, I can't help the feeling that the maximal lessons presented here are applied only within strongly constrained limits to MS applications and within XP/Vista. I say this because the strong, spaghetti-like interdependencies that appear to exist between subsystems within the OS, and between the OS and MS applications (the ubiquity of IE springs to mind), must necessarily create a nightmare maze of "trust boundaries". Many of these have probably not even been identified yet, and many of those that have been identified have probably received only a cursory severity characterization.

Perhaps the attempt to ID those boundaries is the explanation for the much-reported (and much-deplored) security-nagging "features" in Vista, where practically any non-trivial operation results in a dialog box reminding users that the world is full of evil.

The other thing that drew my attention was part of the explanation of "Immutable Law #1":

"That's why it's important to never run, or even download, a program from an untrusted source"

The conflation of "downloading" with "running" is a very MS-centric confusion, and reveals the failure to recognize one of those "trust boundaries". In fact, this "feature" of all versions of Windows --- the automatic execution of downloaded/attached/clicked stuff --- which derives from a refusal to recognize the distinction between code and data, is an important reason that Windows machines in the hands of non-IS-expert users are such a threat to the rest of the net. It should be perfectly safe to download malware, and there are certainly reasons why one might want to. It should only be unsafe to run it.

LUAandUAC • October 1, 2007 10:32 AM

@Brandioch Conner
"..the fact is that almost all of the zombies out there are running Windows."

Probably true, and how many users on those zombie computers are running as admin?

That goes to the next point about Windows Vista raising the security bar by enforcing the user security model that was there since Windows 2000.

There is no reason users need to run with admin privileges, and Windows 2000 and Windows XP actually work just fine with users running with normal non-admin privileges.

The problem which Microsoft ran into was too many lazy programmers and/or companies writing bad programs that required users to run with admin privileges for absolutely no reason (e.g. Intuit, whose programmers apparently couldn't figure out how to write working software until this year, with their 2007 release!). These problems are not really Microsoft's fault, although they take the heat for all these lazy ISVs.

Microsoft's biggest mistake was to leave the enforcement of the user security model to these lazy third-party ISVs, but they fixed this with Windows Vista.

The unfortunate part is that ISVs have had more than 6 years to properly write their applications, and some still haven't gotten it (aka lazy), which is why there are still programs that continue to have problems with the Windows Vista UAC, and subsequently why users are seeing unnecessary UAC prompts. Users running Windows Vista with properly written and configured applications should never see those UAC prompts.

Merijn Vogel • October 1, 2007 10:38 AM

While not considering myself a zombie, and having to run Windows in many circumstances, I did like the article. It suggests that you draw threat models, and I guess the fact that threat modeling is addressed is the reason Bruce Schneier put this article up. It is good to see that Microsoft employees seem to love their job and try to take care of business.

Denis Bergeron • October 1, 2007 12:08 PM

@Gustavo Bittencourt:
If a guy does something on your computer without your knowledge or consent, he is a Bad Guy (tm).

Foolish Jordan • October 1, 2007 12:31 PM

One of your rules of thumb stands out to me: "If your code invalidates assumptions made by other entities, you need to be concerned".

For my money, this is the biggie. And the worst thing about it is that it's easy to state but seems very hard to apply in practice. What exactly are the assumptions? Who knows? Does anyone know?

Brandioch Conner • October 1, 2007 12:31 PM

@Larry Osterman
"The claim is valid. Every OS release has raised the bar. Vista takes the bar to an entirely new level."

No. You can put as many locks on your front door as you can fit. But if you don't have locks on the windows, you have not increased your security at all.

And as was mentioned above, Microsoft loves the spaghetti dependencies. We went over that at the Netscape trial when your company decided to "integrate" IE with the OS.

There will be WinVista zombies. And I'm not talking a few. They will be as prevalent as other Windows zombies (in proportion to their marketshare).

If WinVista was MORE secure than other Windows versions, we would see FEWER zombies (adjusted for marketshare).

suutar • October 1, 2007 1:32 PM

@Gustavo:

And indeed, it's not (just) your computer anymore. Nobody is the sole owner of their own computer nowadays; we just hope (and attempt as best we can to ensure) that all the other 'owners' are benign.

Anonymous • October 1, 2007 1:33 PM

> """In general, you treat code that runs in the same process as you as being fully trusted - after all, there's nothing that this code could do that would compromise the machine/process."""

> Wow... that sure explains a lot about Microsoft security.

Not everyone at MS agrees with that statement (which was taken out of context, so it might not mean what it looks like at first).

See:
http://blogs.msdn.com/oldnewthing/archive/2004/01/01/47042.aspx

Anon Indian • October 1, 2007 1:43 PM

Hi

This topic should be of interest to the crowd here.

The Element of Surprise
To help combat the terrorism threat, officials at Los Angeles International Airport are introducing a bold new idea into their arsenal: random placement of security checkpoints. Can game theory help keep us safe?
http://www.msnbc.msn.com/id/21035785/site/newsweek/page/0

Be Safe ...

Guest12345 • October 1, 2007 2:04 PM

"If WinVista was MORE secure than other Windows versions, we would see FEWER zombies (adjusted for marketshare)."

This would make sense in this context if zombies required security bugs. Yea olde Mac OSX is just as vulnerable to users who run whatever app they just pirated. So are Linux users.

The reality is that the reason Windows is the most numerous platform for zombies really is that Windows is the most numerous platform.

It has little to do with security flaws.

noteve • October 1, 2007 2:08 PM

@Larry Osterman
"My point was that a data store itself cannot elevate it's privilege, because privileges are associated with running code - the contents of a data store are static data typically living on magnetic media somewhere, and not code (it takes code to read the data)). But you're right: code may make trust decisions about datastores and those trust decisions may be relevant."

A passive data store can't elevate its privilege, but not all data stores are passive. And even if a passive store can't elevate its privilege, another active agent, what you refer to as code, can change the trustworthiness statement (access privileges and other metadata) or even the contents of the data store in a way that tricks another agent into doing something it shouldn't.

So yes, trust decisions about data stores and the trustworthiness of the data "may be relevant", but it seems to me that they're ALWAYS relevant, not just sometimes. For example, consider how many of the Rules of Thumb given in the article are about not trusting data. It's a mystery to me how anyone doing threat modeling can say trust decisions about datastores *MAY* be relevant. It seems to me they're *ALWAYS* relevant, even when the balance of engineering tradeoffs is that you accept them as trusted.

Anonymous • October 1, 2007 2:37 PM

@LUAandUAC, it is Microsoft's fault that they have chosen in many cases to make design and implementation decisions around maintaining compatibility for their lazy/sloppy/inept ISVs rather than cleaning up spaghetti logic and tightening security.

anonymous • October 1, 2007 2:51 PM

@Guest12345:

"The reality is that the reason Windows is the most numerous platform for zombies really is that Windows is the most numerous platform.

It has little to do with security flaws."

Pffft. Windows is easier to infect. If a cracker wanted to cause real, SIGNIFICANT damage, they'd target the UNIX/Linux installations that power most of the world's mission-critical data centers. And yet such breaches are few and far between.

Uncle Bob • October 1, 2007 3:02 PM

@noteve

There are no passive datastores. They all involve code.

It could be the shared library for fread(), the OS's read() syscall, the driver for block devices, the bus driver for ATA, the firmware on the drive, or anything else at any other level. It's all done with code.

Merijn Vogel • October 1, 2007 3:07 PM

@Brandioch Conner: I was aware of that, but the original sentence was a bit ambiguous, which I exploited, sorry :)

Brandioch Conner • October 1, 2007 3:31 PM

@Guest12345
"The reality is that the reason Windows is the most numerous platform for zombies really is that Windows is the most numerous platform."

No. Not "the most numerous platform". That would be correct based upon marketshare.

But the fact is that Linux is cracked LESS often than Windows. Even when the numbers are adjusted for marketshare.

As I have stated many times:
marketshare != security

And it is Microsoft's fault. At the VERY LEAST they could release tools to validate an existing installation.

It is 10,000+ times easier to tell whether a file has been released by Microsoft than to tell whether it is a cracked version or even a replacement from a cracker.

With Linux (focus on Debian/Ubuntu here) I know what files will be in what system directories. And I can validate those files as being legit.

It is not about the "attack surface" of your code. No matter how good you are, you are not the best. There needs to be a process for AFTER you've been cracked. Or even to tell whether you HAVE been cracked.

And no, anti-virus is NOT up to the task. Just do some searching on the Sony rootkit fiasco. Anti-virus is a broken security model. It looks for things that it has been told to look for.

It does NOT look for files that have been altered.
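The validation Brandioch describes - knowing which files should be in which directories and checking that each one is legit, as debsums does on Debian - can be sketched as a digest manifest. A hypothetical example (with the obvious caveat he implies: the manifest itself must come from a source the attacker can't alter, or the check is circular):

```python
import hashlib
import os

def sha256_of(path):
    """Stream a file through SHA-256 and return the hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def find_altered(manifest):
    """Given {path: known-good SHA-256}, return the paths whose
    on-disk contents are missing or no longer match."""
    return [path for path, digest in manifest.items()
            if not os.path.exists(path) or sha256_of(path) != digest]
```

This is exactly the opposite of the anti-virus model he criticizes: instead of looking for known-bad signatures, it flags anything that deviates from a known-good state.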

WhiteList • October 1, 2007 3:40 PM

@Brandioch Conner

Sounds a lot like what application security preaches: validate based on whitelisting, not blacklisting. If I haven't authorized it, don't allow it.

Thomas • October 1, 2007 4:29 PM

@Larry
"""Thomas, if you'd like an answer to your questions, try posting them on my blog, it's much more likely I'll answer them."""

Thank you for taking the time to answer my questions. I've posted a reply on your blog as you suggested (see URL, once it clears moderation).

Browser • October 1, 2007 5:28 PM

Roxanne,
They own the OS or at least the rights to the OS like Andy says. It gets worse...

2.1 Digital Rights Management. Content providers are using the digital rights management technology contained in this Software ("DRM") to protect the integrity of their content ( "Secure Content") so that their intellectual property, including copyright, in such content is not misappropriated. Portions of this Software and third party applications such as media players use DRM to play Secure Content ("DRM Software"). If the DRM Software's security has been compromised, owners of Secure Content ("Secure Content Owners") may request that Microsoft revoke the DRM Software's right to copy, display and/or play Secure Content. Revocation does not alter the DRM Software's ability to play unprotected content. A list of revoked DRM Software is sent to your computer whenever you download a license for Secure Content from the Internet. You therefore agree that Microsoft may, in conjunction with such license, also download revocation lists onto your computer on behalf of Secure Content Owners. Microsoft will not retrieve any personally identifiable information, or any other information, from your computer by downloading such revocation lists. Secure Content Owners may also require you to upgrade some of the DRM components in this Software ("DRM Upgrades") before accessing their content. When you attempt to play such content, Microsoft DRM Software will notify you that a DRM Upgrade is required and then ask for your consent before the DRM Upgrade is downloaded. Third party DRM Software may do the same. If you decline the upgrade, you will not be able to access content that requires the DRM Upgrade; however, you will still be able to access unprotected content and Secure Content that does not require the upgrade.

...because they control all the stuff stored on the PC owned by somebody else.

Then there's this: "a careful reader posts that in the latest Windows Media Player security patch, the EULA (the 'license agreement' you click on) says that you give MS the right to install digital rights management software, and the right to disable any other programs which may circumvent DRM on your computer. So if you want your machine secure, you also want Microsoft to have free rein on your PC."
http://slashdot.org/articles/02/06/29/1254230.shtml?tid=109

http://slated.org/killbillsbrowser.html

Reasonable • October 1, 2007 6:15 PM

Hmmm, I assumed the crowd here would be even less reasonable (cough) than the crowd over at /. - and, wow, I was right.

To the point:
1. This entire discussion is a red herring. Larry posted a list of security modeling techniques - the barking dogs reply with generic attacks on Microsoft. Ok already, we know you don't like Microsoft; it doesn't mean that you have to turn every discussion into a Microbash. I was hoping to see an impartial discussion of Larry's advice (the same discussion it would get if it were published by a guy at company Z). Instead, you just wasted bandwidth.

2. Windows has fewer critical patches, I think (can't recall the source offhand) than Linux, this year at least. That's not even the issue, however - many zombie computers were zombified due to user actions (download and run, or refuse to install patches). Linux users tend to be a bit more responsible - but that doesn't say much, either way, about the software platform itself. Also, market share -is- important, and more than in a linear fashion. In malware writing, you'd see a 'winner takes all' - malware developers would rather develop once, for one platform.

3. Vista took the step of constraining the user to a 'less than admin' role - like it or not. That's a show of commitment to security, and should be viewed as such. It created a toll on usability, but that's the tradeoff. Actually, security-aware people should laud that decision. If you have any ideas how you can maintain compatibility while creating less user-hassle than LUA, you should have started a startup; I'm sure Microsoft would have gobbled it up, if only for the patents. Heck, if you have a better idea now, there's still time. No? No ideas? Me neither. But that might mean that the -problem- is difficult, and therefore mindless dismissal of Microsoft's current security practices is, well, mindless.

4. Comparing zombied computers to marketshare under-represents the relative weaknesses of the platforms involved. The Air Force of Monaco took fewer combat hits (even per capita, or per plane) than the American Air Force. Does that mean they are better trained? No - you have to take the number of attacks/adversaries into account as well.

5. I have a few more points, but this post is too long as it is. So #5 would be "I think the original security advice was good advice. Good night."
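
Point 4 is an arithmetic claim, and it is easy to make concrete. With purely hypothetical numbers (installed base as a marketshare percentage, attack volume in arbitrary units - none of these figures come from the discussion above), a platform can have far more zombies in absolute terms and still have the lower compromise rate once attack volume is factored in:

```python
def compromise_rate(compromised, installed_base, attack_volume):
    # Successful compromises per unit of installed base per attack:
    # the normalization point 4 argues for. Raw zombie counts, or
    # even counts per machine, ignore how often each platform is
    # actually attacked.
    return compromised / (installed_base * attack_volume)

# Hypothetical numbers, purely to illustrate the arithmetic.
rate_big = compromise_rate(compromised=900_000, installed_base=90, attack_volume=100)
rate_small = compromise_rate(compromised=20_000, installed_base=10, attack_volume=4)
# rate_big = 100.0, rate_small = 500.0: the smaller platform, despite
# far fewer zombies, falls more often per attack it actually receives.
```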

Guest12345 • October 1, 2007 6:34 PM

@Brandioch Conner
@anonymous

I think you are both ignoring my actual point, which wasn't about the quality of Windows at all. It was about how zombies are really getting onto systems. Zombies aren't in the data center (as a rule) because systems in data centers are not end user systems. Zombies get onto the box because the users are inviting them in, not because the software is cleverly exploiting remote holes to install itself. They have a universal hole called the user.

The user downloads the latest crapware p2p client. Or downloads "MS Office 2007 CORP***CRACKED****.zip" from their favorite bit torrent site. Escalated privs or no escalated privs, OSes on mainstream machines do not prevent the user from running programs that connect to the internet.

At the end of the issue, if the user wants to run the program, no amount of code quality or security model is going to stop them.

And as regards MS checking signatures of DLLs, I know that it does for some of them. For example apply the patch to raise the limit on half open connections and you'll get a dialog warning you that a DLL is being modified and do you want to prevent the modification.

not THAT anon • October 1, 2007 7:12 PM

@LUAandUAC:
"There is no reason users need to run with admin privileges, and Windows 2000 and Windows XP actually work just fine with users running with normal non-admin privileges."

I guess you haven't used WinXP much. I can run as a limited user because I use "runas". For a non-IT user, even minor settings changes to applications that should be fine often cannot be performed by limited users. When one of my users needs to access the network remotely, I have to give admin privileges because the VPN software (and changing network settings to make it work wherever the user is) requires admin privileges.

The truth is, the model itself is broken. Deeply embedding IE to prevent the DOJ from ordering its removal only made things worse. That's why Win2K and WinXP (both promoted as "the most secure operating system ever") have had so many holes.

not THAT anon • October 1, 2007 7:22 PM

@ LUAandUAC
"The unfortunate part is that ISV's have had more than 6 years to properly write their applications, and some still haven't gotten it (aka lazy), which is why there are still programs that continue to have problems with the Windows Vista UAC, and subsequently why users are seeing unnecessary UAC prompts. Users running Windows Vista with properly written and configured applications, should never see those UAC prompts."

Using only MS applications and products, on a brand new Vista machine, I saw plenty of UAC prompts, so many that the machine is slated to become an XP machine soon.

LUAandUAC • October 1, 2007 10:50 PM

@not THAT anon
"I guess you haven't used WinXP much."

To the contrary. I use Windows XP _a lot_. I also manage IT for and consult to several SMB's, most running Windows XP networks and a couple with Windows 2000 networks. I am also the "IT guy" for a number of friends and family running Windows XP Home and Pro.

For all these WinXP and Win2k computers, networked or not, the users all run _without_ admin privileges (as limited users for stand-alone computers and as normal domain users, not in the local admin group, for business AD domains).

Many applications, including those from Microsoft, can be configured to work fine in a non-admin environment. There are a number of applications, like Microsoft Office, which have no problems at all working in a non-admin environment. I have even used RDP and VPN for users running as non-admin.

It just takes good planning on the part of the IT department to properly test and configure applications, as well as ensuring the end users are properly trained for working without admin privileges, with proper expectations. For example, end users need to know that they can't install their own applications (but what good IT dept would let users install applications anyway?). The same goes for system-level configuration changes. Unfortunately, there are simply some applications that can't be made to work. The biggest offenders I have found are printer companies (e.g. HP, Epson, etc.): while the basic printer drivers work fine as non-admin, they create crapware printer applications that needlessly require admin-level access and simply can't be easily worked around. None of these problems are with Windows XP or Windows 2000; these are all problems with poorly written applications, by lazy programmers and ISVs.

What I have found is that the only time "holes" need to be created in the WinXP or Win2k non-admin environment is to accommodate poorly written applications written by lazy programmers and/or ISVs.

For example, why would a program like Intuit's QuickBooks EVER need admin privileges to the OS registry root? It doesn't! It's an accounting program. Intuit's programmers are just plain lazy, and this bug should never have been in that application to begin with. The fact that it has taken Intuit 6 years to fix this major bug is appalling.

LUAandUAC • October 1, 2007 11:21 PM

@not THAT anon
"Using only MS applications and products, on a brand new Vista machine, I saw plenty of UAC prompts..."

A while back, I setup a Windows Vista Premium computer for a friend. They are using it for typical activities. It has Microsoft Office 2003, various third party applications (scrapbooking, photo editing, etc.), and they are using it to watch occasional SD/HDTV (via the Media Center application) from the desktop (they have a larger LCD monitor).

After the applications were all installed and properly configured, the user gets no UAC prompts under normal use.

Obviously, administrative activities do, as they should, give UAC prompts.

Additional configuration was required for a couple of third party applications (the Microsoft applications all worked fine). This configuration was due to poorly written applications (by lazy programmers/ISVs), not the fault of the Windows Vista OS.

anonymous • October 2, 2007 12:49 AM

@Reasonable

"2. Windows has fewer critical patches, I think (can't recall the source offhand) than Linux, this year at least. That's not even the issue, however - many zombie computers were zombified due to user actions (download and run, or refuse to install patches). Linux users tend to be a bit more responsible - but that doesn't say much, either way, about the software platform itself. Also, market share -is- important, and more than in a linear fashion. In malware writing, you'd see a 'winner takes all' - malware developers would rather develop once, for one platform."

Fewer critical patches have nothing to do with the security model of the system - what assurance do you have that there aren't other holes that MS doesn't talk about or hasn't gotten around to fixing?

Also, how are you so sure that users are entirely to blame for zombification? What assurance do you have that a back-door exploit wasn't to blame (which the user has little control over), or a malicious cross-site-scripting page, or a malicious ActiveX script? Poorly crafted software is to blame there. Also, even non-admin users on Windows can install some forms of software, opening up yet another class of breaches. I call these things bad OS design. Naturally, a *lot* of users did themselves in, but even careful users can be felled by bad software.

I stand by what I said before--the fact that truly mission-critical installations running UNIX haven't been destroyed by hackers (i.e. the NYSE, the CIA, a good chunk of European governments, etc.) is more of a testament to the strength of the UNIX security paradigm than its low end-user visibility. Try to write malware for UNIX sometime--you will find it's far more difficult than writing the equivalent for Windows.

AnonymOctober 2, 2007 11:01 AM

>> UNIX/Linux installations that power most of the world's mission-critical data centers

And it's funny that Windows powers many other data centers too, and they don't get attacked either. The problem now is the common desktop of the average Joe.

nksinghOctober 2, 2007 4:52 PM

Linux servers get compromised all the time too... for largely the same reason as Windows boxes (mis-configuration or user error). The only difference is that Windows has more automated exploits since the economics of exploiting it are more favorable.

If you have an exploit that will only work 3% of the time (because it requires users to do something that no savvy person would do, or it requires an odd configuration), there's no point in multiplying that by 10% to target even the most popular alternate platform.

Also, it should be noted that running as admin is pretty orthogonal to the zombification of a machine. A malicious program can insert itself into per-user startup locations and do all of its work from the luser account. I can think of no good way to prevent this attack other than requiring all executables to be signed.

Actually, that wouldn't be such a bad idea. A company could start a whitelisting service that installs a small "antivirus" agent on the client that does nothing but intercept calls to CreateProcess and LoadLibrary. It would check the images for legitimacy with an online whitelist and add its own signatures to files that are missing them. This would not be good for developers, but it should work great for end users with simple needs. It would basically be a program quality certification service. There could be some advertising potential in the business too: blocked adware could result in a pointer to a for-pay competitor that does the same job.
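The core of the whitelisting agent nksingh describes can be sketched as follows. This is a hypothetical illustration in Python, not the native CreateProcess/LoadLibrary hook an actual agent would use: hash the executable image and allow it to run only if its digest appears on an allowlist (which a real service would fetch from an online database rather than hard-code).

```python
import hashlib

# Hypothetical allowlist of SHA-256 digests of approved executables.
# A real whitelisting service would maintain this online; the digest
# below is simply that of an empty file, for demonstration.
APPROVED = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of(path):
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def may_execute(path):
    """Allow execution only if the image's digest is on the allowlist."""
    return sha256_of(path) in APPROVED
```

An agent built this way blocks anything not explicitly approved, which is exactly why it would be painful for developers (every new build needs re-certification) but workable for end users with simple, stable needs.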

Brandioch ConnerOctober 2, 2007 5:47 PM

@Guest12345
"I think you are both ignoring my actual point, which wasn't about the quality of Windows at all. It was about how zombies are really getting onto systems. Zombies aren't in the data center (as a rule) because systems in data centers are not end user systems."

Do some research on Code Red and Slammer.

Yes, they DID hit data centers. Lots of them.

You're trying to claim that because servers are not usually cracked via an IE or Outlook exploit (since IE and Outlook are far less commonly used on servers than on workstations), such exploits are not an issue.

Slammer was so successful because Microsoft made the same vulnerability available on the server AND the workstation. WITHOUT the user even being aware that there was a risk.

wmOctober 3, 2007 3:44 AM

@Reasonable: "Windows has fewer critical patches, I think (can't recall the source offhand) than Linux, this year at least."

I'm afraid that without knowing the source, this particular statement is meaningless.

The problem is that a lot of sources count "a patch for Firefox vulnerability X in Red Hat and a patch for Firefox vulnerability X in Gentoo and a patch for Firefox vulnerability X in Debian" as three Linux security patches, when actually it's just a single patch for a single vulnerability.

So the number of "Linux security patches" is often much greater than the number of Linux security vulnerabilities, whereas the number of Windows security patches doesn't get inflated in this way.

Without knowing whether the source you saw was doing this multiple-counting of Linux patches, it's impossible to draw any conclusions about the number of actual vulnerabilities.

(And this is without getting into the definition of "critical" patches -- Microsoft certainly tend to count a patch as critical only if it results in remote code execution without user involvement -- e.g. vulnerabilities to visiting malicious websites don't count -- but I suspect that at least some Linux distributions don't take such a narrow view of critical vulnerabilities.)
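wm's counting point can be made concrete with a toy sketch (the advisory data here is hypothetical, purely for illustration): counting one advisory per distribution inflates the "Linux patch" total, while deduplicating by vulnerability ID gives the number of actual flaws.

```python
# Hypothetical advisories: (distribution, vulnerability ID) pairs.
# A single Firefox flaw generates one advisory per distribution.
advisories = [
    ("Red Hat", "CVE-2007-0001"),
    ("Gentoo",  "CVE-2007-0001"),
    ("Debian",  "CVE-2007-0001"),
    ("Red Hat", "CVE-2007-0002"),
]

# Naive count: every advisory is a "Linux security patch".
naive_patch_count = len(advisories)

# Honest count: distinct vulnerabilities, regardless of distribution.
distinct_vulns = len({cve for _, cve in advisories})

print(naive_patch_count, distinct_vulns)  # 4 2
```

Any Windows-versus-Linux patch comparison that doesn't perform this deduplication is comparing advisories to vulnerabilities, not vulnerabilities to vulnerabilities.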

cdmillerOctober 3, 2007 4:29 PM

@nksingh
"The only difference is that Windows has more automated exploits since the economics of exploiting it are more favorable."

Counterexample: the Apache web server.

Terry ClothDecember 12, 2007 2:35 PM

[Sorry about the late reply. In case anyone's still listening...]

@Gustavo Bittencourt
@Gurk

You are, of course, simply underlining my point. Anyone whose code I run can own my system. I have to choose whom to trust.

Thus my point:

Good guy, owner: OS programmer, &c.
Good guy, non-owner: Anyone whose code I don't run.

Bad guy, owner: oops
Bad guy, non-owner: I didn't run her stuff.

{good, bad}{owner, not}, choose any combination, they're all valid.


Photo of Bruce Schneier by Per Ervland.

Schneier on Security is a personal website. Opinions expressed are not necessarily those of Co3 Systems, Inc.