Schneier on Security
A blog covering security and security technology.
October 19, 2009
Six Years of Patch Tuesdays
Nice article summing up six years of Microsoft Patch Tuesdays:
The total number of flaws disclosed and patched by the software maker so far this year stands at around 160, more than the 155 or so that Microsoft reported for all of 2008. The number of flaws reported in Microsoft products over the last two years is more than double the number of flaws disclosed in 2004 and 2005, the first two full years of Patch Tuesdays.
The last time Microsoft did not release any patches on a Patch Tuesday was March 2007, more than 30 months ago. In the past six years, Microsoft had just four patch-free months -- two of which were in 2005. In contrast, the company has issued patches for 10 or more vulnerabilities on more than 20 occasions and patches for 20 or more flaws in a single month on about 10 occasions, including yesterday.
I wrote about the "patch treadmill," pointing out that there are simply too many patches and that it's impossible to keep up:
Security professionals are quick to blame system administrators who don't install every patch. "They should have updated their systems; it's their own fault when they get hacked." This is beginning to feel a lot like blaming the victim. "He should have known not to walk down that deserted street; it's his own fault he was mugged." "She should never have dressed that provocatively; it's her own fault she was attacked." Perhaps such precautions should have been taken, but the real blame lies elsewhere.
Those who manage computer networks are people too, and people don't always do the smartest thing. They know they're supposed to install all patches. But sometimes they can't take critical systems off-line. Sometimes they don't have the staffing available to patch every system on their network. Sometimes applying a patch breaks something else on their network. I think it's time the industry realized that expecting the patch process to improve network security just doesn't work.
Patching is essentially an impossible problem. A patch needs to be incredibly well-tested. It has to work, without tweaking, on every configuration of the software out there. And for security reasons, it needs to be pushed out to users within days -- hours, if possible. These two requirements are mutually contradictory: you can't have a piece of software that is both well-tested and quickly written.
Before October 2003, Microsoft's patching was a mess. Patches weren't well-tested. They broke systems so frequently that many sysadmins wouldn't install them without extensive testing. There were jokes that a Microsoft patch was indistinguishable from a DoS attack.
In 2003, Microsoft went to a once-a-month patching cycle, and I think it's been a resounding success. Microsoft's patches are much better tested. They're much less likely to break other things. And, as a result, many more people have turned on automatic update, meaning that many more people have their patches up to date. The downside is that the window of exposure -- the time period between a vulnerability's release and the availability of a patch -- is longer. Patch Tuesdays might be the best we can do, but the whole patching system is fundamentally broken. This is what I wrote last year:
The real lesson is that the patch treadmill doesn't work, and it hasn't for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won't prevent every vulnerability, but it's much more secure -- and cheaper -- than the patch treadmill we're all on now.
Posted on October 19, 2009 at 3:38 PM
• 60 Comments
'course, if you didn't have to reboot the sodding OS for EVERY SINGLE minor patch, life'd be a lot easier...
I use HFSLIP and nLite to keep a bootable install disk of XP Pro with Service Pack 3 and subsequent patches slipstreamed in. nLite also allows me to skip installing XP features that annoy me.
I got used to re-installing the OS every year or so, and everything runs fast. The patch problems people have with things breaking... well, I don't see that on 4 machines.
Yeah, it's a pain doing reinstalls. But I've learnt where the config files are for things, so the full sequence of install OS, apps, and getting things as-before takes 3 hours.
The conclusion in the last paragraph reduces everything to "we need".
Unfortunately, it offers next to nothing for solutions.
It may as well say "we need" humans to start writing bug free code right now.
I have to admit that MS has gotten better at this patching business. I don't think I've had an official patch from Windows Update break a machine since 2006. I still don't let WU install updates without my approval, however.
The need to reboot the OS after patching (mentioned above) stems primarily from the idiotic semantics of file access on Windows. Unlike Unix, you cannot replace a file while it is open on Windows, and in the case of executables, the file is open whenever it is part of a running process. Nor is Windows smart enough to shut down a service or system process that has some file locked, then install updates, and then restart the program. So any time a patch involves replacing a system DLL, the new file has to be placed in a temporary location until the next reboot, at which time Windows moves it into place.
Windows has been in dire need of an OS X-style architecture reboot for many years, but I don't expect MS to do anything more than repaint the gargoyles anytime soon.
My take on this is that, since we can't expect bug-free code and (reasonable) patching leaves a wide window of opportunity for attackers, organizations must adopt security solutions that are external to their applications, allowing fast mitigation through security policy updates until a patch is deployed in due time.
You're presuming that *any* patching is a reasonable approach.
I will concede that making programs that have been mathematically proven secure is beyond the means and resources of most software development companies. That said, there's a heck of a lot we could do to improve things.
I've heard quite a few people say that Windows XP Service Pack 2 should really have been named 'Release Candidate 1'. If more people - more sysadmins, especially - took that approach, Microsoft would be quite motivated to release well-tested software. As it is, people have been conditioned that 'all software needs patches'.
Now, on the other hand, take the console industry - Playstations, N64, etc. Up until a few years ago, they never patched anything. They couldn't, without internet access. Every game that they released for those had to be *perfect*.
They were also coding for a known hardware platform, so... :) Maybe we should be blaming hardware manufacturers for non-standardized components.
There are many places we can point fingers, but ultimately I think it comes down to the fact that, in the trade-off, people want computers to make life easier, not more secure. They'll complain -- loudly -- but if they really wanted more secure software, they'd get more secure software. It is out there; people just aren't using it. Nobody's dropping in LiveCDs to access their personal banking information -- it's too inconvenient.
savanik: "Now, on the other hand, take the console industry - Playstations, N64, etc. Up until a few years ago, they never patched anything. They couldn't, without internet access. Every game that they released for those had to be *perfect*."
No, it's just that since they didn't have internet access, nobody was trying to hack into them remotely. Also, people didn't use those game consoles to store or access information that criminals would be interested in. If you think game software is bug-free, pardon my snickering.
@Slightly miffed: And to think that on Linux with Ksplice you don't even have to reboot system for *kernel* patches.
@craig: "Unlike Unix, you cannot replace a file while it is open on Windows, and in the case of executables, the file is open whenever it is part of a running process."
Does this hold for every file and every user? I don't have Windows any more, so I can't test it myself, but I seem to remember that I could open and change a file in one editor while modifying it in another.
People should stop using C/C++ for applications; that wouldn't solve all problems, but a lot of them.
@Stefan: I'm only talking about how executable files are used by the system, not how documents are used by apps. Most apps load a document by opening the file, reading its contents, and closing it, so the file is available to be accessed, replaced, or deleted by other processes while the app still has a copy of it in memory.
I know that this is a security blog, but don't forget that once in a while patches are there to fix functionality bugs rather than security bugs. Sometimes those are just as urgent.
While we need to develop our operating systems with security in mind from the initial designs, we also need to develop them with functional changes in mind. We need to continue to find ways to separate application services from the operating system kernel. We can then concentrate on patching applications rather than the operating environment on which they run, making life easier for everyone.
Of course, that'll make all application developers need to be as good at patching as Microsoft, Apple, and the Linux developers. Is that a bad thing? Come to think of it, the iPhone / iTunes App Store combination already has that model.
@Stefan: This old saw? You can pull out the "stop using C/C++" thing when your JVM/interpreted-language-of-choice is itself not written in C. Oh, and neither are any of the libraries it uses (so no JNI, no GDI/GDI32/Xlib/GTK/Tk/Qt...)
Truth is, we *need* our system libraries to be written in C/C++. And, unfortunately, that's where half the bugs are. (zlib, anyone - but would you put up with a zlib that ran at Java-speed?)
I wish someone would work out how to make the most central parts of the system not also be the ones which need the most optimisation, but I might be wishing for the impossible...
Planes don't crash because the software they deploy is well designed. The technology to create good software is on the table, the will and financial incentive to use it is not.
@Stefan W: a typical editor doesn't keep the file open (as in, keep an operating system file handle) once it has read and displayed the contents. It only momentarily has the disk file 'open' when it loads or saves it - then it gets out of the way, so the other editor is able to open it.
You usually only run into these problems with running executables and system files, or always-open data files (e.g. the MS Outlook data file) which are kept open for random access or deliberately to lock them.
@Anton: "Planes don't crash because the software they deploy is well designed. The technology to create good software is on the table, the will and financial incentive to use it is not."
A friend of mine worked on the F-16 avionics software. Her team averaged 13 lines of 68000 assembly language per person per day. Most of their time was spent in code reviews, and the Pentagon was willing to pony up the huge cost to save pilots' lives and improve the rate of enemy kills.
You're right. There is no will to create commercial software of that quality. Nobody is willing to pay a million dollars for a word processor.
Bob, you're missing the economics of a mass market. People pay much, much more than a million dollars for word processors - probably billions. The problem is that it goes into "features" rather than correctness, because that is what people are more willing to pay for.
(sorry for my english)
What is the problem? "In house", since XP SP2 I am no longer a server; I am a client.
Look for issues in a fully (auto-)patched XP SP3 running as a restricted user. Please tell me your exploits for hacking me.
Forget Vista and look at Win7. Please hack me once I understand UAC, or hack me (behind my router with firewall) when I am also a directly restricted user.
Please tell me the day and hour and I will post my IP.
For anything other than clients, you have no chance with your glorious security engineers. You can reach assurance when you build in five "walls" and HOPE the bad guy has no more desire after the fourth wall. THAT'S ALL.
When you understand security, you never again have assurance. That begins with the design of TCP/IP and ends with ROMMON/heap exploits on $300,000 Cisco iron.
The security bashing of Microsoft is stupid. Equally stupid is bashing both CASTs, among the extremely few algorithms without a published weakness analysis.
Lol. Second, it saves the pilot, and third, it improves enemy kills. First, it saves the F-16.
"People pay much, much more than a million dollars for word processors - probably billions."
Then you need to add the cost of the "blue screen of death", "Your program has stopped responding", etc. etc. (and so on, ad infinitum).
Then the cost of lost productivity from the "features", and the cost of the "training to use the features".
Then the cost of repairing broken files that "auto save" has blown out of the water...
And that's before you get to the idiotic system code issues...
And some people wonder why I still use WordStar 4 on a free version of DOS on a hard-disk-less "luggable" I bought in the '80s...
"You can reach assurance, when you build-in 5 "walls" and HOPE, bad gay have after 4 wall no more desire. THATS ALL."
I believe that's the approach the Pentagon takes.
Linux distributions typically do not have “Patch Tuesdays”, they tend to release patches very quickly from when vulnerabilities are discovered. Yet it’s rare to hear of fixes introducing new bugs, unexpected interactions between patches, that kind of thing.
Is this because Linux systems are simply better designed? With less opaque interactions between components, better management of component interdependencies, and just plain more eyes keeping all bugs shallow? Thereby making them easier to keep up-to-date?
Security has to be designed in, but more than that, there has to be a well-designed software architecture. People joked about the Software Architecture Architecture at DEC, but it worked.
Does any of this matter?
If security is important to you, you shouldn't be using Microsoft Windows.
A quick way to get your PC up to patch state after a fresh install is to use the Offline Update tool from http://www.h-online.com/. (It's free.)
It runs a script that gets all the patches from release up to today (or from the last time you ran the script) and creates an ISO disk image.
When you re-install, you just need to stick the update disk in your machine, check a couple of options (like automatically reboot and proceed), and you're up to date in no time.
(I have no affiliation with Offline Update or any other software vendor -- I just think it's a good tool for installing all the years' worth of Patch Tuesdays after a fresh install.)
It is impossible to "mathematically prove" the correctness or security of a program -- you can only prove that the program behaves according to a specification. In the case of a modern word processor, that specification would be a multi-thousand-page monster full of internal inconsistencies, historic baggage, and whatnot (ECMA-376 can be seen as a "spec" for MS Office).
While trivial errors like buffer overflows can be fixed by proof systems (or by simply moving to a modern language), security-relevant errors are often errors in the specification, and you won't catch them as easily - macro viruses were features, not bugs!
Regarding the comparison to avionics systems: Such a system is vastly simpler in some respects, since it will be running on well-defined isolated hardware, on only a few hundred installations, and only used by professionals that know what they do.
Well, MS has done really well in this regard. I especially admire Windows Server Update Services (WSUS); it helps a lot.
"If security is important to you, you shouldn't be using Microsoft Windows."
Although I would agree with you if security were the only concern, in a modern business it is not.
Security is only one of very, very many business drivers. Two you will see cited way, way above it are "efficiency" and "productivity" in the workforce carrying out primary business activities.
This means in 99.99% (or more) of office worker cases MS OSs and applications, and for the more "creative" marketing types Apple OSs and apps, as well as some from the likes of Adobe. Worse, many of these workers (supposedly) have legitimate reasons to connect to the Internet (which I find doubtful in many cases, but then I don't run those businesses, so my opinion on that aspect is moot).
The upshot is that "training" and "familiarisation" are considered dirty words in many organisations, as senior management regard the time and resources these require as "lost efficiency" which lowers that all-important "shareholder" metric, "productivity".
So in nearly 100% of businesses with an Internet connection you will find an MS Windows box available, with quite a high chance that it is vulnerable to known attack vectors. The fact that this puts all the other systems it has access to at risk as well is neither here nor there as far as the "business" types are concerned. Your job is to do as you're told and get on with it.
This is because these same senior managers know who they answer to (the shareholders) and that their expected life in any organisation is often less than 18 months. So the risk of a security breach (which would probably affect their competitors as well) compared to the risk of lower productivity on their watch is a complete no-brainer.
And in most cases you would not be doing yourself any promotion or job-longevity favours by "officially" drawing to their attention that what they are doing is putting the business at risk...
And don't think the likes of senior Adobe / Apple / MS execs are unaware of this. They likewise see "proper security" at all levels of development as counterproductive to getting "shiny new toys" to market.
To use an analogy, we all know we should not hide under trees or stand on hilltops holding metal umbrellas in a thunderstorm. Likewise, we all know we should not "jaywalk" but use proper crossing points.
But very nearly all of us do it at some point or another, and when it inevitably happens to somebody, we put it down to bad luck or their stupidity, depending on our relationship (if any) to them.
It is an ingrained risk mentality, and security has a very hard "upstream" swim against it in the likes of walnut row.
Worse, this risk viewpoint is still there even after the events of the past few years and the regulatory and legal requirements. The attitude is "lip service in public" and "business as usual" in private.
We are kidding ourselves about the priority of security until the risks become very real and significant to the short-life chancers in walnut row and their taskmasters, the shareholders.
Even then they will still treat it as a "hot potato" game and pass it on to some chancer further down the command chain who thinks they are on the "up-and-up".
@Clive Robinson at October 20, 2009 8:53 AM: "Although I would agree with you if security were the only concern, in a modern business it is not."
I think you make a good point. MS Windows may be the most vulnerable, but it is also the most used. Particularly in a place with high turnover or a lot of staff, the learning curve involved in a non-Windows environment could be significant.
Since we can't count on MS to better develop their software to reduce the patches in the long haul, best we can do is harden our environment to reduce the risk a vulnerability will be exploited and reduce the damage that can occur. That's easier said than done, I know, but we can't depend on patches alone. Unfortunately, too many decision makers don't know what 0day means.
Without wanting to be nasty, security experts and gurus are a large part of the problem.
For instance, you say,
"We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design."
But you do not put it in terms that senior managers will understand.
It is directly analogous to "quality": all the processes that "quality" applies to, "security" applies to as well, in exactly the same way.
Senior execs and shareholders don't give two hoots about real security because they cannot see the justification for the expense.
However, most modern execs know that "quality" is important; they don't know why it produces the returns it does, but they know it does.
If you look at the likes of ISO 9000 and BS 7799 you will be struck by the similarity of the approach.
However, they both fail in the same way: they are "auditing" frameworks which tell you the "where", not the "how", of where you should focus your attention.
The real value comes when all people involved with the process "buy in" from the very top of the organisation downwards.
Quality is taught as part of most business qualifications; security is not.
It is time for security experts and gurus to "walk the walk and talk the talk" and get involved with business properly.
That is, learn the language "the man who cuts the cheques" speaks, and talk to him in his language.
And, importantly, put pressure on those teaching business that security is as important to the modern-day business as quality is.
@savanik: Many businesses treat Windows versions like that, not adopting an OS until it's been out a long time and service packs applied. Businesses tend to be well behind the leading edge, and often skip entire versions.
Similarly, we found Visual Studio 2008 to be mostly unusable as shipped, more like an open beta than a release. We adopted it after SP1, and it's been doing well since.
Microsoft makes the bulk of their profits off sales to business. Perhaps the idea is to get software out before it's ready, so the name will have been floating around a long time before any reasonable business would touch it, sort of giving it an established feeling.
@Stefan: There is no such language as C/C++, and I don't trust anybody who uses that particular phrase to know much about either. Modern C++, for example, can be easily written to avoid all of the standard C memory issues. Of course, when doing so it's not really suitable for OS-level programming. It does just fine for applications. It has the virtue of having a consistent method for managing all resources, not just garbage collection for memory and trusting the programmer for everything else.
The most complicated and vital software is the heart of the OS, which cannot be written in a safer language. You could start with a safer language, but when you'd added everything to it that you'd need (to handle raw memory and raw bits on disks, for example), it'd be no safer than C.
There are better languages than C for most purposes, and it's hard to think of something for which C++ would be the ideal fit (although it might be the best available language). We would likely be better off using something like Lisp for most purposes. However, programming is hard, and there's no magic way to make it easy. Language bigotry will accomplish nothing useful.
Real applications are not written like that. I can only laugh at the claim that C++ has "a consistent method for managing all resources". Spoken like a true language wonk. :) RAII is some nice syntactic sugar, nothing more. Only a crazy person would try to use C++ exception handling in code that was supposed to be secure. Exception handling is not really practical in languages without garbage collection.
What most people do in real C/C++ applications (yes, that label you disdain) is write their code and manage their resources a lot like they would in a C app, except with a bunch of templates and classes and inline accessors and RAII thrown in to make it more convenient. If C++ wasn't such a shoddy language, weighed down by tons of legacy features and backward compatibility with C, this wouldn't be so painful. You might want to look at Walter Bright's D for an example of a C++-like language for writing large applications, done right.
Anyway, C++ for its own sake sucks. The most useful thing about C++ is that it can be used like "C89 but with some convenient extra features". C++ is an almost-dead language; anyone who cares about programmer productivity abandoned it over the last 10 years, and the only ones still using it are those tied to it by their target platforms (e.g. embedded programmers or game programmers). This is only my opinion, but anyone who disagrees with me is clearly wrong!
wow.. edit and multi-post for the win! Maybe Moderator can blow away some of those. @David: @Stefan: @Everybody!
Anyway its flamewar territory and irrelevant to the main post. Sorry Bruce, feel free to delete them all.
@nzruss. Are you really suggesting people trust a man in the middle to update their systems? I think you may be on the wrong website ;)
@Bruce. OK, Microsoft's patching sucks; their systems aren't designed for security from the ground up. Is there another operating system that does it better that you would like to comment on (Red Hat, SUSE, OpenBSD, Solaris)?
It's a good thing that Oracle doesn't need patches. You can't break it and you can't break in.
"best we can do is harden our environment to reduce the risk a vulnerability will be exploited and reduce the damage that can occur."
It's not just hardening the environment.
One issue I'm sure you have seen is people's unwarranted access after they have been promoted or moved to another department.
For instance, a payroll admin is given access to the payroll system, then gets moved over to customer accounts. However, their access to payroll does not get revoked...
One of the reasons for this is fear that the person being moved will think they are not trusted or are being punished...
Another is that they "might" need to help out their successor, or help out when someone is off sick...
These are vague "touchy-feely" aspects of the business (Peter Checkland kind of covered this in the '80s with "Soft Systems Analysis") that can, in quite a short time, make complete nonsense of a security policy.
Role-based access / accounting was supposed to fix this sort of issue, but... the problem persists, with "roles" being "overloaded".
And in some circumstances it allows illegal behaviour to happen and be covered up, i.e. back office and trading desk access allowing trading losses to be hidden (i.e. fraud).
So strong system auditing needs to be in place as a minimum; however, you then hit the "inflexibility" argument from managers.
An auditor or security bod can quickly get the reputation of being "a right royal pain..." and be called a "business inhibitor", thus having their position significantly undermined if senior management don't support them correctly.
I think Bruce blogged about this several months ago when he talked about security and incentives within an organisation.
@Clive: "It's not just hardening the environment. One issue I'm sure you have seen is peoples unwarented access after they have been promoted or moved to another department.
I would agree. I do consider that "hardening", we probably use the word differently. My mistake, hardening is usually not used to describe data level access much.
I've seen people transfer and keep old rights, often defended as being able to use them as backup. I've also seen this with incompatible systems too.
Another thing I have seen is that lower-level people, who are not screened as thoroughly, sometimes accumulate more power than they should have, by two simple rules:
1. "Manure flows downhill."
2. "Bad news is heavy," or "bad news gets camouflaged."
By manure flowing downhill (1), what you find is that work often gets passed down to the lowest level willing to accept it. Then their rights expand based on the needs; often the jobs should be segregated, or it is too much information for a lower-level person.
By bad news being heavy (2), I mean no one wants to go to a manager and tell him that he needs to be the one doing a subordinate's job during an absence or layoff. He doesn't want to; it is beneath him. So it remains the lower person's job. Also, by it being camouflaged: the higher up this goes, the less truthful and the more accommodating people are.
Over time, this gets messy.
HJohn: "bad news gets camoflauged"
Reminds me of the old story:
1. Staff tests product or service, tells supervisor "it is a crock of crap."
2. Supervisor tells division head "it comes in a can and it smells bad."
3. The division head tells the office manager "it has a distinct odor."
4. The office manager tells the vice president "it has a unique fragrance."
5. The vice president tells the CEO "smells good, let's do it."
Mostly humorous, but it does illustrate how poorly information gets passed along. What's more, upper management tends to care more about how fast new products come out of the units and how much ground is covered, and they probably don't know (or don't want to know) about the segregation-of-duties problems or poorly tested products that are an incident waiting to happen.
Ellison : It's a good thing that Oracle doesn't need patches. You can't break it and you can't break in.
Didn't some "Larry" make a similar comment and then have to eat it?
Ah, the joys of marketing speak; techies just don't understand it's all smoke and mirrors };)>
@Hey Nony Mouse: "Didn't some "Larry" make a similar comment and then have to eat it?"
I remember some spokesperson stating that their product "couldn't be hacked." Honestly, it had pretty good layered security, and probably wasn't something someone would be interested in taking the time to hack. However, by saying "it couldn't be hacked," they just put a bullseye on themselves.
No matter how good a base product is, how secure it is still has a lot to do with how it is used.
"Now, on the other hand, take the console industry - Playstations, N64, etc. Up until a few years ago, they never patched anything. They couldn't, without internet access. Every game that they released for those had to be *perfect*."
Not to refute your point - there definitely has been a downward trend in the quality of games at release time, with much greater expectation to "patch it in the field" - but I've seen plenty of video game crashes over the years, going back to the original arcade machines. There just wasn't usually anything interesting you could do with them.
I think this started to change with the Rainbow Six saved-game exploit that could be used to overcome the protections on an original XBox to install Linux or any other OS. There's an example that didn't even require network access to the console (although it did involve downloading a doctored saved-game file and getting it onto the XBox by way of a memory card).
You want an operating system that was designed with security in mind? Then look at z/OS for the IBM mainframe. When an operating system is originally designed for business, and not the consumer, you will find that security, reliability, integrity, etc. are already built in.
Look beyond Windows to real business operating systems and you might be surprised.
To build on Mark R's point.. modern console games are far more complicated than the earlier-generation games. They have a lot more code in them, and therefore a lot more bugs. We're talking about games with hundreds of thousands of art assets in them, and millions of lines of C/C++ code. There are tens of thousands of bugs found and fixed in a major AAA game before it ships. Inevitably some of them slip through. Even with hundreds of testers, there is simply no way to find all of the bugs -- not as many as the millions of gamers are going to find once they buy retail copies.
bruce is right when he says "There were jokes that a Microsoft patch was indistinguishable from a DoS attack."
i used to tell this joke at swanky cocktail parties in Manhattan, and i got laid like a bandit.
i'm sure there are other reasons to spend hours discussing the merits of various versions of Linux vs. Windows operating systems, other than that chicks totally dig it, but i can't think of any right now.
@el chubbo: "i used to tell this joke at swanky cocktail parties in Manhattan, and i got laid like a bandit."
Probably works better than talking about 3 1/2 inch floppies.
@Tom: Thing is, modern Windows was designed for businesses. The line of development from 3.11 to 95 to 98 to Me was based on the idea of a single user, and the later versions of this line were intended for individual use.
However, NT was intended for business use, and it begat 2000 and XP. I'm not sure how much was rewritten for Vista/7, but there was still a business focus.
Both IBM and Microsoft designed their OSs for business, so that's not the difference between them. I won't comment on who did a better job.
It's quite easy to reduce this steady stream of patches. The methodologies to do it already exist. All of them have a consistent focus on security and/or quality throughout the lifecycle. Problematic language features or libraries are avoided. They also typically use a component or layered approach and build new functionality on high quality building blocks. The DO-178B software projects and methodologies like Cleanroom are examples with a proven track record.
Personally, I like to look at results instead of promises. So, where can we find highly secure or reliable programs, to analyze their development process? The VAX Security Kernel, OpenBSD, and Flask give some examples for modern operating systems. High security kernels or techniques can be found in Integrity-178B and seL4/L4.verified. For software development, look at Microsoft's SDL and SLAM, SPARK Ada and the Tokeneer project, and new tools like Perfect Developer. Galois claims their Haskell-based specification, refinement, and other techniques for high assurance development make high integrity systems cheaper. In other words, there are many approaches that have produced excellent results. The lack of security was decided by both the market and the legal system, which doesn't hold developers accountable for defective products.
"We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design."
Arguably MS is more thorough about this than any other major commercial vendor. Their SDL is by no means perfect, but its intent is entirely in the sentiment above. Moreover, MS is transparent about their Secure Development Lifecycle - the process that they perform is well publicized with folks like Michael Howard going so far as to post in depth analysis of major security bugs, why the SDL didn't find them, and what lessons to learn from it (his writeup on the SMB 2 bug does do a great job explaining how there are some classes of bugs that just won't be caught by the process).
"It's a good thing that Oracle doesn't need patches. You can't break it and you can't break in."
Dude, did you miss the mega-patches that Oracle rolls out? Of the major DB vendors, MS SQL Server fares by far the best when it comes to security flaws, having had only a handful in the life of the entire product and even fewer in any core component. SQL Server is arguably the poster child for why the MS SDL is a good idea, as the comparison of pre- to post-SDL product security, vulnerability discovery, and patching is night and day. David Litchfield, a name that carries enormous weight in the field of security analysis of DB technologies, has flat out said that it is not worth his time to look for vulnerabilities in SQL Server because they are too difficult to find. Meanwhile, he just released another book on Oracle hacking, since Oracle is such a prolific source of them.
I think the sarcasm tag was missing from the original "can't break" quote.
What's David Litchfield's new book?
"Exception handling is not really practical in languages without garbage collection. What most people do in real C/C++ applications (yes, that label you disdain) is write their code and manage their resources a lot like they would in a C app"
It is interesting to note some of the differences between "application developers" and the likes of embedded / Real Time / OS developers.
There are five areas outside the normal program logic to consider:
1. Input data validation.
2. Error handling.
3. Exception handling.
4. Data buffering.
5. Memory allocation and reuse.
Without going into details, you can fairly quickly tell what experience a programmer has had by the way they deal with these areas -- and, importantly, where they put them, if at all...
As a simple example, "code cutters" tend to treat input validation and error checking as issues separate from the program logic. When you study the data flows in such programs, you tend to find the validation pushed up at the left-hand side and the program logic on the right-hand side. And usually there is little or no input validation or error checking on anything except "front end input".
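The structural difference described above can be sketched in a few lines. This is a hypothetical illustration (in Python for brevity, though the point applies equally to C/C++); all function names are made up for the example:

```python
# "Code cutter" style: validation lives only at the front end,
# so inner routines silently trust whatever their callers pass in.

def parse_port_front_end_only(text):
    # Front-end check: is it numeric at all?
    if not text.strip().isdigit():
        raise ValueError("port must be numeric")
    return store_port_trusting(int(text))

def store_port_trusting(port):
    # Inner logic trusts the caller -- a value like 99999 sails through.
    return {"port": port}

# Defensive style: every layer re-validates the data it depends on,
# so a bad value is caught no matter which path it arrives by.

def store_port_checked(port):
    if not isinstance(port, int) or not (1 <= port <= 65535):
        raise ValueError("port out of range: %r" % (port,))
    return {"port": port}

def parse_port_defensive(text):
    if not text.strip().isdigit():
        raise ValueError("port must be numeric")
    return store_port_checked(int(text))
```

The front-end-only version happily stores "99999" as a port, because the outer check only asks whether the input is numeric; the defensive version rejects it at the layer that actually knows what a valid port is.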
It's even worse than you describe (long time systems programmer for windows, finally out of that game).
How 'bout the registry, which just about everything needs to make changes to? It has the same problem set -- can't change the underlying file without a reboot.
That replace system dll thing on reboot is a fine attack vector itself, isn't it. Anyone using that one yet?
Glad to be on linux for my whole network now -- what little windoze we have to run (because some lame mass spectrometer manuf used .net) runs in virtual box...and not often.
The neat thing about going in for automatic patch updates is that you have decided to trade known problems for unknown problems and a possibly false sense of security. Given the regularity of patches, you can probably relax the "possibly" in that last sentence.
@DC: "How 'bout the registry, which just about everything needs to make changes to? It has the same problem set -- can't change the underlying file without a reboot."
I'm not quite sure what you mean. You believe Windows needs to be rebooted before changes can be made to the registry? Um, there are dozens, hundreds, thousands of changes made to the registry in various locations within a given session -- as you said, nearly every program requires registry access. Very, very few of those changes require a reboot. (Registering COM components doesn't, for example; changing file types doesn't, as another example; even installing a new driver doesn't necessarily require a reboot, though that touches the registry and all kinds of other stuff. Unregistering a COM component that's in use might conceivably cause problems, though I've never tried it.)
The registry is not really exposed as a file, but as a database (good thing, because it's actually a collection of files, not just one). As such, changes are visible immediately, and there is some effort put into granular locking, to such an extent that I have hardly ever heard of registry key access contentions, as you find with files.
Because it's not exposed as a file to anything except the registry APIs, I can't imagine any need to "change the underlying file" by means of a patch -- or for any other reason, in fact, except perhaps crude restores from backup.
Maybe I'm completely misunderstanding your comment, and if so, feel free to correct me.
From an IT operations perspective, reboot or no reboot differs significantly. "Reboot" means service down time, so it's much more sensitive for business users and system admins. Many MS patches nowadays require a reboot -- i.e., down time -- before they take effect. Sure, if it's a critical application it should have an HA structure, and HA helps avoid down time. But there is always an argument over patching between the security team and the server admin team. Everlasting. :(
It would be curious to see the comparison with various Linux distributions. I've had Ubuntu patches break a computer just as I've had Windows patches do. I've had it "up to here" with the frequency and volume of Ubuntu patches just as I've had it with the frequency and volume of Windows patches. As Bruce points out, it is the patch mechanism itself that is wrong... operating system specifics don't really matter.
As for the C/C++ discussion: a great many large projects use third-party libraries. At this point in time, many third-party libraries (particularly open source) are written in C... and some have C++ encapsulation. So the term C/C++ is highly relevant and very accurate, despite all good intentions to the contrary.
I've got on average five machines running XP at any given time. Each and every one of them has been buggered at least once by a patch from MS, invariably requiring a full Repair Install and the attendant nonsense.
Turned off Updates completely on all of them.
Now I'll be the first to admit that these are older systems but the point I'm trying to make isn't that they are bad or useless but that they are working machines that are ruined by Updates and have to have significant work done on them to make them viable again.
Have an old HP laptop n5495 running XP/SP2. It's fine. Put SP3 in and it will crash permanently requiring a full Repair Install. Turn the damned things off IMO.
when you connect with the internet you are connected to a world-wide network. and the world-wide network is connected to you. this is why authentication is important: we all need to be certain who we are talking to. whether we are downloading software or updating bank transactions or just sending e/mail.
there are some folks who just can't tolerate the idea of security. these are the folks who have to jail-break their iPhones
let them modify their iPhones and their computers and their cars too if that's what they feel they need to do
but for those of us who prefer secured systems -- we should have the option to select that method of operation as well.
let's look at making a user choice out of this security question.
that way the open-system crowd can do their thing and those of us who prefer a secure environment can have that
the one thing that we ought not tolerate is some other guy telling us we can't have a secure system because he wants to run his stuff on our machines
I'm no longer so worried about the MS updates. The lurking danger is stuff like Adobe's Flash Player and Adobe Acrobat Reader. These are full of very serious security problems, infrequently patched and it's very difficult to roll out new versions.
That last paragraph about Security Engineers is important. The current thinking about Security Engineering (SE) is that an SE checks things after design. The SE needs to be involved in the design and development of the system. That means the SE must have a background in systems design and software development too. Without that plus security, the SE is doing an incomplete job!
@all: For everybody reading this, Markus here is the only one who knows what he is talking about, and what Bruce Schneier says is just silly:
- security doesn't bring money the way features do. The security needed is the security that doesn't cost more than its absence.
- it's impossible to prove complex software correct in a timely, satisfying manner
- only very simple software can be proved, and in a very limited way
- all this was proved (!!) a long time ago by Gödel, Rice, etc.
- the only solution is to hire the best programmers available, and to give them the best salaries in the company. The company that understands this will have the best programs and the most secure ones.
Schneier.com is a personal website. Opinions expressed are not necessarily those of BT.