Entries Tagged "Microsoft"


Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure, and Ballmer’s blaming of security, is a bit of “be careful what you wish for.” Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and Windows 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure for its 2002 launch. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually in unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box was less palatable to users.

Security obstacles weren’t the only ills Vista suffered, of course. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features all made Vista thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed the operating system.

There was also the problem of Vista’s endless security warnings. They were almost always false alarms, and ignoring them had no adverse effects. So users ignored them, and the warnings ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Six Years of Patch Tuesdays

Nice article summing up six years of Microsoft Patch Tuesdays:

The total number of flaws disclosed and patched by the software maker so far this year stands at around 160, more than the 155 or so that Microsoft reported for all of 2008. The number of flaws reported in Microsoft products over the last two years is more than double the number of flaws disclosed in 2004 and 2005, the first two full years of Patch Tuesdays.

The last time Microsoft did not release any patches on a Patch Tuesday was March 2007, more than 30 months ago. In the past six years, Microsoft had just four patch-free months—two of which were in 2005. In contrast, the company has issued patches for 10 or more vulnerabilities on more than 20 occasions and patches for 20 or more flaws in a single month on about 10 occasions, including yesterday.

I wrote about the “patch treadmill,” pointing out that there are simply too many patches and that it’s impossible to keep up:

Security professionals are quick to blame system administrators who don’t install every patch. “They should have updated their systems; it’s their own fault when they get hacked.” This is beginning to feel a lot like blaming the victim. “He should have known not to walk down that deserted street; it’s his own fault he was mugged.” “She should never have dressed that provocatively; it’s her own fault she was attacked.” Perhaps such precautions should have been taken, but the real blame lies elsewhere.

Those who manage computer networks are people too, and people don’t always do the smartest thing. They know they’re supposed to install all patches. But sometimes they can’t take critical systems off-line. Sometimes they don’t have the staffing available to patch every system on their network. Sometimes applying a patch breaks something else on their network. I think it’s time the industry realized that expecting the patch process to improve network security just doesn’t work.

Patching is essentially an impossible problem. A patch needs to be incredibly well-tested. It has to work, without tweaking, on every configuration of the software out there. And for security reasons, it needs to be pushed out to users within days—hours, if possible. These two requirements are mutually contradictory: you can’t have a piece of software that is both well-tested and quickly written.

Before October 2003, Microsoft’s patching was a mess. Patches weren’t well-tested. They broke systems so frequently that many sysadmins wouldn’t install them without extensive testing. There were jokes that a Microsoft patch was indistinguishable from a DoS attack.

In 2003, Microsoft went to a once-a-month patching cycle, and I think it’s been a resounding success. Microsoft’s patches are much better tested. They’re much less likely to break other things. And, as a result, many more people have turned on automatic update, meaning that many more people have their patches up to date. The downside is that the window of exposure—the time period between a vulnerability’s release and the availability of a patch—is longer. Patch Tuesdays might be the best we can do, but the whole patching system is fundamentally broken. This is what I wrote last year:

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

Posted on October 19, 2009 at 3:38 PM

Secret Questions

In 2004, I wrote about the prevalence of secret questions as backup passwords. The problem is that the answers to these “secret questions” are often much easier to guess than random passwords. Mother’s maiden name isn’t very secret. Name of first pet, name of favorite teacher: there are only a handful of common names. Favorite color: I could probably guess that in no more than five attempts.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.

Here’s some actual research on the issue:

It’s no secret: Measuring the security and reliability of authentication via ‘secret’ questions

Abstract:

All four of the most popular webmail providers—AOL, Google, Microsoft, and Yahoo!—rely on personal questions as the secondary authentication secrets used to reset account passwords. The security of these questions has received limited formal scrutiny, almost all of which predates webmail. We ran a user study to measure the reliability and security of the questions used by all four webmail providers. We asked participants to answer these questions and then asked their acquaintances to guess their answers. Acquaintances with whom participants reported being unwilling to share their webmail passwords were able to guess 17% of their answers. Participants forgot 20% of their own answers within six months. What’s more, 13% of answers could be guessed within five attempts by guessing the most popular answers of other participants, though this weakness is partially attributable to the geographic homogeneity of our participant pool.

Posted on May 25, 2009 at 9:56 AM

Microsoft Bans Memcpy()

This seems smart:

Microsoft plans to formally banish the popular programming function that’s been responsible for an untold number of security vulnerabilities over the years, not just in Windows but in countless other applications based on the C language. Effective later this year, Microsoft will add memcpy(), CopyMemory(), and RtlCopyMemory() to its list of function calls banned under its secure development lifecycle.

Here’s the list of banned function calls. This doesn’t help secure legacy code, of course, but you have to start somewhere.

Posted on May 20, 2009 at 6:17 AM

Secure Version of Windows Created for the U.S. Air Force

I have long argued that the government should use its massive purchasing power to pressure software vendors to improve security. Seems like the U.S. Air Force has done just that:

The Air Force, on the verge of renegotiating its desktop-software contract with Microsoft, met with Ballmer and asked the company to deliver a secure configuration of Windows XP out of the box. That way, Air Force administrators wouldn’t have to spend time re-configuring, and the department would have uniform software across the board, making it easier to control and maintain patches.

Surprisingly, Microsoft quickly agreed to the plan, and Ballmer got personally involved in the project.

“He has half-a-dozen clients that he personally gets involved with, and he saw that this just made a lot of sense,” Gilligan said. “They had already done preliminary work themselves trying to identify what would be a more secure configuration. So we fine-tuned and added to that.”

The NSA got together with the National Institute of Standards and Technology, the Defense Information Systems Agency and the Center for Internet Security to decide what to lock down in the Air Force special edition.

Many of the changes were complex and technical, but Gilligan says one of the most important and simplest was an obvious fix to how Windows XP handled passwords. The Air Force insisted the system be configured so administrative passwords were unique, and different from general user passwords, preventing an average user from obtaining administrative privileges. Specifications were added to increase the length and complexity of passwords and expire them every 60 days.

It then took two years for the Air Force to catalog and test all the software applications on its networks against the new configuration to uncover conflicts. In some cases, where internally designed software interacted with Windows XP in an insecure way, they had to change the in-house software.

Now I want Microsoft to offer this configuration to everyone.

EDITED TO ADD (5/6): Microsoft responds:

Thanks for covering this topic, but unfortunately the reporter for the original article got a lot of the major facts, which you relied upon, wrong. For instance, there isn’t a special version of Windows for the Air Force. They use the same SKUs as everyone else. We didn’t deliver special settings that only the Air Force can access. The Air Force asked us to help them create hardened GPOs and images, which the AF could use as the standard image. We agreed to assist, as we do with any company that hires us to assist in setting their own security policy as implemented in Windows.

The work from the AF ended up morphing into the Federal Desktop Core Configuration (FDCC) recommendations maintained by NIST. There are differences, but they are essentially the same thing. NIST initially used even more secure settings in the hardening process (many of which have since been relaxed because of operational issues, so it is now even closer to what the AF created).

Anyone can download the FDCC settings, documentation, and even complete images. I worked on the FDCC project for a little over a year, and Aaron Margosis has been involved for many years and continues to be. He offers all sorts of public knowledge and useful tools. Aaron has written a couple of tools that anyone can use to apply FDCC settings to local group policy, including the source code, if anyone wants to customize them.

In the initial article, a lot of the other improvements, such as patching, came from the use of better tools (SCCM, etc.), and were not necessarily due solely to the changes in the base image (although that certainly didn’t hurt). So it seems the author mixed up some of the different technology pushes and wrapped them up into a single story. He also seems to imply that this is something special and secret, but the truth is there is more openness with the FDCC program and the surrounding security outcomes than anything we’ve ever done before. Even better, huge agencies have already gone first in trying to use these hardened settings, and have essentially been beta testers for the rest of the world. The FDCC settings may not be the best fit for every company, but they are a good model to compare against.

Let me know if you have any questions.

Roger A. Grimes, Security Architect, ACE Team, Microsoft

EDITED TO ADD (5/12): FDCC policy specs.

Posted on May 6, 2009 at 6:43 AM

Security Fears Drive Iran to Linux

According to The Age in Australia:

“We would have to pay a lot of money,” said Sephery-Rad, noting that most of the government’s estimated one million PCs and the country’s total of six to eight million computers were being run almost exclusively on the Windows platform.

“Secondly, Microsoft software has a lot of backdoors and security weaknesses that are always being patched, so it is not secure. We are also under US sanctions. All this makes us think we need an alternative operating system.”

[…]

“Microsoft is a national security concern. Security in an operating system is an important issue, and when it is on a computer in the government it is of even greater importance,” said the official.

Posted on March 27, 2009 at 5:52 AM

Threat Modeling at Microsoft

Interesting paper by Adam Shostack:

Abstract. Describes a decade of experience threat modeling products and services at Microsoft. Describes the current threat modeling methodology used in the Security Development Lifecycle. The methodology is a practical approach, usable by non-experts, centered on data flow diagrams and a threat enumeration technique of ‘STRIDE per element.’ The paper covers some lessons learned which are likely applicable to other security analysis techniques. The paper closes with some possible questions for academic research.

Posted on October 13, 2008 at 6:21 AM

Bypassing Microsoft Vista's Memory Protection

This is huge:

Two security researchers have developed a new technique that essentially bypasses all of the memory protection safeguards in the Windows Vista operating system, an advance that many in the security community say will have far-reaching implications not only for Microsoft, but also on how the entire technology industry thinks about attacks.

In a presentation at the Black Hat briefings, Mark Dowd of IBM Internet Security Systems (ISS) and Alexander Sotirov of VMware Inc. will discuss the new methods they’ve found to get around Vista protections such as Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), and others by using Java, ActiveX controls, and .NET objects to load arbitrary content into Web browsers.

By taking advantage of the way that browsers, specifically Internet Explorer, handle active scripting and .NET objects, the pair have been able to load essentially whatever content they want into a location of their choice on a user’s machine.

Paper here.

EDITED TO ADD (8/11): Here’s commentary that says this isn’t such a big deal after all. I’m not convinced; I think this will turn out to be a bigger problem than that.

Posted on August 11, 2008 at 4:26 PM

Interesting Microsoft Patent Application

Guardian Angel:

An intelligent personalized agent monitors, regulates, and advises a user in decision-making processes for efficiency or safety concerns. The agent monitors an environment and present characteristics of a user and analyzes such information in view of stored preferences specific to one of multiple profiles of the user. Based on the analysis, the agent can suggest or automatically implement a solution to a given issue or problem. In addition, the agent can identify another potential issue that requires attention and suggests or implements action accordingly. Furthermore, the agent can communicate with other users or devices by providing and acquiring information to assist in future decisions. All aspects of environment observation, decision assistance, and external communication can be flexibly limited or allowed as desired by the user.

Note that Bill Gates and Ray Ozzie are co-inventors.

Posted on May 13, 2008 at 7:05 AM

