Entries Tagged "Microsoft"


Anonymity and the Internet

Universal identification is portrayed by some as the holy grail of Internet security. Anonymity is bad, the argument goes; and if we abolish it, we can ensure only the proper people have access to their own information. We’ll know who is sending us spam and who is trying to hack into corporate networks. And when there are massive denial-of-service attacks, such as those against Estonia or Georgia or South Korea, we’ll know who was responsible and take action accordingly.

The problem is that it won’t work. Any design of the Internet must allow for anonymity. Universal identification is impossible. Even attribution—knowing who is responsible for particular Internet packets—is impossible. Attempting to build such a system is futile, and will only give criminals and hackers new ways to hide.

Imagine a magic world in which every Internet packet could be traced to its origin. Even in this world, our Internet security problems wouldn’t be solved. There’s a huge gap between proving that a packet came from a particular computer and proving that it was directed by a particular person. This is the exact problem we have with botnets, or pedophiles storing child porn on innocents’ computers. In these cases, we know the origins of the DDoS packets and the spam; they’re from legitimate machines that have been hacked. Attribution isn’t as valuable as you might think.

Implementing an Internet without anonymity is very difficult, and causes its own problems. In order to have perfect attribution, we’d need agencies—real-world organizations—to provide Internet identity credentials based on other identification systems: passports, national identity cards, driver’s licenses, whatever. Sloppier identification systems, based on things such as credit cards, are simply too easy to subvert. We have nothing that comes close to this global identification infrastructure. Moreover, centralizing information like this actually hurts security because it makes identity theft that much more profitable a crime.

And realistically, any theoretical ideal Internet would need to allow people access even without their magic credentials. People would still use the Internet at public kiosks and at friends’ houses. People would lose their magic Internet tokens just like they lose their driver’s licenses and passports today. The legitimate bypass mechanisms would allow even more ways for criminals and hackers to subvert the system.

On top of all this, the magic attribution technology doesn’t exist. Bits are bits; they don’t come with identity information attached to them. Every software system we’ve ever invented has been successfully hacked, repeatedly. We simply don’t have anywhere near the expertise to build an airtight attribution system.

Not that it really matters. Even if everyone could trace all packets perfectly, to the person or origin and not just the computer, anonymity would still be possible. It would just take one person to set up an anonymity server. If I wanted to send a packet anonymously to someone else, I’d just route it through that server. For even greater anonymity, I could route it through multiple servers. This is called onion routing and, with appropriate cryptography and enough users, it adds anonymity back to any communications system that prohibits it.
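The layering is simple enough to sketch in code. Here’s a minimal illustration in Python (using the third-party cryptography package) of the nested encryption at the heart of onion routing. The relay names, and the assumption that the sender shares a symmetric key with every relay, are simplifications for illustration; real systems such as Tor negotiate per-hop keys and add padding.

```python
# Toy sketch of onion routing's nested encryption. Not a real anonymity
# system: relay names and shared symmetric keys are illustrative only.
from cryptography.fernet import Fernet

# Each relay holds its own key; in this sketch the sender knows them all.
relays = {name: Fernet(Fernet.generate_key()) for name in ("r1", "r2", "r3")}
route = ["r1", "r2", "r3"]  # the exit relay is last

def wrap(message: bytes, route: list[str]) -> bytes:
    """Encrypt in layers: innermost layer for the exit, outermost for hop 1."""
    for name in reversed(route):
        message = relays[name].encrypt(message)
    return message

def forward(onion: bytes, route: list[str]) -> bytes:
    """Each relay peels exactly one layer; only the exit sees the plaintext."""
    for name in route:
        onion = relays[name].decrypt(onion)
    return onion

onion = wrap(b"an anonymous packet", route)
assert forward(onion, route) == b"an anonymous packet"
```

Because each relay can remove only its own layer, no single relay sees both the sender and the plaintext destination, which is what restores anonymity to an identified network.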

Attempts to banish anonymity from the Internet won’t affect those savvy enough to bypass it, would cost billions, and would have only a negligible effect on security. What such attempts would do is affect the average user’s access to free speech, including those who use the Internet’s anonymity to survive: dissidents in Iran, China, and elsewhere.

Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you’ll never truly know where a packet came from. Work on the problems you can solve: software that’s secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we’re doing, and they’ll do more to improve security than trying to fix insoluble problems.

The whole attribution problem is very similar to the copy-protection/digital-rights-management problem. Just as it’s impossible to make specific bits not copyable, it’s impossible to know where specific bits came from. Bits are bits. They don’t naturally come with restrictions on their use attached to them, and they don’t naturally come with author information attached to them. Any attempts to circumvent this limitation will fail, and will increasingly need to be backed up by the sort of real-world police-state measures that the entertainment industry is demanding in order to make copy-protection work. That’s how China does it: police, informants, and fear.

Just as the music industry needs to learn that the world of bits requires a different business model, law enforcement and others need to understand that the old ideas of identification don’t work on the Internet. For good or for bad, whether you like it or not, there’s always going to be anonymity on the Internet.

This essay originally appeared in Information Security, as part of a point/counterpoint with Marcus Ranum. You can read Marcus’s response below my essay.

EDITED TO ADD (2/5): Microsoft’s Craig Mundie wants to abolish anonymity as well.

What Mundie is proposing is to impose authentication. He draws an analogy to automobile use. If you want to drive a car, you have to have a license (not to mention an inspection, insurance, etc.). If you do something bad with that car, like break a law, there is the chance that you will lose your license and be prevented from driving in the future. In other words, there is a legal and social process for imposing discipline. Mundie imagines three tiers of Internet ID: one for people, one for machines, and one for programs (which often act as proxies for the other two).

Posted on February 3, 2010 at 6:16 AM

Users Rationally Rejecting Security Advice

This paper, by Cormac Herley at Microsoft Research, sounds like me:

Abstract: It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users’ rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort. Looking at various examples of security advice, we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain.

Sounds like me.
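The phishing arithmetic in the abstract is easy to reproduce. Here’s a back-of-the-envelope sketch in Python; every input below is an assumption of mine for illustration, not a figure taken from the paper:

```python
# Back-of-the-envelope version of Herley's argument.
# All numbers are illustrative assumptions, not figures from the paper.
users = 200e6                  # online adults (assumed)
minutes_per_day = 1.0          # time spent inspecting URLs (assumed)
wage_per_hour = 15.0           # dollar value of a user-hour (assumed)
annual_phishing_losses = 60e6  # direct losses, dollars/year (assumed)

cost = users * (minutes_per_day / 60.0) * wage_per_hour * 365
print(f"user-time cost: ${cost / 1e9:.1f}B/year")                # ~$18B/year
print(f"ratio to losses: {cost / annual_phishing_losses:.0f}x")  # ~300x
```

Under these assumptions the aggregate time cost is roughly 300 times the direct losses, which is the paper’s “two orders of magnitude” point: even a tiny per-user burden, applied to everyone, swamps the harm it prevents.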

EDITED TO ADD (12/12): Related article on usable security.

Posted on November 24, 2009 at 12:40 PM

Ballmer Blames the Failure of Windows Vista on Security

According to the Telegraph:

Mr Ballmer said: “We got some uneven reception when [Vista] first launched in large part because we made some design decisions to improve security at the expense of compatibility. I don’t think from a word-of-mouth perspective we ever recovered from that.”

Commentary:

Vista’s failure and Ballmer’s faulting of security is a bit of “be careful what you wish for.” Vista (codenamed “Longhorn” during its development) was always intended to be a more secure operating system. Following the security disasters of 2000 and 2001 that befell Windows 98 and 2000, Microsoft shut down all software development and launched the Trustworthy Computing Initiative, which advocated secure coding practices. Microsoft retrained thousands of programmers to eliminate common security problems such as buffer overflows. The immediate result was a retooling of Windows XP to make it more secure. The long-term goal, though, was to make Vista the most secure operating system in Microsoft’s history.

What made XP and Vista more secure? Eliminating (or reducing) buffer overflow errors helped. But what really made a difference was shutting off services by default. Many of the vulnerabilities exploited in Windows 98, NT, and 2000 were actually a result of unused services that were active by default. Microsoft’s own vulnerability tracking shows that Vista has far fewer reported vulnerabilities than any of its predecessors. Unfortunately, a Vista locked down out of the box was less palatable to users.

Security obstacles weren’t the only ills Vista suffered, of course. A huge memory footprint, incompatible graphics requirements, slow responsiveness, and a general sense that it was already behind competing Mac and Linux OSes in functionality and features made Vista land with a thud. In my humble opinion, the security gains in Vista were worth many of the tradeoffs; it was the other technical requirements and incompatible applications that doomed this operating system.

There was also the problem of Vista’s endless security warnings. They were almost always false alarms, with no adverse effects from ignoring them. So users ignored them, and the warnings ended up being nothing but an annoyance.

Security warnings are often a way for the developer to avoid making a decision. “We don’t know what to do here, so we’ll put up a warning and ask the user.” But unless the users have the information and the expertise to make the decision, they’re not going to be able to. We need user interfaces that only put up warnings when it matters.

I never upgraded to Vista. I’m hoping Windows 7 is worth upgrading to. We’ll see.

EDITED TO ADD (10/22): Another opinion.

Posted on October 21, 2009 at 7:46 AM

Six Years of Patch Tuesdays

Nice article summing up six years of Microsoft Patch Tuesdays:

The total number of flaws disclosed and patched by the software maker so far this year stands at around 160, more than the 155 or so that Microsoft reported for all of 2008. The number of flaws reported in Microsoft products over the last two years is more than double the number of flaws disclosed in 2004 and 2005, the first two full years of Patch Tuesdays.

The last time Microsoft did not release any patches on a Patch Tuesday was March 2007, more than 30 months ago. In the past six years, Microsoft had just four patch-free months—two of which were in 2005. In contrast, the company has issued patches for 10 or more vulnerabilities on more than 20 occasions and patches for 20 or more flaws in a single month on about 10 occasions, including yesterday.

I wrote about the “patch treadmill,” pointing out that there are simply too many patches and that it’s impossible to keep up:

Security professionals are quick to blame system administrators who don’t install every patch. “They should have updated their systems; it’s their own fault when they get hacked.” This is beginning to feel a lot like blaming the victim. “He should have known not to walk down that deserted street; it’s his own fault he was mugged.” “She should never have dressed that provocatively; it’s her own fault she was attacked.” Perhaps such precautions should have been taken, but the real blame lies elsewhere.

Those who manage computer networks are people too, and people don’t always do the smartest thing. They know they’re supposed to install all patches. But sometimes they can’t take critical systems off-line. Sometimes they don’t have the staffing available to patch every system on their network. Sometimes applying a patch breaks something else on their network. I think it’s time the industry realized that expecting the patch process to improve network security just doesn’t work.

Patching is essentially an impossible problem. A patch needs to be incredibly well-tested. It has to work, without tweaking, on every configuration of the software out there. And for security reasons, it needs to be pushed out to users within days—hours, if possible. These two requirements are mutually contradictory: you can’t have a piece of software that is both well-tested and quickly written.

Before October 2003, Microsoft’s patching was a mess. Patches weren’t well-tested. They broke systems so frequently that many sysadmins wouldn’t install them without extensive testing. There were jokes that a Microsoft patch was indistinguishable from a DoS attack.

In 2003, Microsoft went to a once-a-month patching cycle, and I think it’s been a resounding success. Microsoft’s patches are much better tested. They’re much less likely to break other things. And, as a result, many more people have turned on automatic update, meaning that many more people have their patches up to date. The downside is that the window of exposure—the time period between a vulnerability’s public disclosure and the availability of a patch—is longer. Patch Tuesdays might be the best we can do, but the whole patching system is fundamentally broken. This is what I wrote last year:

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.
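The cost of a fixed monthly cycle is easy to quantify. Here’s a small Python sketch; the date helper is my own illustration, not anything Microsoft publishes. A flaw disclosed the day after one Patch Tuesday waits almost a month for the next:

```python
# Sketch of the window of exposure created by a fixed monthly patch cycle.
# The helper functions are illustrative, not any official schedule API.
import datetime

def patch_tuesday(year: int, month: int) -> datetime.date:
    """Second Tuesday of the given month (weekday(): Mon=0, Tue=1)."""
    first = datetime.date(year, month, 1)
    days_to_tuesday = (1 - first.weekday()) % 7
    return first + datetime.timedelta(days=days_to_tuesday + 7)

def next_patch_tuesday(day: datetime.date) -> datetime.date:
    """First scheduled Patch Tuesday on or after the given day."""
    pt = patch_tuesday(day.year, day.month)
    if pt >= day:
        return pt
    year, month = (day.year + 1, 1) if day.month == 12 else (day.year, day.month + 1)
    return patch_tuesday(year, month)

# Worst case: a vulnerability disclosed the day after October 2009's Patch Tuesday.
disclosed = patch_tuesday(2009, 10) + datetime.timedelta(days=1)
window = (next_patch_tuesday(disclosed) - disclosed).days
print(window)  # 27 days of exposure before the next scheduled fix
```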

Posted on October 19, 2009 at 3:38 PM

Secret Questions

In 2004, I wrote about the prevalence of secret questions as backup passwords. The problem is that the answers to these “secret questions” are often much easier to guess than random passwords. Mother’s maiden name isn’t very secret. Name of first pet, name of favorite teacher: there are some common names. Favorite color: I could probably guess that in no more than five attempts.

The result is that the normal security protocol (passwords) falls back to a much less secure protocol (secret questions). And the security of the entire system suffers.
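The weakest-link logic can be made concrete. In this toy Python sketch, the attacker simply attacks whichever protocol is easier; the color frequencies and the password’s entropy are invented for illustration, not survey data:

```python
# Sketch of why a fallback weakens the whole system: the attacker picks
# the easier of the two authentication paths. All numbers are assumed.
color_freq = {"blue": 0.30, "green": 0.15, "red": 0.13,
              "purple": 0.08, "black": 0.07}   # assumed answer popularity

attempts = 5
# Chance of success by guessing the most popular answers first.
p_question = sum(sorted(color_freq.values(), reverse=True)[:attempts])
# Chance of guessing a password with ~40 bits of entropy (assumed).
p_password = attempts / 2**40

# The system is only as strong as its weakest accepted path.
p_system = max(p_question, p_password)
print(f"guess within {attempts} tries via the question: {p_question:.0%}")  # 73%
print(f"guess within {attempts} tries via the password: {p_password:.2e}")
print(f"effective attacker success rate: {p_system:.0%}")
```

Under these toy numbers, forty bits of password strength collapse to a coin flip’s worth of security the moment the “favorite color” fallback is accepted.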

Here’s some actual research on the issue:

It’s no secret: Measuring the security and reliability of authentication via ‘secret’ questions

Abstract:

All four of the most popular webmail providers—AOL, Google, Microsoft, and Yahoo!—rely on personal questions as the secondary authentication secrets used to reset account passwords. The security of these questions has received limited formal scrutiny, almost all of which predates webmail. We ran a user study to measure the reliability and security of the questions used by all four webmail providers. We asked participants to answer these questions and then asked their acquaintances to guess their answers. Acquaintances with whom participants reported being unwilling to share their webmail passwords were able to guess 17% of their answers. Participants forgot 20% of their own answers within six months. What’s more, 13% of answers could be guessed within five attempts by guessing the most popular answers of other participants, though this weakness is partially attributable to the geographic homogeneity of our participant pool.

Posted on May 25, 2009 at 9:56 AM

Microsoft Bans Memcpy()

This seems smart:

Microsoft plans to formally banish the popular programming function that’s been responsible for an untold number of security vulnerabilities over the years, not just in Windows but in countless other applications based on the C language. Effective later this year, Microsoft will add memcpy(), CopyMemory(), and RtlCopyMemory() to its list of function calls banned under its secure development lifecycle.

Here’s the list of banned function calls. This doesn’t help secure legacy code, of course, but you have to start somewhere.
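Enforcing a banned-function list is conceptually simple: fail the build on any hit. Here’s a minimal sketch of such a scanner in Python. It’s my own illustration, not Microsoft’s actual SDL tooling, which works through compiler warnings and its published banned.h header:

```python
# Minimal build-breaking scanner for banned C function calls.
# Illustrative sketch only; not Microsoft's SDL enforcement tooling.
import re
import sys
from pathlib import Path

BANNED = ("memcpy", "CopyMemory", "RtlCopyMemory",
          "strcpy", "strcat", "sprintf")
PATTERN = re.compile(r"\b(" + "|".join(BANNED) + r")\s*\(")

def scan(path: Path) -> int:
    """Print every banned call in one file; return the number of hits."""
    hits = 0
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if PATTERN.search(line):
            print(f"{path}:{lineno}: banned call: {line.strip()}")
            hits += 1
    return hits

if __name__ == "__main__":
    # Each argument is a directory tree of C source to scan.
    total = sum(scan(p) for arg in sys.argv[1:] for p in Path(arg).rglob("*.c"))
    sys.exit(1 if total else 0)
```

A textual scan like this produces false positives (comments, strings), which is why real enforcement happens in the compiler; but wired into a build, even this crude check keeps new code off the banned list.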

Posted on May 20, 2009 at 6:17 AM

Secure Version of Windows Created for the U.S. Air Force

I have long argued that the government should use its massive purchasing power to pressure software vendors to improve security. Seems like the U.S. Air Force has done just that:

The Air Force, on the verge of renegotiating its desktop-software contract with Microsoft, met with Ballmer and asked the company to deliver a secure configuration of Windows XP out of the box. That way, Air Force administrators wouldn’t have to spend time re-configuring, and the department would have uniform software across the board, making it easier to control and maintain patches.

Surprisingly, Microsoft quickly agreed to the plan, and Ballmer got personally involved in the project.

“He has half-a-dozen clients that he personally gets involved with, and he saw that this just made a lot of sense,” Gilligan said. “They had already done preliminary work themselves trying to identify what would be a more secure configuration. So we fine-tuned and added to that.”

The NSA got together with the National Institute of Standards and Technology, the Defense Information Systems Agency and the Center for Internet Security to decide what to lock down in the Air Force special edition.

Many of the changes were complex and technical, but Gilligan says one of the most important and simplest was an obvious fix to how Windows XP handled passwords. The Air Force insisted the system be configured so administrative passwords were unique, and different from general user passwords, preventing an average user from obtaining administrative privileges. Specifications were added to increase the length and complexity of passwords and expire them every 60 days.

It then took two years for the Air Force to catalog and test all the software applications on its networks against the new configuration to uncover conflicts. In some cases, where internally designed software interacted with Windows XP in an insecure way, they had to change the in-house software.
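As a sketch of what that password hardening looks like in code, here’s a toy validator in Python. The 60-day expiry and the admin/user separation follow the article’s description; the specific length and complexity thresholds are my own assumptions, since the article doesn’t give them:

```python
# Toy validator for the password rules described above. The 60-day expiry
# and admin/user separation follow the article; length and complexity
# thresholds below are assumed for illustration.
import re
from datetime import date, timedelta

MAX_AGE = timedelta(days=60)   # expire passwords every 60 days

def acceptable(password: str, last_changed: date,
               admin_passwords: set[str], is_admin: bool) -> bool:
    if date.today() - last_changed > MAX_AGE:
        return False                      # expired
    if len(password) < 12:                # assumed minimum length
        return False
    classes = [r"[a-z]", r"[A-Z]", r"[0-9]", r"[^a-zA-Z0-9]"]
    if sum(bool(re.search(c, password)) for c in classes) < 3:
        return False                      # assumed: 3 of 4 character classes
    if not is_admin and password in admin_passwords:
        return False                      # user passwords must differ from admin
    return True
```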

Now I want Microsoft to offer this configuration to everyone.

EDITED TO ADD (5/6): Microsoft responds:

Thanks for covering this topic, but unfortunately the reporter for the original article got a lot of the major facts, which you relied upon, wrong. For instance, there isn’t a special version of Windows for the Air Force. They use the same SKUs as everyone else. We didn’t deliver special settings that only the Air Force can access. The Air Force asked us to help them create hardened GPOs and images, which the AF could use as the standard image. We agreed to assist, as we do with any company that hires us to assist in setting their own security policy as implemented in Windows.

The work from the AF ended up morphing into the Federal Desktop Core Configuration (FDCC) recommendations maintained by NIST. There are differences, but they are essentially the same thing. NIST initially used even more secure settings in the hardening process (many of which have since been relaxed because of operational issues, so it is now even closer to what the AF created).

Anyone can download the FDCC settings, documentation, and even complete images. I worked on the FDCC project for a little over a year, and Aaron Margosis has been involved for many years and continues to be involved. He offers all sorts of public knowledge and useful tools. Aaron has written a couple of tools that anyone can use to apply FDCC settings to local group policy. They include the source code, if anyone wants to customize them.

In the initial article, a lot of the other improvements, such as patching, came from the use of better tools (SCCM, etc.), and were not necessarily due solely to the changes in the base image (although that certainly didn’t hurt). So it seems the author mixed up some of the different technology pushes and wrapped them into a single story. He also seems to imply that this is something special and secret, but the truth is there is more openness with the FDCC program and the surrounding security outcomes than anything we’ve ever done before. Even better, huge agencies have already gone first in trying to use these hardened settings, essentially acting as beta testers for the rest of the world. The FDCC settings may not be the best fit for every company, but they are a good model to compare against.

Let me know if you have any questions.

Roger A. Grimes, Security Architect, ACE Team, Microsoft

EDITED TO ADD (5/12): FDCC policy specs.

Posted on May 6, 2009 at 6:43 AM

Security Fears Drive Iran to Linux

According to The Age in Australia:

“We would have to pay a lot of money,” said Sephery-Rad, noting that most of the government’s estimated one million PCs and the country’s total of six to eight million computers were being run almost exclusively on the Windows platform.

“Secondly, Microsoft software has a lot of backdoors and security weaknesses that are always being patched, so it is not secure. We are also under US sanctions. All this makes us think we need an alternative operating system.”

[…]

“Microsoft is a national security concern. Security in an operating system is an important issue, and when it is on a computer in the government it is of even greater importance,” said the official.

Posted on March 27, 2009 at 5:52 AM

Threat Modeling at Microsoft

Interesting paper by Adam Shostack:

Abstract. Describes a decade of experience threat modeling products and services at Microsoft. Describes the current threat modeling methodology used in the Security Development Lifecycle. The methodology is a practical approach, usable by non-experts, centered on data flow diagrams and a threat enumeration technique of ‘STRIDE per element.’ The paper covers some lessons learned which are likely applicable to other security analysis techniques. The paper closes with some possible questions for academic research.
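The “STRIDE per element” chart is compact enough to show directly. Here’s a small Python sketch of the enumeration; the element-to-threat mapping follows the paper’s description, while the three-element example diagram is invented:

```python
# 'STRIDE per element' in miniature: each data flow diagram (DFD) element
# type attracts only a fixed subset of the six STRIDE threats.
STRIDE = {
    "external entity": {"Spoofing", "Repudiation"},
    "process": {"Spoofing", "Tampering", "Repudiation",
                "Information disclosure", "Denial of service",
                "Elevation of privilege"},
    # Repudiation applies to stores that hold logs.
    "data store": {"Tampering", "Repudiation",
                   "Information disclosure", "Denial of service"},
    "data flow": {"Tampering", "Information disclosure",
                  "Denial of service"},
}

# An invented three-element DFD: browser -> web server -> database.
dfd = [("user browser", "external entity"),
       ("web server", "process"),
       ("orders db", "data store")]

for name, kind in dfd:
    for threat in sorted(STRIDE[kind]):
        print(f"consider: {threat} against {name} ({kind})")
```

The point of the per-element restriction is that a non-expert enumerating threats gets a bounded, mechanical checklist instead of a blank page.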

Posted on October 13, 2008 at 6:21 AM

