Entries Tagged "patching"


Worldwide Browser Patch Rates

Interesting research:

Abstract:

Although there is an increasing trend for attacks against popular Web browsers, only little is known about the actual patch level of daily used Web browsers on a global scale. We conjecture that users in large part do not actually patch their Web browsers based on recommendations, perceived threats, or any security warnings. Based on HTTP useragent header information stored in anonymized logs from Google’s web servers, we measured the patch dynamics of about 75% of the world’s Internet users for over a year. Our focus was on the Web browsers Firefox and Opera. We found that the patch level achieved is mainly determined by the ergonomics and default settings of built-in auto-update mechanisms. Firefox’ auto-update is very effective: most users installed a new version within three days. However, the maximum share of the latest, most secure version never exceeded 80% for Firefox users and 46% for Opera users at any day in 2007. This makes about 50 million Firefox users with outdated browsers an easy target for attacks. Our study is the result of the first global scale measurement of the patch dynamics of a popular browser.
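
The measurement itself is conceptually simple: a browser announces its name and version in the User-Agent header of every request, so anonymized server logs are enough to estimate what fraction of visitors run the latest release. Here is a minimal sketch of that idea; the version table, regular expression, and sample log lines are illustrative assumptions, not the paper’s actual pipeline.

    import re
    from collections import Counter

    # Hypothetical "latest version" table; the real values change constantly.
    LATEST = {"Firefox": "2.0.0.14", "Opera": "9.27"}
    UA_PATTERN = re.compile(r"(Firefox|Opera)[/ ](\d[\d.]*)")

    def classify(user_agent):
        """Return (browser, is_current), or None if the browser isn't tracked."""
        m = UA_PATTERN.search(user_agent)
        if not m:
            return None
        browser, version = m.group(1), m.group(2)
        return browser, version == LATEST[browser]

    def outdated_share(user_agents):
        """Fraction of recognized requests coming from non-current versions."""
        counts = Counter()
        for ua in user_agents:
            result = classify(ua)
            if result is not None:
                counts["current" if result[1] else "outdated"] += 1
        total = sum(counts.values())
        return counts["outdated"] / total if total else 0.0

    sample = [
        "Mozilla/5.0 (Windows; U) Gecko/20080404 Firefox/2.0.0.14",
        "Mozilla/5.0 (Windows; U) Gecko/20071008 Firefox/2.0.0.7",
        "Opera/9.27 (Windows NT 5.1; U; en)",
    ]
    print(outdated_share(sample))   # -> 0.333..., one of three requests is outdated

Counting per user rather than per request, handling multiple supported release branches, and anonymization are exactly the details the paper has to get right; this sketch ignores them.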

Posted on February 13, 2009 at 6:27 AM

Browser Insecurity

This excellent paper measures insecurity in the global population of browsers, using Google’s web server logs. Why is this important? Because browsers are an increasingly popular attack vector.

The results aren’t good.

…at least 45.2%, or 637 million users, were not using the most secure Web browser version on any working day from January 2007 to June 2008. These browsers are an easy target for drive-by download attacks as they are potentially vulnerable to known exploits.

That number breaks down as 577 million users of Internet Explorer, 38 million of Firefox, 17 million of Safari, and 5 million of Opera. Lots more detail in the paper, including some ideas for technical solutions.

EDITED TO ADD (7/2): More commentary.

Posted on July 3, 2008 at 7:02 AM

The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes–mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent–or protect against–those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society–in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities–again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible–and more and less legal–ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM

Reverse-Engineering Exploits from Patches

This is interesting research: given a security patch, can you automatically reverse-engineer the security vulnerability that is being patched and create exploit code to exploit it?

Turns out you can.

What does this mean?

Attackers can simply wait for a patch to be released, use these techniques, and with reasonable chance, produce a working exploit within seconds. Coupled with a worm, all vulnerable hosts could be compromised before most are even aware a patch is available, let alone download it. Thus, Microsoft should redesign Windows Update. We propose several possible solutions, some of which could be done with existing technology.

Full paper here.
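
The paper’s actual technique works on binary patches, using differencing and constraint solving to derive an input that reaches the changed code, which is well beyond a blog-sized example. But the core intuition can be sketched at the source level: diff the unpatched and patched versions, and treat any newly added check as a description of the bug. The toy function below is invented for illustration.

    import difflib

    def added_lines(unpatched_src, patched_src):
        """Lines the patch added; new validation checks are prime candidates
        for the condition a malicious input must violate."""
        diff = difflib.unified_diff(unpatched_src.splitlines(),
                                    patched_src.splitlines(), lineterm="")
        return [line[1:] for line in diff
                if line.startswith("+") and not line.startswith("+++")]

    # Toy example: the patch adds a bounds check, pointing straight at the bug.
    old = ("void copy(char *dst, const char *src, int n) {\n"
           "    memcpy(dst, src, n);\n"
           "}")
    new = ("void copy(char *dst, const char *src, int n) {\n"
           "    if (n > 64) return;          /* added by the patch */\n"
           "    memcpy(dst, src, n);\n"
           "}")

    for line in added_lines(old, new):
        print("patch added:", line.strip())   # -> the new 'n > 64' check

From there, the attacker’s job is to construct an input that violates the new check (here, one with n greater than 64), which the unpatched code still accepts.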

Posted on April 23, 2008 at 1:35 PM

Benevolent Worms

This is a stupid idea:

Milan Vojnovic and colleagues from Microsoft Research in Cambridge, UK, want to make useful pieces of information such as software updates behave more like computer worms: spreading between computers instead of being downloaded from central servers.

The research may also help defend against malicious types of worm, the researchers say.

Software worms spread by self-replicating. After infecting one computer they probe others to find new hosts. Most existing worms randomly probe computers when looking for new hosts to infect, but that is inefficient, says Vojnovic, because they waste time exploring groups or “subnets” of computers that contain few uninfected hosts.
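
The inefficiency claim is easy to see in a toy simulation. This is not Vojnovic’s algorithm, just an illustration under invented parameters: hosts that still need the update are clustered in a few subnets, and a prober that sweeps a subnet after finding one such host needs far fewer probes than one that picks addresses uniformly at random.

    import random

    random.seed(1)
    SUBNETS, HOSTS_PER_SUBNET = 100, 256
    # Hosts needing the update are clustered in 10 of the 100 subnets.
    targets = {(s, h) for s in random.sample(range(SUBNETS), 10)
                      for h in range(HOSTS_PER_SUBNET)}

    def uniform_random_probing():
        """Probe uniformly at random until every target host has been reached."""
        found, probes = set(), 0
        while found != targets:
            probes += 1
            addr = (random.randrange(SUBNETS), random.randrange(HOSTS_PER_SUBNET))
            if addr in targets:
                found.add(addr)
        return probes

    def subnet_aware_probing():
        """After one hit in a subnet, sweep that whole subnet before moving on."""
        found, probes, swept = set(), 0, set()
        while found != targets:
            probes += 1
            s = random.randrange(SUBNETS)
            if (s, random.randrange(HOSTS_PER_SUBNET)) in targets and s not in swept:
                swept.add(s)
                for h in range(HOSTS_PER_SUBNET):
                    probes += 1
                    found.add((s, h))   # in this toy model, the whole subnet is a target
        return probes

    print("uniform random probes:", uniform_random_probing())
    print("subnet-aware probes:  ", subnet_aware_probing())

The gap between the two numbers is the efficiency the researchers are after; whether that efficiency is something you want in a distribution mechanism is the question the rest of this post answers.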

This idea pops up every few years. This is what I wrote back in 2003, updating something I wrote in 2000:

This is tempting for several reasons. One, it’s poetic: turning a weapon against itself. Two, it lets ethical programmers share in the fun of designing worms. And three, it sounds like a promising technique to solve one of the nastiest online security problems: patching or repairing computers’ vulnerabilities.

Everyone knows that patching is in shambles. Users, especially home users, don’t do it. The best patching techniques involve a lot of negotiation, pleading, and manual labor…things that nobody enjoys very much. Beneficial worms look like a happy solution. You turn a Byzantine social problem into a fun technical problem. You don’t have to convince people to install patches and system updates; you use technology to force them to do what you want.

And that’s exactly why it’s a terrible idea. Patching other people’s machines without annoying them is good; patching other people’s machines without their consent is not. A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, and allowing for uninstallation all make a worm harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain all make for bad software distribution.
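
For contrast, here is a minimal sketch of the pull-based alternative the four-point list above describes: an updater that checks a server, asks the user, and records what it installed. The URL, manifest format, and function names are invented for illustration.

    import json
    import urllib.request

    UPDATE_URL = "https://updates.example.com/manifest.json"   # hypothetical endpoint
    INSTALL_LOG = "installed.json"

    def check_for_update(current_version):
        """Pull the update manifest; return it only if a different version is offered."""
        with urllib.request.urlopen(UPDATE_URL) as resp:
            manifest = json.load(resp)
        return manifest if manifest.get("version") != current_version else None

    def install(manifest, ask=input):
        """Install only with explicit consent, and keep a record of what was installed."""
        answer = ask(f"Install version {manifest['version']}? [y/N] ")
        if answer.strip().lower() != "y":
            return False                      # the user said no; do nothing
        # ... download, verify, and apply the package here ...
        with open(INSTALL_LOG, "w") as f:     # auditable record: what went where
            json.dump(manifest, f)
        return True

Every property that makes this acceptable (the pull, the prompt, the log) is exactly what a worm has to discard in order to spread on its own.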

EDITED TO ADD (2/19): This is worth reading on the topic.

EDITED TO ADD (2/19): Microsoft is trying to dispel the rumor that it is working on this technology.

EDITED TO ADD (2/21): Using benevolent worms to test Internet censorship.

EDITED TO ADD (3/13): The benevolent W32.Welchia.Worm, intended to fix Blaster-infected systems, just created havoc.

Posted on February 19, 2008 at 6:57 AM

Microsoft Updates Both XP and Vista Without User Permission or Notification

The details are still fuzzy, but if this is true, it’s a huge deal.

The issue isn’t that Microsoft can do this; that’s just stupid company stuff. But what’s to stop anyone else from using Microsoft’s stealth remote-install capability to put anything onto anyone’s computer? How long before some smart hacker exploits this, and then writes a program that allows all the dumb hackers to do it?

When you build a capability like this into your system, you decrease your overall security.

Posted on September 17, 2007 at 6:12 AM

Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems — those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities — electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. “Weaknesses existed in all control areas and computing device types reviewed,” the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system — whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc. — is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Florida E-Voting Study

Florida just recently released another study of the Diebold voting machines. The reviewers — real security researchers, as in the California study, not posers — studied v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature-verification step has a problem: it computes H = signature**3 mod N, and then compares only 160 bits of H with the SHA-1 hash of the message (see the sketch after this list). This is a natural way to implement RSA signatures if you just read a security textbook, but it is insecure: the report demonstrates a 250-line Java program that forges RSA signatures over (basically) arbitrary messages.
  • Section 3.10.3. The original Hopkins report talked about the lack of crypto for network (or dialup) communications between a TSx voting machine and the back-end GEMS server. Apparently, Diebold tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch its SSL implementation. This new report looks at the patched version and finds that it is still vulnerable to a man-in-the-middle attack.
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights.

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
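
The Section 3.5 flaw is easy to see in code. Here is a minimal sketch of the broken check as the report describes it; the variable names are mine, and which 160 bits get compared is assumed here to be the low-order ones. The verifier cubes the signature and compares only 160 bits of the result against the SHA-1 digest, so a forger only has to control those 160 bits instead of producing a full, correctly padded RSA signature.

    import hashlib

    def broken_verify(message: bytes, signature: int, n: int) -> bool:
        """Flawed check: compare only 160 bits of signature**3 mod n against
        SHA-1(message), instead of requiring a full, properly padded value."""
        h = pow(signature, 3, n)                        # public exponent e = 3
        digest = int.from_bytes(hashlib.sha1(message).digest(), "big")
        return (h & ((1 << 160) - 1)) == digest         # only 160 bits checked

Because the rest of h is never examined, an attacker has enormous freedom in choosing signature values, and that freedom is what the report’s 250-line forgery program exploits.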

Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSX at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM

Business Models for Discovering Security Vulnerabilities

One company sells them to its corporate customers:

The founder of a small Moscow security company, Gleg, Legerov scrutinizes computer code in commonly used software for programming bugs, which attackers can use to break into computer systems, and sends his findings to a few dozen corporate customers around the world. Each customer pays more than $10,000 for information it can use to plug the hidden holes in its computers and stay ahead of criminal hackers.

iDefense buys them:

This month, iDefense, a Virginia-based subsidiary of the technology company VeriSign, began offering an $8,000 bounty to the first six researchers to find holes in Vista or the newest version of Internet Explorer, and up to $4,000 more for code that can take advantage of the weaknesses. Like Gleg, iDefense will sell information about those vulnerabilities to companies and government agencies for an undisclosed amount, though iDefense makes it a practice to alert vendors like Microsoft first.

So do criminals:

But the iDefense rewards are low compared to bounties offered on the black market. In December, the Japanese antivirus company Trend Micro found a Vista vulnerability being offered by an anonymous hacker on a Romanian Web forum for $50,000.

There’s a lot of FUD in this article, but also some good stuff.

Posted on February 5, 2007 at 12:44 PM
