Entries Tagged "patching"


Printer Security

At BlackHat last week, Brendan O’Connor warned about the dangers of insecure printers:

“Stop treating them as printers. Treat them as servers, as workstations,” O’Connor said in his presentation on Thursday. Printers should be part of a company’s patch program and be carefully managed, not forgotten by IT and handled by the most junior person on staff, he said.

I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computers.

Once a printer was under his control, O’Connor said he would be able to use it to map an organization’s internal network—a situation that could help stage further attacks. The breach gave him access to any of the information printed, copied or faxed from the device. He could also change the internal job counter—which can reduce, or increase, a company’s bill if the device is leased, he said.

The printer break-in also enables a number of practical jokes, such as sending print and scan jobs to arbitrary workers’ desktops, O’Connor said. Also, devices could be programmed to include, for example, an image of a paper clip on every print, fax or copy, ultimately driving office staffers to take the machine apart looking for the paper clip.

Getting copies of all printed documents is definitely a security vulnerability, but I think the biggest threat is that the printers are inside the network, and are a more-trusted launching pad for onward attacks.

One of the weaknesses in the Xerox system is an unsecured boot loader, the technology that loads the basic software on the device, O’Connor said. Other flaws lie in the device’s Web interface and in the availability of services such as the Simple Network Management Protocol and Telnet, he said.

O’Connor informed Xerox of the problems in January. The company did issue a fix for its WorkCentre 200 series, it said in a statement. “Thanks to Brendan’s efforts, we were able to post a patch for our customers in mid-January which fixes the issues,” a Xerox representative said in an e-mailed statement.

One of the reasons this is a particularly nasty problem is that people don’t update their printer software. Want to bet approximately 0% of the printer’s users installed that patch? And what about printers whose code can’t be patched?
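If you're wondering whether your own printers are in that state, the exposed services are easy to check for. Here's a minimal sketch in Python; the port list is illustrative, the printer address is a placeholder, and SNMP (which runs over UDP port 161) would need a separate probe:

```python
import socket

# Illustrative list of TCP management services often left open on printers.
# SNMP runs over UDP port 161 and would need a different kind of probe.
PRINTER_PORTS = {23: "telnet", 80: "web interface", 9100: "raw print (JetDirect)"}

def open_printer_services(host, timeout=1.0):
    """Return (port, service) pairs on `host` that accept a TCP connection."""
    found = []
    for port, service in sorted(PRINTER_PORTS.items()):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, service))
        except OSError:
            pass  # closed, filtered, or timed out
    return found

if __name__ == "__main__":
    for port, service in open_printer_services("192.0.2.10"):  # placeholder IP
        print(f"open: {port} ({service})")
```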

EDITED TO ADD (8/7): O’Connor’s name corrected.

Posted on August 7, 2006 at 10:59 AM

Hacked MySpace Server Infects a Million Computers with Malware

According to The Washington Post:

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows….

Clever attack.

EDITED TO ADD (7/27): It wasn’t MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.

EDITED TO ADD (8/5): Ed Felten comments.

Posted on July 24, 2006 at 6:46 AM

What if Your Vendor Won't Sell You a Security Upgrade?

Good question:

More frightening than my experience is the possibility that the company might do this to an existing customer. What good is a security product if the vendor refuses to sell you service on it? Without updates, most of these products are barely useful as doorstops.

The article demonstrates that a vendor might refuse to sell you a product for reasons you can’t understand, and that you might not get any warning of that fact. The moral is that you’re not only buying a security product, you’re buying a security company.

In our tests, we look at products, not companies. Things such as training, finances and corporate style don’t come into it. But when it comes to buying products, our tests aren’t enough. It’s important to investigate all those peripheral aspects of the vendor before you sign a purchase order. I was reminded of that the hard way.

Posted on April 12, 2006 at 12:40 PM

Benevolent Worms

Yet another story about benevolent worms and how they can secure our networks. This idea shows up every few years. (I wrote about it in 2000, and again in 2003.) This quote (emphasis mine) from the article shows what the problem is:

Simulations show that the larger the network grows, the more efficient this scheme should be. For example, if a network has 50,000 nodes (computers), and just 0.4% of those are honeypots, just 5% of the network will be infected before the immune system halts the virus, assuming the fix works properly. But a 200-million-node network, with the same proportion of honeypots, should see just 0.001% of machines get infected.
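To make the quoted scheme concrete, here’s a toy version of that kind of simulation, in Python. It’s a sketch under assumed mechanics and parameters, not the researchers’ model: infected machines probe random addresses, the first probe that hits a honeypot triggers detection, and a fix then fans out faster than the worm spreads.

```python
import random

def simulate(n_nodes=50_000, honeypot_frac=0.004, fix_fanout=4, seed=1):
    """Toy honeypot 'immune system'; returns the fraction of machines ever infected."""
    rng = random.Random(seed)
    honeypots = set(rng.sample(range(n_nodes), int(n_nodes * honeypot_frac)))
    patched, infected, ever_infected = set(), set(), set()
    detected = False

    patient_zero = rng.randrange(n_nodes)
    while patient_zero in honeypots:
        patient_zero = rng.randrange(n_nodes)
    infected.add(patient_zero)
    ever_infected.add(patient_zero)

    while infected:
        # Worm: each infected machine probes one random address per step.
        for _ in range(len(infected)):
            target = rng.randrange(n_nodes)
            if target in honeypots:
                detected = True  # a honeypot saw the worm
            elif target not in patched:
                infected.add(target)
                ever_infected.add(target)
        # Immune response: once detected, honeypots and patched machines
        # push the fix to several random machines per step, outracing the worm.
        if detected:
            for _ in range(fix_fanout * (len(patched) + len(honeypots))):
                target = rng.randrange(n_nodes)
                patched.add(target)
                infected.discard(target)

    return len(ever_infected) / (n_nodes - len(honeypots))

print(f"infected before the fix won: {simulate():.2%}")
```

Whatever numbers a toy like this produces, they miss the real objection.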

This is from my 2003 essay:

A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation—all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain, all make for bad software distribution.

All of this makes worms easy to get wrong and hard to recover from. Experimentation, most of it involuntary, proves that worms are very hard to debug successfully: in other words, once worms start spreading, it’s hard to predict exactly what they will do. Some viruses were written to propagate harmlessly, but did damage—ranging from crashed machines to clogged networks—because of bugs in their code. Many worms were written to do damage and turned out to be harmless (which is even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your average office environment, the code that successfully patches one machine won’t work on another. Indeed, sometimes the results are worse than any threat of external attack. Combining a tricky problem with a distribution mechanism that’s impossible to debug and difficult to control is fraught with danger. Every system administrator who’s ever distributed software automatically on his network has had the “I just automatically, with the press of a button, destroyed the software on hundreds of machines at once!” experience. And that’s with systems you can debug and control; self-propagating systems don’t even let you shut them down when you find the problem. Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.

Posted on December 5, 2005 at 2:50 PM

Still More on Sony's DRM Rootkit

This story is just getting weirder and weirder (previous posts here and here).

Sony already said that they’re stopping production of CDs with the embedded rootkit. Now they’re saying that they will pull the infected discs from stores and offer free exchanges to people who inadvertently bought them.

Sony BMG Music Entertainment said Monday it will pull some of its most popular CDs from stores in response to backlash over copy-protection software on the discs.

Sony also said it will offer exchanges for consumers who purchased the discs, which contain hidden files that leave them vulnerable to computer viruses when played on a PC.

That’s good news, but there’s more bad news. The patch Sony is distributing to remove the rootkit opens a huge security hole:

The root of the problem is a serious design flaw in Sony’s web-based uninstaller. When you first fill out Sony’s form to request a copy of the uninstaller, the request form downloads and installs a program – an ActiveX control created by the DRM vendor, First4Internet – called CodeSupport. CodeSupport remains on your system after you leave Sony’s site, and it is marked as safe for scripting, so any web page can ask CodeSupport to do things. One thing CodeSupport can be told to do is download and install code from an Internet site. Unfortunately, CodeSupport doesn’t verify that the downloaded code actually came from Sony or First4Internet. This means any web page can make CodeSupport download and install code from any URL without asking the user’s permission.
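The missing check is the standard one: ship a pinned copy of the vendor’s public key with the control, and refuse to run any downloaded code that doesn’t verify against it. Here’s a minimal sketch in Python using the cryptography library (hypothetical names; the real CodeSupport control was ActiveX, not Python):

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

def is_genuine_update(pinned_key, code: bytes, signature: bytes) -> bool:
    """Run downloaded code only if it verifies against the pinned vendor key."""
    try:
        pinned_key.verify(signature, code, padding.PKCS1v15(), hashes.SHA256())
        return True
    except InvalidSignature:
        return False

# Demo: the vendor signs an update offline; the client ships with the public
# key baked in and never trusts the network to tell it what to run.
signing_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pinned_key = signing_key.public_key()

update = b"legitimate uninstaller code"
sig = signing_key.sign(update, padding.PKCS1v15(), hashes.SHA256())

assert is_genuine_update(pinned_key, update, sig)
assert not is_genuine_update(pinned_key, b"attacker-supplied code", sig)
```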

Even more interesting is that there may be at least half a million infected computers:

Using statistical sampling methods and a secret feature of XCP that notifies Sony when its CDs are placed in a computer, [security researcher Dan] Kaminsky was able to trace evidence of infections in a sample that points to the probable existence of at least one compromised machine in roughly 568,200 networks worldwide. This does not reflect a tally of actual infections, however, and the real number could be much higher.

I say “may be at least” because the data doesn’t smell right to me. Look at the list of infected titles, and estimate what percentage of CD buyers will play them on their computers; does that seem like half a million sales to you? It doesn’t to me, although I readily admit that I don’t know the music business. Their methodology seems sound, though:

Kaminsky discovered that each of these requests leaves a trace that he could follow and track through the internet’s domain name system, or DNS. While this couldn’t directly give him the number of computers compromised by Sony, it provided him the number and location (both on the net and in the physical world) of networks that contained compromised computers. That is a number guaranteed to be smaller than the total of machines running XCP.

His research technique is called DNS cache snooping, a method of nondestructively examining patterns of DNS use. Luis Grangeia invented the technique, and Kaminsky became famous in the security community for refining it.

Kaminsky asked more than 3 million DNS servers across the net whether they knew the addresses associated with the Sony rootkit—connected.sonymusic.com, updates.xcp-aurora.com and license.suncom2.com. He used a “non-recursive DNS query” that let him peek into a server’s cache and find out if anyone else had asked that particular machine for those addresses recently.

If the DNS server said yes, it had a cached copy of the address, which means that at least one of its client computers had used it to look up Sony’s digital-rights-management site. If the DNS server said no, then Kaminsky knew for sure that no Sony-compromised machines existed behind it.
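The probe itself is simple. Here’s a sketch of that kind of cache-snooping query using the dnspython library; this follows the technique as described, not Kaminsky’s actual code, and the resolver address is a placeholder:

```python
import dns.flags    # pip install dnspython
import dns.message
import dns.query

def cache_has(server_ip: str, name: str) -> bool:
    """Non-recursively ask a DNS server whether `name` is already in its cache."""
    query = dns.message.make_query(name, "A")
    query.flags &= ~dns.flags.RD  # clear Recursion Desired: answer from cache only
    response = dns.query.udp(query, server_ip, timeout=2)
    return len(response.answer) > 0

# The three hostnames associated with the XCP rootkit, per the article.
XCP_NAMES = ("connected.sonymusic.com",
             "updates.xcp-aurora.com",
             "license.suncom2.com")

for name in XCP_NAMES:
    print(name, cache_has("192.0.2.53", name))  # placeholder resolver address
```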

The results have surprised Kaminsky himself: 568,200 DNS servers knew about the Sony addresses. With no other reason for people to visit them, that points to one or more computers behind those DNS servers that are Sony-compromised. That’s one in six DNS servers, across a statistical sampling of a third of the 9 million DNS servers Kaminsky estimates are on the net.

In any case, Sony’s rapid fall from grace is a great example of the power of blogs; it’s been fifteen days since Mark Russinovich first posted about the rootkit. In that time the news spread like a firestorm, first through the blogs, then to the tech media, and then into the mainstream media.

Posted on November 15, 2005 at 3:16 PM

The Zotob Worm

If you’ll forgive the possible comparison to hurricanes, Internet epidemics are much like severe weather: they happen randomly, they affect some segments of the population more than others, and your previous preparation determines how effective your defense is.

Zotob was the first major worm outbreak since MyDoom in January 2004. It happened quickly—less than five days after Microsoft published a critical security bulletin (its 39th of the year). Zotob’s effects varied greatly from organization to organization: some networks were brought to their knees, while others didn’t even notice.

The worm started spreading on Sunday, 14 August. Honestly, it wasn’t that big a deal, but it got a lot of play in the press because it hit several major news outlets, most notably CNN. If a news organization is personally affected by something, it’s much more likely to report extensively on it. But my company, Counterpane Internet Security, monitors more than 500 networks worldwide, and we didn’t think it was worth all the press coverage.

By the 17th, there were at least a dozen other worms that exploited the same vulnerability, both Zotob variants and others that were completely different. Most of them tried to recruit computers for bot networks, and some of the different variants warred against each other—stealing “owned” computers back and forth. If your network was infected, it was a mess.

Two weeks later, the 18-year-old who wrote the original Zotob worm was arrested, along with the 21-year-old who paid him to write it. It seems likely the person who funded the worm’s creation was not a hacker, but rather a criminal looking to profit.

The nature of worms has changed in the past few years. Previously, hackers looking for prestige or just wanting to cause damage were responsible for most worms. Today, they’re increasingly written or commissioned by criminals. By taking over computers, worms can send spam, launch denial-of-service extortion attacks, or search for credit-card numbers and other personal information.

What could you have done beforehand to protect yourself against Zotob and its kin? “Install the patch” is the obvious answer, but it’s not really a satisfactory one. There are simply too many patches. Although a single computer user can easily set up patches to automatically download and install—at least Microsoft Windows system patches—large corporate networks can’t. Far too often, patches cause other things to break.

It would be great to know which patches are actually important and which ones just sound important. Before that weekend in August, the patch that would have protected against Zotob was just another patch; by Monday morning, it was the most important thing a sysadmin could do to secure the network.

Microsoft had six new patches available on 9 August, three designated as critical (including the one that Zotob used), one important, and two moderate. Could you have guessed beforehand which one would have actually been critical? With the next patch release, will you know which ones you can put off and for which ones you need to drop everything, test, and install across your network?

Given that it’s impossible to know what’s coming beforehand, how you respond to an actual worm largely determines your defense’s effectiveness. You might need to respond quickly, and you most certainly need to respond accurately. Because it’s impossible to know beforehand what the necessary response should be, you need a process for that response. Employees come and go, so the only thing that ensures a continuity of effective security is a process. You need accurate and timely information to fuel this process. And finally, you need experts to decipher the information, determine what to do, and implement a solution.

The Zotob storm was both typical and unique. It started soon after the vulnerability was published, but I don’t think that made a difference. Even worms that use six-month-old vulnerabilities find huge swaths of the Internet unpatched. It was a surprise, but they all are.

This essay will appear in the November/December 2005 issue of IEEE Security & Privacy.

Posted on November 11, 2005 at 7:46 AM

Trusted Computing Best Practices

The Trusted Computing Group (TCG) is an industry consortium that is trying to build more secure computers. They have a lot of members, although the board of directors consists of Microsoft, Sony, AMD, Intel, IBM, Sun, HP, and two smaller companies elected on a rotating basis.

The basic idea is that you build a computer from the ground up securely, with a core hardware “root of trust” called a Trusted Platform Module (TPM). Applications can run securely on the computer, can communicate with other applications and their owners securely, and can be sure that no untrusted applications have access to their data or code.
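The root of trust rests on one small operation: a TPM’s platform configuration registers (PCRs) can only be extended, never written directly, so a PCR’s final value commits to the entire ordered chain of boot measurements. Here’s an illustrative sketch of the TPM 1.2-style extend semantics (the measured stages are stand-ins for real firmware and code hashes):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM 1.2-style extend: new PCR = SHA1(old PCR || SHA1(measurement))."""
    return hashlib.sha1(pcr + hashlib.sha1(measurement).digest()).digest()

pcr = bytes(20)  # PCRs reset to all zeros at boot
for stage in (b"firmware image", b"boot loader", b"os kernel"):
    pcr = pcr_extend(pcr, stage)  # each boot stage measures the next before running it

# The final value can be attested to a remote party; changing any stage,
# anywhere in the chain, produces a completely different PCR value.
print(pcr.hex())
```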

This sounds great, but it’s a double-edged sword. The same system that prevents worms and viruses from running on your computer might also stop you from using any legitimate software that your hardware or operating system vendor simply doesn’t like. The same system that keeps spyware from accessing your data files might also stop you from copying audio and video files. The same system that ensures that all the patches you download are legitimate might also prevent you from, well, doing pretty much anything.

(Ross Anderson has an excellent FAQ on the topic. I wrote about it back when Microsoft called it Palladium.)

In May, the Trusted Computing Group published a best practices document: “Design, Implementation, and Usage Principles for TPM-Based Platforms.” Written for users and implementers of TCG technology, the document tries to draw a line between good uses and bad uses of this technology.

The principles that TCG believes underlie the effective, useful, and acceptable design, implementation, and use of TCG technologies are the following:

  • Security: TCG-enabled components should achieve controlled access to designated critical secured data and should reliably measure and report the system’s security properties. The reporting mechanism should be fully under the owner’s control.
  • Privacy: TCG-enabled components should be designed and implemented with privacy in mind and adhere to the letter and spirit of all relevant guidelines, laws, and regulations. This includes, but is not limited to, the OECD Guidelines, the Fair Information Practices, and the European Union Data Protection Directive (95/46/EC).
  • Interoperability: Implementations and deployments of TCG specifications should facilitate interoperability. Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.
  • Portability of data: Deployment should support established principles and practices of data ownership.
  • Controllability: Each owner should have effective choice and control over the use and operation of the TCG-enabled capabilities that belong to them; their participation must be opt-in. Subsequently, any user should be able to reliably disable the TCG functionality in a way that does not violate the owner’s policy.
  • Ease-of-use: The nontechnical user should find the TCG-enabled capabilities comprehensible and usable.

It’s basically a good document, although there are some valid criticisms. I like that the document clearly states that coercive use of the technology—forcing people to use digital rights management systems, for example, are inappropriate:

The use of coercion to effectively force the use of the TPM capabilities is not an appropriate use of the TCG technology.

I like that the document tries to protect user privacy:

All implementations of TCG-enabled components should ensure that the TCG technology is not inappropriately used for data aggregation of personal information.

I wish that interoperability were more strongly enforced. The language has too much wiggle room for companies to break interoperability under the guise of security:

Furthermore, implementations and deployments of TCG specifications should not introduce any new interoperability obstacles that are not for the purpose of security.

That sounds good, but what does “security” mean in that context? Security of the user against malicious code? Security of big media against people copying music and videos? Security of software vendors against competition? The big problem with TCG technology is that it can be used to further all three of these “security” goals, and this document is where “security” should be better defined.

Complaints aside, it’s a good document and we should all hope that companies follow it. Compliance is totally voluntary, but it’s the kind of document that governments and large corporations can point to and demand that vendors follow.

But there’s something fishy going on. Microsoft is doing its best to stall the document, and to ensure that it doesn’t apply to Vista (formerly known as Longhorn), Microsoft’s next-generation operating system.

The document was first written in the fall of 2003, and went through the standard review process in early 2004. Microsoft delayed the adoption and publication of the document, demanding more review. Eventually the document was published in June of this year (with a May date on the cover).

Meanwhile, the TCG built a purely software version of the specification: Trusted Network Connect (TNC). Basically, it’s a TCG system without a TPM.

The best practices document doesn’t apply to TNC, because Microsoft (as a member of the TCG board of directors) blocked it. The excuse is that the document hadn’t been written with software-only applications in mind, so it shouldn’t apply to software-only TCG systems.

This is absurd. The document outlines best practices for how the system is used. There’s nothing in it about how the system works internally. There’s nothing unique to hardware-based systems, nothing that would be different for software-only systems. You can go through the document yourself and replace all references to “TPM” or “hardware” with “software” (or, better yet, “hardware or software”) in five minutes. There are about a dozen changes, and none of them make any meaningful difference.

The only reason I can think of for all this Machiavellian maneuvering is that the TCG board of directors is making sure that the document doesn’t apply to Vista. If the document isn’t published until after Vista is released, then obviously it doesn’t apply.

Near as I can tell, no one is following this story. No one is asking why TCG best practices apply to hardware-based systems if they’re writing software-only specifications. No one is asking why the document doesn’t apply to all TCG systems, since it’s obviously written without any particular technology in mind. And no one is asking why the TCG is delaying the adoption of any software best practices.

I believe the reason is Microsoft and Vista, but clearly there’s some investigative reporting to be done.

(A version of this essay previously appeared on CNet’s News.com and ZDNet.)

EDITED TO ADD: This comment completely misses my point. Which is odd; I thought I was pretty clear.

EDITED TO ADD: There is a thread on Slashdot on the topic.

EDITED TO ADD: The Sydney Morning Herald republished this essay. Also “The Age.”

Posted on August 31, 2005 at 8:27 AM

New Windows Vulnerability

There’s a new Windows 2000 vulnerability:

A serious flaw has been discovered in a core component of Windows 2000, with no possible work-around until it gets fixed, a security company said.

The vulnerability in Microsoft’s operating system could enable remote intruders to enter a PC via its Internet Protocol address, Marc Maiffret, chief hacking officer at eEye Digital Security, said on Wednesday. As no action on the part of the computer user is required, the flaw could easily be exploited to create a worm attack, he noted.

What may be particularly problematic with this unpatched security hole is that a work-around is unlikely, he said.

“You can’t turn this (vulnerable) component off,” Maiffret said. “It’s always on. You can’t disable it. You can’t uninstall.”

Don’t fail to notice the sensationalist explanation from eEye. This is what I call a “publicity attack” (note that the particular example in that essay is wrong): it’s an attempt by eEye Digital Security to get publicity for their company. Yes, I’m sure it’s a bad vulnerability. Yes, I’m sure Microsoft should have done more to secure their systems. But eEye isn’t blameless in this; they’re searching for vulnerabilities that make good press releases.

Posted on August 5, 2005 at 2:25 PM

Microsoft Permits Pirated Software to Receive Security Patches

Microsoft wants to make pirated software less useful by preventing it from receiving patches and updates. At the same time, it is in everyone’s best interest for all software to be more secure: legitimate and pirated. This issue has been percolating for a while, and I’ve written about it twice before. After much back and forth, Microsoft is going to do the right thing:

From now on, customers looking to get the latest add-ons to Windows will have to verify that their copy of the operating system is legit….

The only exception is for security-related patches. Regardless of whether a system passes the test, security updates will be available to all Windows users via either manual download or automatic update.

Microsoft deserves praise for this.

On the other hand, the system was cracked within 24 hours.

Posted on July 29, 2005 at 11:26 AM
