Entries Tagged "patching"


Firefox JavaScript Flaw: Real or Hoax?

Two hackers — Mischa Spiegelmock and Andrew Wbeelsoi — have announced a flaw in Firefox’s JavaScript:

An attacker could commandeer a computer running the browser simply by crafting a Web page that contains some malicious JavaScript code, Mischa Spiegelmock and Andrew Wbeelsoi said in a presentation at the ToorCon hacker conference here. The flaw affects Firefox on Windows, Apple Computer’s Mac OS X and Linux, they said.

More interesting was this piece:

The hackers claim they know of about 30 unpatched Firefox flaws. They don’t plan to disclose them, instead holding onto the bugs.

Jesse Ruderman, a Mozilla security staffer, attended the presentation and was called up on the stage with the two hackers. He attempted to persuade the presenters to responsibly disclose flaws via Mozilla’s bug bounty program instead of using them for malicious purposes such as creating networks of hijacked PCs, called botnets.

“I do hope you guys change your minds and decide to report the holes to us and take away $500 per vulnerability instead of using them for botnets,” Ruderman said.

The two hackers laughed off the comment. “It is a double-edged sword, but what we’re doing is really for the greater good of the Internet. We’re setting up communication networks for black hats,” Wbeelsoi said.

Sounds pretty bad? But maybe it’s all a hoax:

Spiegelmock, a developer at Six Apart, a blog software company in San Francisco, now says the ToorCon talk was meant “to be humorous” and insists the code presented at the conference cannot result in code execution.

Spiegelmock’s strange about-face comes as Mozilla’s security response team is racing to piece together information from the ToorCon talk to figure out how to fix the issue.

[…]

On the claim that there are 30 undisclosed Firefox vulnerabilities, Spiegelmock pinned that entirely on co-presenter Wbeelsoi. “I have no undisclosed Firefox vulnerabilities. The person who was speaking with me made this claim, and I honestly have no idea if he has them or not. I apologize to everyone involved, and I hope I have made everything as clear as possible,” Spiegelmock added.

I vote: hoax, with maybe some seeds of real.

Posted on October 4, 2006 at 7:04 AM

FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news this week is that Microsoft has patched its security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this — and this is going to be the norm for DRM systems — is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Programming ATMs to Believe $20 Bills Are $5 Bills

Clever attack:

Last month, a man reprogrammed an automated teller machine at a gas station on Lynnhaven Parkway to spit out four times as much money as it should.

He then made off with an undisclosed amount of cash.

No one noticed until nine days later, when a customer told the clerk at a Crown gas station that the machine was disbursing more money than it should. Police are now investigating the incident as fraud.

Police spokeswoman Rene Ball said the first withdrawal occurred at 6:17 p.m. Aug. 19. Surveillance footage documented a man about 5-foot-8 with a thin build walking into the gas station on the 2400 block of Lynnhaven Parkway and swiping an ATM card.

The man then punched a series of numbers on the machine’s keypad, breaking the security code. The ATM was programmed to disburse $20 bills. The man reprogrammed the machine so it recorded each $20 bill as a $5 debit to his account.

The suspect returned to the gas station a short time later and took more money, but authorities did not say how much. Because the account was pre-paid and the card could be purchased at several places, police are not sure who is behind the theft.

What’s weird is that it seems that this is easy. The ATM is a Tranax Mini Bank 1500. And you can buy the manuals from the Tranax website. And they’re useful for this sort of thing:

I am holding in my hands a legitimately obtained copy of the manual. There are a lot of security sensitive things inside of this manual. As promised, I am not going to reveal them, but there are:

  • Instructions on how to enter the diagnostic mode
  • Default passwords
  • Default combinations for the safe
Do not ask me for them. If you maintain one of these devices, make sure that you are not using the default password. If you are, change it immediately.

This is from an eWeek article:

“If you get your hand on this manual, you can basically reconfigure the ATM if the default password was not changed. My guess is that most of these mini-bank terminals are sitting around with default passwords untouched,” Goldsmith said.

Officials at Tranax did not respond to eWEEK requests for comment. According to a note on the company’s Web site, Tranax has shipped 70,000 ATMs, self-service terminals and transactional kiosks around the country. The majority of those shipments are of the flagship Mini-Bank 1500 machine that was rigged in the Virginia Beach heist.

So, as long as you can use an account that’s not traceable back to you, and you disguise yourself for the ATM cameras, this is a pretty easy crime.

eWeek claims you can get a copy of the manual simply by Googling for it. (Here’s one on eBay.)

And Tranax is promising a fix that will force operators to change the default passwords. But honestly, what’s the likelihood that someone who can’t be bothered to change the default password will take the time to install a software patch?

EDITED TO ADD (9/22): Here’s the manual.

Posted on September 22, 2006 at 7:04 AM

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory — and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching — and patching quickly — became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.
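Incidentally, the “second Tuesday of the month” date is easy to compute. Here’s a minimal Python sketch (the function name is my own, not anything Microsoft publishes):

```python
import datetime

def patch_tuesday(year, month):
    """Return the date of the second Tuesday of the given month."""
    first = datetime.date(year, month, 1)
    # weekday(): Monday is 0, so Tuesday is 1.
    days_until_tuesday = (1 - first.weekday()) % 7
    # First Tuesday of the month, plus one more week.
    return first + datetime.timedelta(days=days_until_tuesday + 7)

print(patch_tuesday(2006, 9))  # 2006-09-12
```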

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There’s no better example of this principle in action than Microsoft’s behavior around the vulnerability in its digital rights management software, PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11beta2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the breaks.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache — a.k.a. Jon Ellch — demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card — which they declined to identify — was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk — if the exploit is real — whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers — mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days — giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

Printer Security

At BlackHat last week, Brendan O’Connor warned about the dangers of insecure printers:

“Stop treating them as printers. Treat them as servers, as workstations,” O’Connor said in his presentation on Thursday. Printers should be part of a company’s patch program and be carefully managed, not forgotten by IT and handled by the most junior person on staff, he said.

I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computers.

Once a printer was under his control, O’Connor said he would be able to use it to map an organization’s internal network — a situation that could help stage further attacks. The breach gave him access to any of the information printed, copied or faxed from the device. He could also change the internal job counter — which can reduce, or increase, a company’s bill if the device is leased, he said.

The printer break-in also enables a number of practical jokes, such as sending print and scan jobs to arbitrary workers’ desktops, O’Connor said. Also, devices could be programmed to include, for example, an image of a paper clip on every print, fax or copy, ultimately driving office staffers to take the machine apart looking for the paper clip.

Getting copies of all printed documents is definitely a security vulnerability, but I think the biggest threat is that the printers are inside the network, and are a more-trusted launching pad for onward attacks.

One of the weaknesses in the Xerox system is an unsecured boot loader, the technology that loads the basic software on the device, O’Connor said. Other flaws lie in the device’s Web interface and in the availability of services such as the Simple Network Management Protocol and Telnet, he said.

O’Connor informed Xerox of the problems in January. The company did issue a fix for its WorkCentre 200 series, it said in a statement. “Thanks to Brendan’s efforts, we were able to post a patch for our customers in mid-January which fixes the issues,” a Xerox representative said in an e-mailed statement.

One of the reasons this is a particularly nasty problem is that people don’t update their printer software. Want to bet approximately 0% of the printer’s users installed that patch? And what about printers whose code can’t be patched?

EDITED TO ADD (8/7): O’Connor’s name corrected.

Posted on August 7, 2006 at 10:59 AM

Hacked MySpace Server Infects a Million Computers with Malware

According to The Washington Post:

An online banner advertisement that ran on MySpace.com and other sites over the past week used a Windows security flaw to infect more than a million users with spyware when people merely browsed the sites with unpatched versions of Windows….

Clever attack.

EDITED TO ADD (7/27): It wasn’t MySpace that was hacked, but a server belonging to the third-party advertising service that MySpace uses. The ad probably appeared on other websites as well, but MySpace seems to have been the biggest one.

EDITED TO ADD (8/5): Ed Felten comments.

Posted on July 24, 2006 at 6:46 AM

What if Your Vendor Won't Sell You a Security Upgrade?

Good question:

More frightening than my experience is the possibility that the company might do this to an existing customer. What good is a security product if the vendor refuses to sell you service on it? Without updates, most of these products are barely useful as doorstops.

The article demonstrates that a vendor might refuse to sell you a product, for reasons you can’t understand. And that you might not get any warning of that fact. The moral is that you’re not only buying a security product, you’re buying a security company.

In our tests, we look at products, not companies. Things such as training, finances and corporate style don’t come into it. But when it comes to buying products, our tests aren’t enough. It’s important to investigate all those peripheral aspects of the vendor before you sign a purchase order. I was reminded of that the hard way.

Posted on April 12, 2006 at 12:40 PM

Benevolent Worms

Yet another story about benevolent worms and how they can secure our networks. This idea shows up every few years. (I wrote about it in 2000, and again in 2003.) This quote (emphasis mine) from the article shows what the problem is:

Simulations show that the larger the network grows, the more efficient this scheme should be. For example, if a network has 50,000 nodes (computers), and just 0.4% of those are honeypots, just 5% of the network will be infected before the immune system halts the virus, assuming the fix works properly. But, a 200-million-node network — with the same proportion of honeypots — should see just 0.001% of machines get infected.
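The article doesn’t give the researchers’ model, but the scaling effect itself is easy to reproduce with a toy simulation (this sketch is my own simplification; its names and parameters are assumptions, not the study’s): a random-scanning worm probes nodes one at a time, and the first probe that lands on a honeypot triggers containment. Since the number of probes before detection depends only on the honeypot fraction, not on network size, the infected fraction shrinks as the network grows:

```python
import random

def infected_fraction(n_nodes, honeypot_frac, trials=200, seed=1):
    """Toy model: count infections until a random probe hits a honeypot,
    then average the infected fraction over many trials."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        infections = 0
        while rng.random() >= honeypot_frac:  # probe missed every honeypot
            infections += 1
        total += infections / n_nodes  # honeypot hit: worm halted
    return total / trials

small = infected_fraction(50_000, 0.004)
large = infected_fraction(200_000_000, 0.004)
print(small, large)  # the larger network sees a far smaller infected fraction
```

In this crude model both networks see roughly the same absolute number of infections (about 1/0.004, or 250, probes before a honeypot is hit), so the infected fraction scales inversely with network size — the qualitative effect the article describes, even if the absolute percentages differ.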

This is from my 2003 essay:

A worm is not “bad” or “good” depending on its payload. Viral propagation mechanisms are inherently bad, and giving them beneficial payloads doesn’t make things better. A worm is no tool for any rational network administrator, regardless of intent.

A good software distribution mechanism has the following characteristics:

  1. People can choose the options they want.
  2. Installation is adapted to the host it’s running on.
  3. It’s easy to stop an installation in progress, or uninstall the software.
  4. It’s easy to know what has been installed where.

A successful worm, on the other hand, runs without the consent of the user. It has a small amount of code, and once it starts to spread, it is self-propagating, and will keep going automatically until it’s halted.

These characteristics are simply incompatible. Giving the user more choice, making installation flexible and universal, allowing for uninstallation — all of these make worms harder to propagate. Designing a better software distribution mechanism makes it a worse worm, and vice versa. On the other hand, making the worm quieter and less obvious to the user, making it smaller and easier to propagate, and making it impossible to contain all make for bad software distribution.

All of this makes worms easy to get wrong and hard to recover from. Experimentation, most of it involuntary, proves that worms are very hard to debug successfully: in other words, once a worm starts spreading, it’s hard to predict exactly what it will do. Some viruses were written to propagate harmlessly, but did damage — ranging from crashed machines to clogged networks — because of bugs in their code. Many worms were written to do damage and turned out to be harmless (which is even more revealing).

Intentional experimentation by well-meaning system administrators proves that in your average office environment, the code that successfully patches one machine won’t work on another. Indeed, sometimes the results are worse than any threat of external attack. Combining a tricky problem with a distribution mechanism that’s impossible to debug and difficult to control is fraught with danger. Every system administrator who’s ever distributed software automatically on his network has had the “I just automatically, with the press of a button, destroyed the software on hundreds of machines at once!” experience. And that’s with systems you can debug and control; self-propagating systems don’t even let you shut them down when you find the problem. Patching systems is fundamentally a human problem, and beneficial worms are a technical solution that doesn’t work.

Posted on December 5, 2005 at 2:50 PM
