Entries Tagged "vulnerabilities"

Page 36 of 45

Firefox JavaScript Flaw: Real or Hoax?

Two hackers—Mischa Spiegelmock and Andrew Wbeelsoi—have announced a flaw in Firefox’s JavaScript:

An attacker could commandeer a computer running the browser simply by crafting a Web page that contains some malicious JavaScript code, Mischa Spiegelmock and Andrew Wbeelsoi said in a presentation at the ToorCon hacker conference here. The flaw affects Firefox on Windows, Apple Computer’s Mac OS X and Linux, they said.

More interesting was this piece:

The hackers claim they know of about 30 unpatched Firefox flaws. They don’t plan to disclose them, instead holding onto the bugs.

Jesse Ruderman, a Mozilla security staffer, attended the presentation and was called up on the stage with the two hackers. He attempted to persuade the presenters to responsibly disclose flaws via Mozilla’s bug bounty program instead of using them for malicious purposes such as creating networks of hijacked PCs, called botnets.

“I do hope you guys change your minds and decide to report the holes to us and take away $500 per vulnerability instead of using them for botnets,” Ruderman said.

The two hackers laughed off the comment. “It is a double-edged sword, but what we’re doing is really for the greater good of the Internet. We’re setting up communication networks for black hats,” Wbeelsoi said.

Sounds pretty bad. But maybe it’s all a hoax:

Spiegelmock, a developer at Six Apart, a blog software company in San Francisco, now says the ToorCon talk was meant “to be humorous” and insists the code presented at the conference cannot result in code execution.

Spiegelmock’s strange about-face comes as Mozilla’s security response team is racing to piece together information from the ToorCon talk to figure out how to fix the issue.

[…]

On the claim that there are 30 undisclosed Firefox vulnerabilities, Spiegelmock pinned that entirely on co-presenter Wbeelsoi. “I have no undisclosed Firefox vulnerabilities. The person who was speaking with me made this claim, and I honestly have no idea if he has them or not. I apologize to everyone involved, and I hope I have made everything as clear as possible,” Spiegelmock added.

I vote: hoax, with maybe some seeds of truth.

Posted on October 4, 2006 at 7:04 AM

Voting Software and Secrecy

Here’s a quote from an elections official in Los Angeles:

“The software developed for InkaVote is proprietary software. All the software developed by vendors is proprietary. I think it’s odd that some people don’t want it to be proprietary. If you give people the open source code, they would have the directions on how to hack into it. We think the proprietary nature of the software is good for security.”

It’s funny, really. What she should be saying is something like: “I think it’s odd that everyone who has any expertise in computer security doesn’t want the software to be proprietary. Speaking as someone who knows nothing about computer security, I think that secrecy is an asset.” That’s a more realistic quote.

As I’ve said many times, secrecy is not the same as security. And in many cases, secrecy hurts security.

Posted on October 2, 2006 at 7:10 AM

The Hidden Benefits of Network Attack

An anonymous note in the Harvard Law Review argues that there is a significant benefit from Internet attacks:

This Note argues that computer networks, particularly the Internet, can be thought of as having immune systems that are strengthened by certain attacks. Exploitation of security holes prompts users and vendors to close those holes, vendors to emphasize security in system development, and users to adopt improved security practices. This constant strengthening of security reduces the likelihood of a catastrophic attack—one that would threaten national or even global security. In essence, certain cybercrime can create more benefits than costs, and cybercrime policy should take this concept into account.

Posted on September 26, 2006 at 6:42 AM

Programming ATMs to Believe $20 Bills Are $5 Bills

Clever attack:

Last month, a man reprogrammed an automated teller machine at a gas station on Lynnhaven Parkway to spit out four times as much money as it should.

He then made off with an undisclosed amount of cash.

No one noticed until nine days later, when a customer told the clerk at a Crown gas station that the machine was disbursing more money than it should. Police are now investigating the incident as fraud.

Police spokeswoman Rene Ball said the first withdrawal occurred at 6:17 p.m. Aug. 19. Surveillance footage documented a man about 5-foot-8 with a thin build walking into the gas station on the 2400 block of Lynnhaven Parkway and swiping an ATM card.

The man then punched a series of numbers on the machine’s keypad, breaking the security code. The ATM was programmed to disburse $20 bills. The man reprogrammed the machine so it recorded each $20 bill as a $5 debit to his account.

The suspect returned to the gas station a short time later and took more money, but authorities did not say how much. Because the account was pre-paid and the card could be purchased at several places, police are not sure who is behind the theft.

What’s weird is that it seems that this is easy. The ATM is a Tranax Mini Bank 1500. And you can buy the manuals from the Tranax website. And they’re useful for this sort of thing:

I am holding in my hands a legitimately obtained copy of the manual. There are a lot of security sensitive things inside of this manual. As promised, I am not going to reveal them, but there are:

  • Instructions on how to enter the diagnostic mode
  • Default passwords
  • Default combinations for the safe
Do not ask me for them. If you maintain one of these devices, make sure that you are not using the default password. If you are, change it immediately.

This is from an eWeek article:

“If you get your hand on this manual, you can basically reconfigure the ATM if the default password was not changed. My guess is that most of these mini-bank terminals are sitting around with default passwords untouched,” Goldsmith said.

Officials at Tranax did not respond to eWEEK requests for comment. According to a note on the company’s Web site, Tranax has shipped 70,000 ATMs, self-service terminals and transactional kiosks around the country. The majority of those shipments are of the flagship Mini-Bank 1500 machine that was rigged in the Virginia Beach heist.

So, as long as you can use an account that’s not traceable back to you, and you disguise yourself for the ATM cameras, this is a pretty easy crime.
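To make the arithmetic concrete, here is a minimal sketch of the misconfiguration (the $100 withdrawal amount and the variable names are illustrative; the report only says $20 bills were recorded as $5 debits):

    # Illustrative sketch of the denomination misconfiguration.
    # The cassette actually holds $20 bills, but the reprogrammed ATM
    # believes each note is worth $5, so it dispenses too many notes.

    actual_note_value = 20      # what the cassette really contains
    configured_note_value = 5   # what the machine was told it contains

    requested = 100                                        # amount debited to the card
    notes_dispensed = requested // configured_note_value   # 20 notes
    cash_received = notes_dispensed * actual_note_value    # $400 in hand

    print(f"debited ${requested}, received ${cash_received}")  # a 4x payout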

eWeek claims you can get a copy of the manual simply by Googling for it. (Here’s one on eBay.)

And Tranax is promising a fix that will force operators to change the default passwords. But honestly, what’s the likelihood that someone who can’t be bothered to change the default password will take the time to install a software patch?
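Tranax’s promised fix presumably amounts to refusing to operate until the factory default has been replaced. A rough sketch of that idea, with made-up default values and function names (the real defaults are, sensibly, not reproduced here):

    # Hypothetical sketch of a forced password change on first use.
    FACTORY_DEFAULTS = {"000000", "555555"}   # illustrative values only

    def require_non_default_password(current_password, prompt_for_new):
        # Refuse to proceed until the operator supplies a non-default password.
        if current_password not in FACTORY_DEFAULTS:
            return current_password
        new_password = prompt_for_new()
        while new_password in FACTORY_DEFAULTS or len(new_password) < 6:
            new_password = prompt_for_new()
        return new_password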

EDITED TO ADD (9/22): Here’s the manual.

Posted on September 22, 2006 at 7:04 AM

New Diebold Vulnerability

Ed Felten and his team at Princeton have analyzed a Diebold machine:

This paper presents a fully independent security study of a Diebold AccuVote-TS voting machine, including its hardware and software. We obtained the machine from a private party. Analysis of the machine, in light of real election procedures, shows that it is vulnerable to extremely serious attacks. For example, an attacker who gets physical access to a machine or its removable memory card for as little as one minute could install malicious code; malicious code on a machine could steal votes undetectably, modifying all records, logs, and counters to be consistent with the fraudulent vote count it creates. An attacker could also create malicious code that spreads automatically and silently from machine to machine during normal election activities—a voting-machine virus. We have constructed working demonstrations of these attacks in our lab. Mitigating these threats will require changes to the voting machine’s hardware and software and the adoption of more rigorous election procedures.

(Executive summary. Full paper. FAQ. Video demonstration.)

Salon said:

Diebold has repeatedly disputed such findings as speculation. But the Princeton study appears to demonstrate conclusively that a single malicious person could insert a virus into a machine and flip votes. The study also reveals a number of other vulnerabilities, including that voter access cards used on Diebold systems could be created inexpensively on a personal laptop computer, allowing people to vote as many times as they wish.

More news stories.

Posted on September 14, 2006 at 3:32 PM

Is There Strategic Software?

If you define “critical infrastructure” as “things essential for the functioning of a society and economy,” then software is critical infrastructure. For many companies and individuals, if their computers stop working, they stop working.

It’s a situation that snuck up on us. Everyone knew that the software that flies 747s or targets cruise missiles was critical, but who thought of the airlines’ weight and balance computers, or the operating system running the databases and spreadsheets that determine which cruise missiles get shipped where?

And over the years, common, off-the-shelf, personal- and business-grade software has been used for more and more critical applications. Today we find ourselves in a situation where a well-positioned flaw in Windows, Cisco routers or Apache could seriously affect the economy.

It’s perfectly rational to assume that some programmers—a tiny minority, I’m sure—are deliberately adding vulnerabilities and back doors into the code they write. I’m actually kind of amazed that back doors secretly added by the CIA/NSA, MI5, the Chinese, Mossad and others don’t conflict with each other. Even if these groups aren’t infiltrating software companies with back doors, you can be sure they’re scouring products for vulnerabilities they can exploit, if necessary. On the other hand, we’re already living in a world where dozens of new flaws are discovered in common software products weekly, and the economy is humming along. But we’re not talking about this month’s worm from Asia or new phishing software from the Russian mafia—we’re talking national intelligence organizations. “Infowar” is an overhyped term, but the next war will have a cyberspace component, and these organizations wouldn’t be doing their jobs if they weren’t preparing for it.

Marcus is 100 percent correct when he says it’s simply too late to do anything about it. The software industry is international, and no country can start demanding domestic-only software and expect to get anywhere. Nor would that actually solve the problem, which is more about the allegiance of millions of individual programmers than which country they happen to inhabit.

So, what to do? The key here is to remember the real problem: current commercial software practices are not secure enough to reliably detect and delete deliberately inserted malicious code. Once you understand this, you’ll drop the red herring arguments that led to CheckPoint not being able to buy Sourcefire and concentrate on the real solution: defense in depth.

In theory, security software is an after-the-fact kludge because the underlying OS and apps are riddled with vulnerabilities. If your software were written properly, you wouldn’t need a firewall—right?

If we were to get serious about critical infrastructure, we’d recognize it’s all critical and start building security software to protect it. We’d build our security based on the principles of safe failure; we’d assume security would fail and make sure it’s OK when it does. We’d use defense in depth and compartmentalization to minimize the effects of failure. Basically, we’d do everything we’re supposed to do now to secure our networks.
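As a rough illustration of what “safe failure” means in code (a hypothetical sketch, not taken from any particular product): an authorization check that fails closed denies the request whenever the security layer itself breaks, so a failure degrades into inconvenience rather than exposure.

    # Hypothetical sketch: a fail-closed authorization check.
    # If the policy service is unreachable or throws, we deny access
    # instead of defaulting to "allow" -- failure leaves the system safe.

    def is_authorized(user, resource, policy_service):
        try:
            return bool(policy_service.check(user, resource))
        except Exception:
            # A broken security layer results in denial, which another
            # layer (logging, alerting) can then surface for review.
            return False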

It’d be expensive, probably prohibitively so. Maybe it would be easier to continue to ignore the problem, or at least manage geopolitics so that no national military wants to take us down.

This is the second half of a point/counterpoint I did with Marcus Ranum (here’s his half) for the September 2006 issue of Information Security Magazine.

Posted on September 12, 2006 at 10:38 AM

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache—a.k.a. Jon Ellch—demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card—which they declined to identify—was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk—if the exploit is real—whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers—mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days—giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

HSBC Insecurity Hype

The Guardian has the story:

One of Britain’s biggest high street banks has left millions of online bank accounts exposed to potential fraud because of a glaring security loophole, the Guardian has learned.

The defect in HSBC’s online banking system means that 3.1 million UK customers registered to use the service have been vulnerable to attack for at least two years. One computing expert called the lapse “scandalous”.

The discovery was made by a group of researchers at Cardiff University, who found that anyone exploiting the flaw was guaranteed to be able to break into any account within nine attempts.

Sounds pretty bad.

But look at this:

The flaw, which is not being detailed by the Guardian, revolves around the way HSBC customers access their web-based banking service. Criminals using so-called “keyloggers” – readily available gadgets or viruses which record every keystroke made on a target computer – can easily deduce the data needed to gain unfettered access to accounts in just a few attempts.

So, the “scandalous” flaw is that an attacker who already has a keylogger installed on someone’s computer can break into his HSBC account. Seems to me if an attacker has a keylogger installed on someone’s computer, then he’s got all sorts of security issues.

If this is the biggest flaw in HSBC’s login authentication system, I think they’re doing pretty good.

Posted on August 14, 2006 at 7:06 AM

