Entries Tagged "disclosure"


The DNS Vulnerability

Despite the best efforts of the security community, the details of a critical internet vulnerability discovered by Dan Kaminsky about six months ago have leaked. Hackers are racing to produce exploit code, and network operators who haven’t already patched the hole are scrambling to catch up. The whole mess is a good illustration of the problems with researching and disclosing flaws like this.

The details of the vulnerability aren’t important, but basically it’s a form of DNS cache poisoning. The DNS system is what translates domain names people understand, like www.schneier.com, to IP addresses computers understand: 204.11.246.1. There is a whole family of vulnerabilities where the DNS system on your computer is fooled into thinking that the IP address for www.badsite.com is really the IP address for www.goodsite.com—there’s no way for you to tell the difference—and that allows the criminals at www.badsite.com to trick you into doing all sorts of things, like giving up your bank account details. Kaminsky discovered a particularly nasty variant of this cache-poisoning attack.
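To make the mechanics concrete, here is a toy sketch, in Python, of why blind cache poisoning works against a resolver that will accept any response matching the 16-bit transaction ID of an outstanding query: an off-path attacker floods forged answers and hopes to guess the ID before the legitimate reply arrives. This illustrates the general class of attack, not Kaminsky's actual exploit; the function names and packet counts are made up.

```python
import random

# Toy model of blind DNS spoofing: the resolver accepts the first response
# whose 16-bit transaction ID matches its outstanding query.  Illustrative
# simulation only; not an exploit.

def attacker_wins(forged_packets: int) -> bool:
    """One poisoning attempt: the attacker blasts `forged_packets` forged
    responses, each guessing the resolver's transaction ID at random."""
    txid = random.randrange(2 ** 16)  # ID chosen by the resolver for its query
    return any(random.randrange(2 ** 16) == txid for _ in range(forged_packets))

def success_rate(forged_packets: int, trials: int = 1000) -> float:
    """Estimate the per-query poisoning probability by simulation."""
    return sum(attacker_wins(forged_packets) for _ in range(trials)) / trials

if __name__ == "__main__":
    for n in (100, 1000, 10000):
        print(f"{n:>6} forged packets per query -> "
              f"~{success_rate(n):.1%} chance of winning the race")
```

The per-query odds look small, but Kaminsky's refinement was to force the resolver to issue a stream of fresh queries, for random nonexistent subdomains, so even a modest per-query probability compounds into a reliable attack very quickly.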

Here’s the way the timeline was supposed to work: Kaminsky discovered the vulnerability about six months ago, and quietly worked with vendors to patch it. (There’s a fairly straightforward fix, although the implementation nuances are complicated.) Of course, this meant describing the vulnerability to them; why would companies like Microsoft and Cisco believe him otherwise? On July 8, he held a press conference to announce the vulnerability—but not the details—and reveal that a patch was available from a long list of vendors. We would all have a month to patch, and Kaminsky would release details of the vulnerability at the BlackHat conference early next month.

Of course, the details leaked. How isn’t important; it could have leaked a zillion different ways. Too many people knew about it for it to remain secret. Others who knew the general idea were too smart not to speculate on the details. I’m kind of amazed the details remained secret for this long; undoubtedly it had leaked into the underground community before the public leak two days ago. So now everyone who back-burnered the problem is rushing to patch, while the hacker community is racing to produce working exploits.

What’s the moral here? It’s easy to condemn Kaminsky: If he had shut up about the problem, we wouldn’t be in this mess. But that’s just wrong. Kaminsky found the vulnerability by accident. There’s no reason to believe he was the first one to find it, and it’s ridiculous to believe he would be the last. Don’t shoot the messenger. The problem is with the DNS protocol; it’s insecure.

The real lesson is that the patch treadmill doesn’t work, and it hasn’t for years. This cycle of finding security holes and rushing to patch them before the bad guys exploit those vulnerabilities is expensive, inefficient and incomplete. We need to design security into our systems right from the beginning. We need assurance. We need security engineers involved in system design. This process won’t prevent every vulnerability, but it’s much more secure—and cheaper—than the patch treadmill we’re all on now.

What a security engineer brings to the problem is a particular mindset. He thinks about systems from a security perspective. It’s not that he discovers all possible attacks before the bad guys do; it’s more that he anticipates potential types of attacks, and defends against them even if he doesn’t know their details. I see this all the time in good cryptographic designs. It’s over-engineering based on intuition, but if the security engineer has good intuition, it generally works.

Kaminsky’s vulnerability is a perfect example of this. Years ago, cryptographer Daniel J. Bernstein looked at DNS security and decided that Source Port Randomization was a smart design choice. That’s exactly the work-around being rolled out now following Kaminsky’s discovery. Bernstein didn’t discover Kaminsky’s attack; instead, he saw a general class of attacks and realized that this enhancement could protect against them. Consequently, the DNS program he wrote in 2000, djbdns, doesn’t need to be patched; it’s already immune to Kaminsky’s attack.
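As a back-of-the-envelope sketch of why that work-around helps: randomizing the UDP source port forces a blind spoofer to match roughly another 16 bits on top of the 16-bit transaction ID. The numbers below are approximations, assuming a typical ephemeral port range.

```python
# Rough effect of source port randomization on blind DNS spoofing.  A forged
# response must carry the right 16-bit transaction ID; with port randomization
# it must also arrive at the query's randomly chosen UDP source port.

TXID_SPACE = 2 ** 16           # possible transaction IDs
PORT_SPACE = 2 ** 16 - 1024    # roughly the usable ephemeral port range

print(f"Search space without port randomization: {TXID_SPACE:,}")
print(f"Search space with port randomization:    {TXID_SPACE * PORT_SPACE:,}")
```

That doesn't fix the protocol, but it raises the cost of blind spoofing by more than four orders of magnitude.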

That’s what a good design looks like. It’s not just secure against known attacks; it’s also secure against unknown attacks. We need more of this, not just on the internet but in voting machines, ID cards, transportation payment cards … everywhere. Stop assuming that systems are secure unless demonstrated insecure; start assuming that systems are insecure unless designed securely.

This essay previously appeared on Wired.com.

EDITED TO ADD (8/7): Seems like the flaw is much worse than we thought.

EDITED TO ADD (8/13): Someone else discovered the vulnerability first.

Posted on July 29, 2008 at 6:01 AM

Locksmiths Hate Computer Geeks Who Learn Lockpicking

They do:

Hobby groups throughout North America have cracked supposedly unbeatable locks. Mr. Nekrep, who maintains a personal collection of more than 300 locks, has demonstrated online how to open a Kensington laptop lock using Scotch tape and a Post-it note. Another Lockpicking101.com member discovered the well-publicized method of opening Kryptonite bike locks with a ball-point pen, a revelation that prompted Kryptonite to replace all of its compromised locks.

Other lock manufacturers haven’t admitted their flaws so readily. Marc Tobias, a lawyer and security expert, recently shook up the lock-picking community by publishing a detailed analysis of how to crack the uncrackable: Medeco locks.

“We’ve figured out how to break them in as little as 30 seconds,” he said. “[Medeco] won’t admit it, though. They still believe in security through obscurity. But by not fixing the problems we identify, lock-makers are putting the public at risk. They have a duty to disclose vulnerabilities. If they don’t, we will.”

Posted on July 17, 2008 at 1:30 PM

The Ethics of Vulnerability Research

The standard way to take control of someone else’s computer is by exploiting a vulnerability in a software program on it. This was true in the 1960s when buffer overflows were first exploited to attack computers. It was true in 1988 when the Morris worm exploited a Unix vulnerability to attack computers on the Internet, and it’s still how most modern malware works.

Vulnerabilities are software mistakes—mistakes in specification and design, but mostly mistakes in programming. Any large software package will have thousands of mistakes. These vulnerabilities lie dormant in our software systems, waiting to be discovered. Once discovered, they can be used to attack systems. This is the point of security patching: eliminating known vulnerabilities. But many systems don’t get patched, so the Internet is filled with known, exploitable vulnerabilities.

New vulnerabilities are hot commodities. A hacker who discovers one can sell it on the black market, blackmail the vendor with disclosure, or simply publish it without regard to the consequences. Even if he does none of these, the mere fact the vulnerability is known by someone increases the risk to every user of that software. Given that, is it ethical to research new vulnerabilities?

Unequivocally, yes. Despite the risks, vulnerability research is enormously valuable. Security is a mindset, and looking for vulnerabilities nurtures that mindset. Deny practitioners this vital learning tool, and security suffers accordingly.

Security engineers see the world differently than other engineers. Instead of focusing on how systems work, they focus on how systems fail, how they can be made to fail, and how to prevent—or protect against—those failures. Most software vulnerabilities don’t ever appear in normal operations, only when an attacker deliberately exploits them. So security engineers need to think like attackers.

People without the mindset sometimes think they can design security products, but they can’t. And you see the results all over society—in snake-oil cryptography, software, Internet protocols, voting machines, and fare card and other payment systems. Many of these systems had someone in charge of “security” on their teams, but it wasn’t someone who thought like an attacker.

This mindset is difficult to teach, and may be something you’re born with or not. But in order to train people possessing the mindset, they need to search for and find security vulnerabilities—again and again and again. And this is true regardless of the domain. Good cryptographers discover vulnerabilities in others’ algorithms and protocols. Good software security experts find vulnerabilities in others’ code. Good airport security designers figure out new ways to subvert airport security. And so on.

This is so important that when someone shows me a security design by someone I don’t know, my first question is, “What has the designer broken?” Anyone can design a security system that he cannot break. So when someone announces, “Here’s my security system, and I can’t break it,” your first reaction should be, “Who are you?” If he’s someone who has broken dozens of similar systems, his system is worth looking at. If he’s never broken anything, the chance is zero that it will be any good.

Vulnerability research is vital because it trains our next generation of computer security experts. Yes, newly discovered vulnerabilities in software and airports put us at risk, but they also give us more realistic information about how good the security actually is. And yes, there are more and less responsible—and more and less legal—ways to handle a new vulnerability. But the bad guys are constantly searching for new vulnerabilities, and if we have any hope of securing our systems, we need the good guys to be at least as competent. To me, the question isn’t whether it’s ethical to do vulnerability research. If someone has the skill to analyze and provide better insights into the problem, the question is whether it is ethical for him not to do vulnerability research.

This was originally published in InfoSecurity Magazine, as part of a point-counterpoint with Marcus Ranum. You can read Marcus’s half here.

Posted on May 14, 2008 at 11:29 AM

Risk of Knowing Too Much About Risk

Interesting:

Dread is a powerful force. The problem with dread is that it leads to terrible decision-making.

Slovic says all of this results from how our brains process risk, which is in two ways. The first is intuitive, emotional and experience based. Not only do we fear more what we can’t control, but we also fear more what we can imagine or what we experience. This seems to be an evolutionary survival mechanism. In the presence of uncertainty, fear is a valuable defense. Our brains react emotionally, generate anxiety and tell us, “Remember the news report that showed what happened when those other kids took the bus? Don’t put your kids on the bus.”

The second way we process risk is analytical: we use probability and statistics to override, or at least prioritize, our dread. That is, our brain plays devil’s advocate with its initial intuitive reaction, and tries to say, “I know it seems scary, but eight times as many people die in cars as they do on buses. In fact, only one person dies on a bus for every 500 million miles buses travel. Buses are safer than cars.”

Unfortunately for us, that’s often not the voice that wins. Intuitive risk processors can easily overwhelm analytical ones, especially in the presence of those etched-in images, sounds and experiences. Intuition is so strong, in fact, that if you presented someone who had experienced a bus accident with factual risk analysis about the relative safety of buses over cars, it’s highly possible that they’d still choose to drive their kids to school, because their brain washes them in those dreadful images and reminds them that they control a car but don’t control a bus. A car just feels safer. “We have to work real hard in the presence of images to get the analytical part of risk response to work in our brains,” says Slovic. “It’s not easy at all.”

And we’re making it harder by disclosing more risks than ever to more people than ever. Not only does all of this disclosure make us feel helpless, but it also gives us ever more of those images and experiences that trigger the intuitive response without analytical rigor to override the fear. Slovic points to several recent cases where reason has lost to fear: The sniper who terrorized Washington D.C.; pathogenic threats like MRSA and brain-eating amoeba. Even the widely publicized drunk-driving death of a baseball player this year led to decisions that, from a risk perspective, were irrational.

Posted on March 6, 2008 at 6:24 AM

Hacking Power Networks

The CIA unleashed a big one at a SANS conference:

On Wednesday, in New Orleans, US Central Intelligence Agency senior analyst Tom Donahue told a gathering of 300 US, UK, Swedish, and Dutch government officials and engineers and security managers from electric, water, oil & gas and other critical industry asset owners from all across North America, that “We have information, from multiple regions outside the United States, of cyber intrusions into utilities, followed by extortion demands. We suspect, but cannot confirm, that some of these attackers had the benefit of inside knowledge. We have information that cyber attacks have been used to disrupt power equipment in several regions outside the United States. In at least one case, the disruption caused a power outage affecting multiple cities. We do not know who executed these attacks or why, but all involved intrusions through the Internet.”

According to Mr. Donahue, the CIA actively and thoroughly considered the benefits and risks of making this information public, and came down on the side of disclosure.

I’ll bet. There’s nothing like a vague, unsubstantiated rumor to forestall reasoned discussion. But, of course, everyone is writing about it anyway.

SANS’s Alan Paller is happy to add details:

In the past two years, hackers have in fact successfully penetrated and extorted multiple utility companies that use SCADA systems, says Alan Paller, director of the SANS Institute, an organization that hosts a crisis center for hacked companies. “Hundreds of millions of dollars have been extorted, and possibly more. It’s difficult to know, because they pay to keep it a secret,” Paller says. “This kind of extortion is the biggest untold story of the cybercrime industry.”

And to up the fear factor:

The prospect of cyberattacks crippling multicity regions appears to have prompted the government to make this information public. The issue “went from ‘we should be concerned about this’ to ‘this is something we should fix now,’ ” said Paller. “That’s why, I think, the government decided to disclose this.”

More rumor:

An attendee of the meeting said that the attack was not well-known through the industry and came as a surprise to many there. Said the person who asked to remain anonymous, “There were apparently a couple of incidents where extortionists cut off power to several cities using some sort of attack on the power grid, and it does not appear to be a physical attack.”

And more hyperbole from someone in the industry:

Over the past year to 18 months, there has been “a huge increase in focused attacks on our national infrastructure networks, . . . and they have been coming from outside the United States,” said Ralph Logan, principal of the Logan Group, a cybersecurity firm.

It is difficult to track the sources of such attacks, because they are usually made by people who have disguised themselves by worming into three or four other computer networks, Logan said. He said he thinks the attacks were launched from computers belonging to foreign governments or militaries, not terrorist groups.

I’m more than a bit skeptical here. To be sure—fake staged attacks aside—there are serious risks to SCADA systems (Ganesh Devarajan gave a talk at DefCon this year about some potential attack vectors), although at this point I think they’re more a future threat than present danger. But this CIA tidbit tells us nothing about how the attacks happened. Were they against SCADA systems? Were they against general-purpose computers, maybe Windows machines? Insiders may have been involved, so was this a computer security vulnerability at all? We have no idea.

Cyber-extortion is certainly on the rise; we see it at Counterpane. Primarily it’s against fringe industries—online gambling, online gaming, online porn—operating offshore in countries like Bermuda and the Cayman Islands. It is going mainstream, but this is the first I’ve heard of it targeting power companies. Certainly possible, but is that part of the CIA rumor or was it tacked on afterwards?

And here’s a list of power outages. Which ones were hacker-caused? Some details would be nice.

I’d like a little bit more information before I start panicking.

EDITED TO ADD (1/23): Slashdot thread.

Posted on January 22, 2008 at 2:24 PM

Security of Adult Websites Compromised

This article claims the software that runs the back end of either 35% or 80%-95% of adult websites (depending on which part of the article you read) has been compromised, and that the adult industry is hushing this up. Like many of these sorts of stories, there’s no evidence that the bad guys have the personal information database. The vulnerability only means that they could have it.

Does anyone know about this?

Slashdot thread.

Posted on December 28, 2007 at 7:54 AM

Security-Breach Notification Laws

Interesting study on the effects of security-breach notification laws in the U.S.:

This study surveys the literature on changes in the information security world and significantly expands upon it with qualitative data from seven in-depth discussions with information security officers. These interviews focused on the most important factors driving security investment at their organizations and how security breach notification laws fit into that list. Often missing from the debate is that, regardless of the risk of identity theft and alleged consumer apathy towards notices, the simple fact of having to publicly notify causes organizations to implement stronger security standards that protect personal information.

The interviews showed that security breaches drive information exchange among security professionals, causing them to engage in discussions about information security issues that may arise at their and others’ organizations. For example, we found that some CSOs summarize news reports from breaches at other organizations and circulate them to staff with “lessons learned” from each incident. In some cases, organizations have a “that could have been us” moment, and patch systems with similar vulnerabilities to the entity that had a breach.

Breach notification laws have significantly contributed to heightened awareness of the importance of information security throughout all levels of a business organization, and to the development of a level of cooperation among different departments within an organization that resulted from the need to monitor data access for the purposes of detecting, investigating, and reporting breaches. CSOs reported that breach notification duties empowered them to implement new access controls, auditing measures, and encryption. Aside from the organization’s own efforts at complying with notification laws, reports of breaches at other organizations help information officers maintain that sense of awareness.

Posted on December 12, 2007 at 1:53 PM

More on the California Voting Machine Review

This is a follow-on to this post. What’s new is that the source code reviews are now available.

I haven’t had the chance to review the reports. Matt Blaze has a good summary on his blog:

We found significant, deeply-rooted security weaknesses in all three vendors’ software. Our newly-released source code analyses address many of the supposed shortcomings of the red team studies, which have been (quite unfairly, I think) criticized as being “unrealistic”. It should now be clear that the red teams were successful not because they somehow “cheated,” but rather because the built-in security mechanisms they were up against simply don’t work properly. Reliably protecting these systems under operational conditions will likely be very hard.

I just read Matt Bishop’s description of the miserable schedule and support that the California Secretary of State’s office gave to the voting-machine review effort:

The major problem with this study is time. Although the study did not start until mid-June, the end date was set at July 20, and the Secretary of State said that under no circumstances would it be extended.

[…]

The second problem was lack of information. In particular, various documents did not become available until July 13, too late to be of any value to the red teams, and the red teams did not have several security-related documents. Further, some software that would have materially helped the study was never made available.

Matt Blaze, who led the team that reviewed the Sequoia code, had similar things to say:

Reviewing that much code in less than two months was, to say the least, a huge undertaking. We spent our first week (while we were waiting for the code to arrive) setting up infrastructure, including a Trac Wiki on the internal network that proved invaluable for keeping everyone up to speed as we dug deeper and deeper into the system. By the end of the project, we were literally working around the clock.

It seems that we have a new problem to worry about: the Secretary of State has no clue how to get a decent security review done. Perversely, it was good luck that the voting machines tested were so horribly bad that the reviewers found vulnerabilities despite a ridiculous schedule—one month simply isn’t reasonable—and egregious foot-dragging by vendors in providing needed materials.

Next time, we might not be so lucky. If one vendor sees he can avoid embarrassment by stalling delivery of his most vulnerable source code for four weeks, we might end up with the Secretary of State declaring that the system survived vigorous testing and therefore is secure. Given that refusing cooperation incurred no penalty in this series of tests, we can expect vendors to work that angle more energetically in the future.

The Secretary of State’s own web page gives top billing to the need “to restore the public’s confidence in the integrity of the electoral process,” while the actual security of the machines is relegated to second place.

We need real security evaluations, not feel-good fake tests. I wish this were more the former than the latter.

EDITED TO ADD (8/4): California Secretary of State Bowen’s certification decisions are online.

She has totally decertified the ES&S Inkavote Plus system, used in L.A. County, because of ES&S noncompliance with the Top to Bottom Review. The Diebold and Sequoia systems have been decertified and conditionally recertified. The same was done with one Hart Intercivic system (system 6.2.1). (Certification of the Hart system 6.1 was voluntarily withdrawn.)

To those who thought she was staging this review as security theater, this seems like evidence to the contrary. She wants to do the right thing, but has no idea how to conduct a security review.

Another article.

EDITED TO ADD (8/4): The Diebold software is pretty bad.

EDITED TO ADD (8/5): Ed Felten comments:

It is interesting (at least to me as a computer security guy) to see how often the three companies made similar mistakes. They misuse cryptography in the same ways: using fixed unchangeable keys, using ciphers in ECB mode, using a cyclic redundancy code for data integrity, and so on. Their central tabulators use poorly protected database software. Their code suffers from buffer overflows, integer overflow errors, and format string vulnerabilities. They store votes in a way that compromises the secret ballot.
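Felten’s point about cyclic redundancy codes is worth making concrete. A CRC involves no secret, so it only catches accidental corruption: anyone who can alter a record can recompute a matching CRC, whereas forging a keyed MAC requires the key. A minimal Python sketch, with a hypothetical record format and key:

```python
import hashlib
import hmac
import zlib

record = b"precinct=12;candidate=A;votes=1000"    # hypothetical vote record
tampered = b"precinct=12;candidate=A;votes=9000"

# A CRC detects accidental corruption, but it uses no secret, so an attacker
# who can change the data can simply recompute a matching checksum.
forged_crc = zlib.crc32(tampered)
print("CRC still verifies after tampering:", forged_crc == zlib.crc32(tampered))

# A keyed MAC (HMAC-SHA256 here) can only be recomputed by someone holding
# the key, so tampering without the key is detectable.
key = b"hypothetical-secret-key"

def mac(data: bytes) -> bytes:
    return hmac.new(key, data, hashlib.sha256).digest()

attacker_forgery = hmac.new(b"attacker-guess", tampered, hashlib.sha256).digest()
print("MAC still verifies after tampering:",
      hmac.compare_digest(attacker_forgery, mac(tampered)))
```

The same asymmetry applies to the fixed, unchangeable keys Felten mentions: once such a key leaks, every machine that shares it loses its integrity protection.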

And Avi Rubin comments:

As I read the three new reports, I could not help but marvel at the fact that so many places in the US are using these machines. When it comes to prescription medications, we perform extensive tests before drugs hit the market. When it comes to aviation, planes are held to standards and tested before people fly on them. But, it seems that the voting machines we are using are even more poorly designed and poorly implemented than I had realized.

He’s right, of course.

Posted on August 3, 2007 at 12:55 PM

California Voting Machine Audit Results

The state of California conducted a security review of their electronic voting machines earlier this year. This was a serious review, with real security researchers getting access to the source code. The report was issued last week, and the researchers were able to compromise all three machines—by Diebold Election Systems, Hart Intercivic, and Sequoia Voting Systems—multiple ways. (They said they could probably find more ways, if they had more time.)

Final report and details about the audit here. Good blog entries here and here. We don’t know what California will do now.

This is no surprise, really. The notion that electronic voting machines were somehow more secure than every other computer system ever built was ridiculous from the start. And the claim by machine manufacturers that releasing their source code would hurt the security of the machines was—like all these sorts of claims—really an attempt to prevent embarrassment to the company.

Not everyone gets this, unfortunately. And not everyone involved in voting:

Letting the hackers have the source codes, operating manuals and unlimited access to the voting machines “is like giving a burglar the keys to your house,” said Steve Weir, clerk-recorder of Contra Costa County and head of the state Association of Clerks and Election Officials.

No. It’s like giving burglars the schematics, installation manuals, and unlimited access to your front door lock. If your lock is good, it will survive the burglar having that information. If your lock isn’t good, the burglar will get in.

I have two essays on this, from 2004: “Why Election Technology is Hard,” and “Electronic Voting Machines.” This essay—“Voting and Technology”—was written in 2000.

EDITED TO ADD (7/31): Another article.

EDITED TO ADD (8/2): Good commentary.

Posted on July 31, 2007 at 10:57 AM
