Entries Tagged "disclosure"


Scientists Banned from Revealing Details of Car-Security Hack

The UK has banned researchers from revealing details of security vulnerabilities in car locks. In 2008, Philips brought a similar suit against researchers who broke the Mifare chip. That time, they lost. This time, Volkswagen sued and won.

This is bad news for security researchers. (Remember back in 2001 when security researcher Ed Felten sued the RIAA in the US to be able to publish his research results?) We’re not going to improve security unless we’re allowed to publish our results. And we can’t start suppressing scientific results just because a big corporation doesn’t like what they do to its reputation.

EDITED TO ADD (8/14): Here’s the ruling.

Posted on August 1, 2013 at 6:37 AM

The Effectiveness of Privacy Audits

This study concludes that there is a benefit to forcing companies to undergo privacy audits: “The results show that there are empirical regularities consistent with the privacy disclosures in the audited financial statements having some effect. Companies disclosing privacy risks are less likely to incur a breach of privacy related to unintentional disclosure of privacy information; while companies suffering a breach of privacy related to credit cards are more likely to disclose privacy risks afterwards. Disclosure after a breach is negatively related to privacy breaches related to hacking, and disclosure before a breach is positively related to breaches concerning insider trading.”

Posted on July 9, 2013 at 12:17 PM

On Securing Potentially Dangerous Virology Research

Abstract: The problem of securing biological research data is a difficult and complicated one. Our ability to secure data on computers is not robust enough to ensure the security of existing data sets. Lessons from cryptography illustrate that neither secrecy measures, such as deleting technical details, nor national solutions, such as export controls, will work.

Science and Nature have each published papers on the H5N1 virus in humans after considerable debate about whether the research results in those papers could help terrorists create a bioweapon. This notion of “dual use” research is an important one for the community, and one that will sooner or later become critical. Perhaps these two papers are not dangerous in the wrong hands, but eventually there will be research results that are.

My background is in cryptography and computer security. I cannot comment on the potential value or harm from any particular piece of biological research, but I can discuss what works and what does not to keep research data secure. The cryptography and computer security communities have been wrestling for decades now with dual-use research: for example, whether to publish new Windows (Microsoft Corporation) vulnerabilities that can be immediately used to attack computers but whose publication helps us make the operating system more secure in the long run. From this experience, I offer five points to the virology community.

First, security based on secrecy is inherently fragile. The more secrets a system has, the less secure it is. A door lock that has a secret but unchangeable locking mechanism is less secure than a commercially purchased door lock with an easily changeable key. In cryptography, this is known as Kerckhoffs’ principle: Put all your secrecy into the key and none into the cryptographic algorithm. The key is unique and easily changeable; the algorithm is system-wide and much more likely to become public. In fact, algorithms are deliberately published so that they get analyzed broadly. The lesson for dual-use virology research is that it is risky to base your security on keeping research secret. Militaries spend an enormous amount of money trying to maintain secret research laboratories, and even they do not always get security right. Once secret data become public, there is no way to go back.
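To make Kerckhoffs’ principle concrete, here is a minimal sketch in Python, using only the standard library; it is my illustration, not part of the original essay. The algorithm (HMAC with SHA-256) is public and has been analyzed for years, while all of the secrecy lives in a short key that can be replaced at any time. The message and key below are, of course, made up.

```python
import hashlib
import hmac
import secrets

# The algorithm is public and standardized; only the key is secret.
key = secrets.token_bytes(32)        # easy to change if it is ever exposed
message = b"illustrative research data record"

# Anyone can read how HMAC-SHA-256 works; without the key, the tag is useless.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()
print("authentication tag:", tag)

# Verification uses the same public algorithm and a constant-time comparison.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
assert hmac.compare_digest(tag, expected)
```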

Second, omitting technical details from published research is a poor security measure. We tried this in computer security with regard to vulnerabilities, announcing general information but not publishing specifics. The problem is that once the general information is announced, it is much easier for another researcher to replicate the results and generate the details. This is probably even more true in virology research than in computer security research, where the very existence of a result can provide much of the road map to that result.

Third, technical difficulty as a security measure has only short-term value. Technology only gets better; it never gets worse. To believe that some research cannot be replicated by amateurs because it requires equipment only available to state-of-the-art research institutions is short-sighted at best. What is impossible today will be a Ph.D. thesis in 20 years, and what was a Ph.D. thesis 20 years ago is a high-school science fair project today.

Fourth, securing research data in computer networks is risky at best. If you read newspapers, you know the current state of the art in computer security: Everything gets hacked. Cyber criminals steal money from banks. Cyber spies steal data from military computers. Although people talk about H5N1 research in terms of securing the research papers, that is largely a red herring; even if no papers existed, the research data would still be on a network-connected computer somewhere.

Not all computers are hacked and not all data gets stolen, but the risks are there. There are two basic types of threats in cyberspace. There are the opportunists: for example, criminals who want to break into a retail merchant’s system and steal a thousand credit card numbers. Against these attackers, relative security is what matters. Because the criminals do not care whom they attack, you are safe if you are more secure than other networks. The other type of threat is a targeted attack. These are attackers who, for whatever reason, want to attack a particular network. The buzzword in Internet security for this is “advanced persistent threat.” It is almost impossible to secure a network against a sufficiently skilled and tenacious adversary. All we can do is make the attacker’s job harder.

This does not mean that all virology data will be stolen via computer networks, but it does mean that, once the existence of that data becomes public knowledge, you should assume that the bad guys will be able to get their hands on it.

Lastly, national measures that prohibit publication will not work in an international community, especially in the Internet age. If either Science or Nature had refused to publish the H5N1 papers, they would have been published somewhere else. Even if some countries stop funding—or ban—this sort of research, it will still happen in another country.

The U.S. cryptography community saw this in the 1970s and early 1980s. At that time, the National Security Agency (NSA) controlled cryptography research, which included denying funding for research, classifying results after the fact, and using export-control laws to limit what ended up in products. This was the pre-Internet world, and it worked for a while. In the 1980s they gave up on classifying research, because an international community arose. The limited ability for U.S. researchers to get funding for block-cipher cryptanalysis merely moved that research to Europe and Asia. The NSA continued to limit the spread of cryptography via export-control laws; the U.S.-centric nature of the computer industry meant that this was effective. In the 1990s they gave up on controlling software because the international online community became mainstream; this period was called “the Crypto Wars.” Export-control laws did prevent Microsoft from embedding cryptography into Windows for over a decade, but they did nothing to prevent products made in other countries from filling the market gaps.

Today, there are no restrictions on cryptography, and many U.S. government standards are the result of public international competitions. Right now the National Institute of Standards and Technology is working on a new Secure Hash Algorithm standard. When it is announced next year, it will be the product of a public call for algorithms that resulted in 64 submissions from over a dozen countries and then years of international analysis. The practical effects of unrestricted research are seen in the computer security you use today: on your computer, as you browse the Internet and engage in commerce, and on your cell phone and other smart devices. Sure, the bad guys make use of this research, too, but the beneficial uses far outweigh the malicious ones.
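As a small aside of my own (not in the original essay): the eventual winner of that competition, Keccak, was standardized as SHA-3 and now ships in ordinary standard libraries, so the output of that open international process is a one-line call away. A minimal sketch in Python:

```python
import hashlib

# SHA-3, the result of NIST's public international competition, sits in the
# standard library next to the older SHA-2 family.
print(hashlib.sha3_256(b"public algorithms, secret keys").hexdigest())
print(hashlib.sha256(b"public algorithms, secret keys").hexdigest())
```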

The computer security community has also had to wrestle with these dual-use issues. In the early days of public computing, researchers who discovered vulnerabilities would quietly tell the product vendors so as to not also alert hackers. But all too often, the vendors would ignore the researchers. Because the vulnerability was not public, there was no urgency to fix it. Fixes might go into the next product release. Researchers, tired of this, started publishing the existence of vulnerabilities but not the details. Vendors, in response, tried to muzzle the researchers. They threatened them with lawsuits and belittled them in the press, calling the vulnerabilities only theoretical and not practical. The response from the researchers was predictable: They started publishing full details, and sometimes even code, demonstrating the vulnerabilities they found. This was called “full disclosure” and is the primary reason vendors now patch vulnerabilities quickly. Faced with published vulnerabilities that they could not pretend did not exist and that the hackers could use, they started building internal procedures to quickly issue patches. If you use Microsoft Windows, you know about “patch Tuesday,” the once-a-month automatic download and installation of security patches.

Once vendors started taking security patches seriously, the research community (university researchers, security consultants, and informal hackers) moved to something called “responsible disclosure.” Now it is common for researchers to alert vendors before publication, giving them a month or two head start to release a security patch. But without the threat of full disclosure, responsible disclosure would not work, and vendors would go back to ignoring security vulnerabilities.

Could a similar process work for viruses? That is, could the researchers who create these viruses work in concert with the people who develop vaccines, so that vaccines become available at the same time as the original results are released? Certainly this is not easy in practice, but perhaps it is a goal to work toward.

Limiting research, either through government classification or legal threats from vendors, has a chilling effect. Why would professors or graduate students choose cryptography or computer security if they were going to be prevented from publishing their results? Once this sort of research slows down, the increasing ignorance hurts us all.

On the other hand, the current vibrant fields of cryptography and computer security are a direct result of our willingness to publish methods of attack. Making and breaking systems are one and the same; you cannot learn one without the other. (Some universities even offer classes in computer virus writing.) Cryptography is better, and computers and networks are more secure, because our communities openly publish details on how to attack systems.

Virology is not computer science. A biological virus is not the same as a computer virus. A vulnerability that affects every individual copy of Windows is not as bad as a vulnerability that affects every individual person. Still, the lessons from computer security are valuable to anyone considering policies intended to encourage life-saving research in virology while at the same time preventing that research from being used to cause harm. This debate will not go away; it will only get more urgent.

This essay was originally published in Science.

EDITED TO ADD (7/14): Related article: “What Biology Can Learn from Infosec.”

Posted on June 29, 2012 at 6:35 AM

The Vulnerabilities Market and the Future of Security

Recently, there have been several articles about the new market in zero-day exploits: new and unpatched computer vulnerabilities. It’s not just software companies, who sometimes pay bounties to researchers who alert them to security vulnerabilities so they can fix them. And it’s not only criminal organizations, who pay for vulnerabilities they can exploit. Now there are governments, and companies that sell to governments, who buy vulnerabilities with the intent of keeping them secret so they can exploit them.

This market is larger than most people realize, and it’s becoming even larger. Forbes recently published a price list for zero-day exploits, along with the story of a hacker who received $250K from “a U.S. government contractor.” (At first I didn’t believe the story or the price list, but I have been convinced that they both are true.) Forbes also published a profile of a company called Vupen, whose business is selling zero-day exploits. Other companies doing this range from startups like Netragard and Endgame to large defense contractors like Northrop Grumman, General Dynamics, and Raytheon.

This is very different than in 2007, when researcher Charlie Miller wrote about his attempts to sell zero-day exploits; and a 2010 survey implied that there wasn’t much money in selling zero days. The market has matured substantially in the past few years.

This new market perturbs the economics of finding security vulnerabilities. And it does so to the detriment of us all.

I’ve long argued that the process of finding vulnerabilities in software systems increases overall security. This is because the economics of vulnerability hunting favored disclosure. As long as the principal gain from finding a vulnerability was notoriety, publicly disclosing vulnerabilities was the only obvious path. In fact, it took years for our industry to move from a norm of full disclosure—announcing the vulnerability publicly and damn the consequences—to something called “responsible disclosure”: giving the software vendor a head start in fixing the vulnerability. Changing economics is what made the change stick: instead of just hacker notoriety, a successful vulnerability finder could land some lucrative consulting gigs, and being a responsible security researcher helped. But regardless of the motivations, a disclosed vulnerability is one that—at least in most cases—is patched. And a patched vulnerability makes us all more secure.

This is why the new market for vulnerabilities is so dangerous; it results in vulnerabilities remaining secret and unpatched. That it’s even more lucrative than the public vulnerabilities market means that more hackers will choose this path. And unlike the previous reward of notoriety and consulting gigs, it gives software programmers within a company the incentive to deliberately create vulnerabilities in the products they’re working on—and then secretly sell them to some government agency.

No commercial vendors perform the level of code review that would be necessary to detect, and prove mal-intent for, this kind of sabotage.

Even more importantly, the new market for security vulnerabilities results in a variety of government agencies around the world that have a strong interest in those vulnerabilities remaining unpatched. These range from law-enforcement agencies, like the FBI and the German police, who are trying to build targeted Internet surveillance tools, to intelligence agencies, like the NSA, which is trying to build mass Internet surveillance tools, to military organizations, which are trying to build cyber-weapons.

All of these agencies have long had to wrestle with the choice of whether to use newly discovered vulnerabilities to protect or to attack. Inside the NSA, this was traditionally known as the “equities issue,” and the debate was between the COMSEC (communications security) side of the NSA and the SIGINT (signals intelligence) side. If they found a flaw in a popular cryptographic algorithm, they could either use that knowledge to fix the algorithm and make everyone’s communications more secure, or they could exploit the flaw to eavesdrop on others—while at the same time allowing even the people they wanted to protect to remain vulnerable. This debate raged through the decades inside the NSA. From what I’ve heard, by 2000, the COMSEC side had largely won, but things flipped completely around after 9/11.

The whole point of disclosing security vulnerabilities is to put pressure on vendors to release more secure software. It’s not just that they patch the vulnerabilities that are made public—the fear of bad press makes them implement more secure software development processes. It’s another economic process; the cost of designing software securely in the first place is less than the cost of the bad press after a vulnerability is announced plus the cost of writing and deploying the patch. I’d be the first to admit that this isn’t perfect—there’s a lot of very poorly written software still out there—but it’s the best incentive we have.
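As a back-of-the-envelope sketch of that incentive, with entirely hypothetical numbers (the essay supplies none), the comparison looks like this:

```python
# Hypothetical costs for a single product release; the figures are
# illustrative only and not drawn from the essay.
secure_design_cost = 400_000      # extra up-front design and code review
bad_press_cost = 1_500_000        # estimated brand damage after a public vulnerability
patch_cost = 250_000              # writing, testing, and deploying the fix

if secure_design_cost < bad_press_cost + patch_cost:
    print("Cheaper to design it securely in the first place")
else:
    print("Cheaper to ship now and absorb the fallout later")
```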

We’ve always expected the NSA, and those like them, to keep the vulnerabilities they discover secret. We have been counting on the public community to find and publicize vulnerabilities, forcing vendors to fix them. With the rise of these new pressures to keep zero-day exploits secret, and to sell them for exploitation, there will be even less incentive for software vendors to ensure the security of their products.

As the incentive for hackers to keep their vulnerabilities secret grows, the incentive for vendors to build secure software shrinks. As a recent EFF essay put it, this is “security for the 1%.” And it makes the rest of us less safe.

This essay previously appeared on Forbes.com.

Edited to add (6/6): Brazilian Portuguese translation here.

EDITED TO ADD (6/12): This presentation makes similar points as my essay.

Posted on June 1, 2012 at 6:48 AM

Recent Developments in Full Disclosure

Last week, I had a long conversation with Robert Lemos over an article he was writing about full disclosure. He had noticed that companies have recently been reacting more negatively to security researchers publishing vulnerabilities about their products.

The debate over full disclosure is as old as computing, and I’ve written about it before. Disclosing security vulnerabilities is good for security and good for society, but vendors really hate it. It results in bad press, forces them to spend money fixing vulnerabilities, and comes out of nowhere. Over the past decade or so, we’ve had an uneasy truce between security researchers and product vendors. That truce seems to be breaking down.

Lemos believes the problem is that because today’s research targets aren’t traditional computer companies—they’re phone companies, or embedded system companies, or whatnot—they’re not aware of the history of the debate or the truce, and are responding more viscerally. For example, Carrier IQ threatened legal action against the researcher who outed it, and only backed down after the EFF got involved. I am reminded of the reaction of locksmiths to Matt Blaze’s vulnerability disclosures about lock security; they thought he was evil incarnate for publicizing hundred-year-old security vulnerabilities in lock systems. And just last week, I posted about a full-disclosure debate in the virology community.

I think Lemos has put his finger on part of what’s going on, but that there’s more. I think that companies, both computer and non-computer, are trying to retain control over the situation. Apple’s heavy-handed retaliation against researcher Charlie Miller is an example of that. On one hand, Apple should know better than to do this. On the other hand, it’s acting in the best interest of its brand: the fewer researchers looking for vulnerabilities, the fewer vulnerabilities it has to deal with.

It’s easy to believe that if only people wouldn’t disclose problems, we could pretend they didn’t exist, and everything would be better. Certainly this is the position taken by the DHS over terrorism: public information about the problem is worse than the problem itself. It’s similar to Americans’ willingness to give both Bush and Obama the power to arrest and indefinitely detain any American without any trial whatsoever. It largely explains the common public backlash against whistle-blowers. What we don’t know can’t hurt us, and what we do know will also be known by those who want to hurt us.

There’s some profound psychological denial going on here, and I’m not sure of the implications of it all. It’s worth paying attention to, though. Security requires transparency and disclosure, and if we willingly give that up, we’re a lot less safe as a society.

Posted on December 6, 2011 at 7:31 AM

Full Disclosure in Biology

The debate over full disclosure in computer security has been going on for the better part of two decades now. The stakes are much higher in biology:

The virus is an H5N1 avian influenza strain that has been genetically altered and is now easily transmissible between ferrets, the animals that most closely mimic the human response to flu. Scientists believe it’s likely that the pathogen, if it emerged in nature or were released, would trigger an influenza pandemic, quite possibly with many millions of deaths.

In a 17th floor office in the same building, virologist Ron Fouchier of Erasmus Medical Center calmly explains why his team created what he says is “probably one of the most dangerous viruses you can make”—and why he wants to publish a paper describing how they did it. Fouchier is also bracing for a media storm. After he talked to ScienceInsider yesterday, he had an appointment with an institutional press officer to chart a communication strategy.

Of course, there’s value to the research:

“These studies are very important,” says biodefense and flu expert Michael Osterholm, director of the Center for Infectious Disease Research and Policy at the University of Minnesota, Twin Cities. The researchers “have the full support of the influenza community,” Osterholm says, because there are potential benefits for public health. For instance, the results show that those downplaying the risks of an H5N1 pandemic should think again, he says.

Knowing the exact mutations that make the virus transmissible also enables scientists to look for them in the field and take more aggressive control measures when one or more show up, adds Fouchier. The study also enables researchers to test whether H5N1 vaccines and antiviral drugs would work against the new strain.

And we know how badly this sort of security works:

Osterholm says he can’t discuss details of the papers because he’s an NSABB member. But he says it should be possible to omit certain key details from controversial papers and make them available to people who really need to know. “We don’t want to give bad guys a road map on how to make bad bugs really bad,” he says.

Posted on November 30, 2011 at 12:28 PM

Open-Source Software Feels Insecure

At first glance, this seems like a particularly dumb opening line of an article:

Open-source software may not sound compatible with the idea of strong cybersecurity, but….

But it’s not. Open source does sound like a security risk. Why would you want the bad guys to be able to look at the source code? They’ll figure out how it works. They’ll find flaws. They’ll—in extreme cases—sneak back-doors into the code when no one is looking.

Of course, these statements rely on the erroneous assumptions that security vulnerabilities are easy to find, and that proprietary source code makes them harder to find. And that secrecy is somehow aligned with security. I’ve written about this several times in the past, and there’s no need to rewrite the arguments again.

Still, we have to remember that the popular wisdom is that secrecy equals security, and open-source software doesn’t sound compatible with the idea of strong cybersecurity.

Posted on June 2, 2011 at 12:11 PM

New Siemens SCADA Vulnerabilities Kept Secret

SCADA systems—computer systems that control industrial processes—are one of the ways a computer hack can directly affect the real world. Here, the fears multiply. It’s not bad guys deleting your files, or getting your personal information and taking out credit cards in your name; it’s bad guys spewing chemicals into the atmosphere and dumping raw sewage into waterways. It’s Stuxnet: centrifuges spinning out of control and destroying themselves. Never mind how realistic the threat is, it’s scarier.

Last week, a researcher was successfully pressured by the Department of Homeland Security not to disclose details “before Siemens could patch the vulnerabilities.”

Beresford wouldn’t say how many vulnerabilities he found in the Siemens products, but said he gave the company four exploit modules to test. He believes that at least one of the vulnerabilities he found affects multiple SCADA-system vendors, which share “commonality” in their products. Beresford wouldn’t reveal more details, but says he hopes to do so at a later date.

We’ve been living with full disclosure for so long that many people have forgotten what life was like before it was routine.

Before full disclosure was the norm, researchers would discover vulnerabilities in software and send details to the software companies—who would ignore them, trusting in the security of secrecy. Some would go so far as to threaten the researchers with legal action if they disclosed the vulnerabilities.

Later on, researchers announced that particular vulnerabilities existed, but did not publish details. Software companies would then call the vulnerabilities “theoretical” and deny that they actually existed. Of course, they would still ignore the problems, and occasionally threaten the researcher with legal action. Then, of course, some hacker would create an exploit using the vulnerability—and the company would release a really quick patch, apologize profusely, and then go on to explain that the whole thing was entirely the fault of the evil, vile hackers.

I wrote that in 2007. Siemens is doing it right now:

Beresford expressed frustration that Siemens appeared to imply the flaws in its SCADA systems gear might be difficult for a typical hacker to exploit because the vulnerabilities unearthed by NSS Labs “were discovered while working under special laboratory conditions with unlimited access to protocols and controllers.”

There were no “‘special laboratory conditions’ with ‘unlimited access to the protocols,'” Beresford wrote Monday about how he managed to find flaws in Siemens PLC gear that would allow an attacker to compromise them. “My personal apartment on the wrong side of town where I can hear gunshots at night hardly defines a special laboratory.” Beresford said he purchased the Siemens controllers with funding from his company and found the vulnerabilities, which he says hackers with bad intentions could do as well.

That’s precisely the point. Me again from 2007:

Unfortunately, secrecy sounds like a good idea. Keeping software vulnerabilities secret, the argument goes, keeps them out of the hands of the hackers…. But that assumes that hackers can’t discover vulnerabilities on their own, and that software companies will spend time and money fixing secret vulnerabilities. Both of those assumptions are false. Hackers have proven to be quite adept at discovering secret vulnerabilities, and full disclosure is the only reason vendors routinely patch their systems.

With the pressure off, Siemens is motivated to deal with the PR problem and ignore the underlying security problem.

Posted on May 24, 2011 at 5:50 AM

34 SCADA Vulnerabilities Published

It’s hard to tell how serious this is.

Computer security experts who examined the code say the vulnerabilities are not highly dangerous on their own, because they would mostly just allow an attacker to crash a system or siphon sensitive data, and are targeted at operator viewing platforms, not the backend systems that directly control critical processes. But experts caution that the vulnerabilities could still allow an attacker to gain a foothold on a system to find additional security holes that could affect core processes.

Posted on April 1, 2011 at 6:58 AM

